Question

logging with EFK "couldn't find any elasticsearch kubernetes"

Posted November 11, 2021 · 122 views
Tags: Logging, Kubernetes

Hi, I’ve been trying to set up logging with Elasticsearch, Fluentd & Kibana. I’ve followed three different tutorials and I get the same result when I open Kibana to view the logs: “couldn’t find any elasticsearch kubernetes”… Can you help me?


1 answer

Hi, could you show me which tutorial you are using? Have you given Helm charts a try?

If you tell me your deployment steps or the tutorial URL, I can reproduce them in a local k8s cluster using k0s and comment back.

Anyway, I deploy Grafana + Loki with Helm charts, and it is very simple to view and search the cluster logs:

helm install -n monitoring promtail \
--set config.lokiAddress="http://loki:3100/loki/api/v1/push" \
grafana/promtail

helm upgrade --install -n monitoring loki  \
--set persistence.enabled=true \
--set persistence.size=64Gi \
--set config.ingester.chunk_block_size=2048000 \
--set config.ingester.chunk_retain_period=500m \
grafana/loki

I think that in your case, Kibana is pointing at the wrong service name. You need to check your Services, ConfigMaps, and Deployments.
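To see why a wrong service name breaks the Kibana → Elasticsearch connection, recall how Kubernetes names Services in cluster DNS: `<service>.<namespace>.svc.cluster.local`. A minimal Python sketch (the `service_dns` helper is hypothetical, just for illustration):

```python
# Hypothetical helper: builds the in-cluster DNS name that kube-dns/CoreDNS
# assigns to every Service: <service>.<namespace>.svc.cluster.local
def service_dns(name: str, namespace: str = "default") -> str:
    return f"{name}.{namespace}.svc.cluster.local"

# From inside the same namespace the short name "elasticsearch" resolves,
# but a Kibana pod in another namespace must use the fully qualified form:
print(service_dns("elasticsearch", "efk-logging"))
# → elasticsearch.efk-logging.svc.cluster.local
```

So if your Elasticsearch runs in a namespace like `efk-logging` while Kibana's `ELASTICSEARCH_URL` only says `http://elasticsearch:9200`, the lookup fails.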

Do not hesitate to write if you think I can help you.

;)

    • Hi @voidtecnologia,

      I did some tests…

      I installed a single-node Kubernetes cluster with k0s, without Docker, and Fluentd failed every time.

      With a fresh installation of Docker and k0s, Fluentd was unable to parse any logs, so Elasticsearch didn’t show any logstash index entries. All received logs contained many “\\” sequences and error messages…

      \\\\\ error="invalid time format: value = 2021-11-13T22:45:14.295770709+01:00 stdout P 2021-11-13 21:45:14 +0000 [warn]: #0 [in_tail_container_logs]
      \\\\\\\\\\
      

      Again, Elasticsearch received no incoming Fluentd messages.
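      Those parse failures look like a log-format mismatch: Docker's json-file driver writes one JSON object per line, while the containerd/CRI runtime (what k0s uses without Docker) writes a plain-text format that a JSON parser rejects. A rough Python illustration (the sample lines are made up but follow the two formats):

```python
import json

# Docker's json-file log driver: one JSON object per line.
docker_line = '{"log":"hello\\n","stream":"stdout","time":"2021-11-13T21:45:14Z"}'
# containerd/CRI format (k0s without Docker): timestamp, stream, tag, message.
cri_line = '2021-11-13T22:45:14.295770709+01:00 stdout F hello'

print(json.loads(docker_line)["log"])   # parses fine

try:
    json.loads(cri_line)                # the default JSON parser chokes here
except json.JSONDecodeError as exc:
    print("not JSON:", exc)
```

      That would explain both the escaped-backslash garbage and the “invalid time format” warnings in the Fluentd output above.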

      I deployed three nodes on DOKS (DigitalOcean Kubernetes) in the default namespace (for simplicity); some logs were received, but with a time-format error.

      I cloned this repo (from your YouTube tutorial) and applied its manifests…

      The manifests I applied (using the default namespace for simplicity):

      1-efk-logging-ns.yaml

      #
      # not use this namespace
      #---
      # kind: Namespace
      # apiVersion: v1
      # metadata:
      #   name: efk-logging
      

      2-elasticsearch-svc.yaml

      kind: Service
      apiVersion: v1
      metadata:
        name: elasticsearch
        # namespace: efk-logging
        labels:
          app: elasticsearch
      spec:
        selector:
          app: elasticsearch
        clusterIP: None
        ports:
          - port: 9200
            name: rest
          - port: 9300
            name: inter-node
      
      

      3-elasticsearch-sts.yaml

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: es-cluster
        # namespace: efk-logging
      spec:
        serviceName: elasticsearch
        replicas: 3
        selector:
          matchLabels:
            app: elasticsearch
        template:
          metadata:
            labels:
              app: elasticsearch
          spec:
            containers:
            - name: elasticsearch
              image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
              resources:
                  limits:
                    cpu: 1000m
                  requests:
                    cpu: 100m
              ports:
              - containerPort: 9200
                name: rest
                protocol: TCP
              - containerPort: 9300
                name: inter-node
                protocol: TCP
              volumeMounts:
              - name: elasticsearch-data
                mountPath: /usr/share/elasticsearch/data
              env:
                - name: cluster.name
                  value: k8s-logs
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: discovery.seed_hosts #list of master eligible nodes that will seed node discovery process
                  value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
                - name: cluster.initial_master_nodes #list of master eligible nodes that will participate in master election
                  value: "es-cluster-0,es-cluster-1,es-cluster-2"
                - name: ES_JAVA_OPTS
                  value: "-Xms512m -Xmx512m"
            initContainers:
            - name: fix-permissions #change owner of group to elasticsearch, defaults to root
              image: busybox
              command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
              securityContext:
                privileged: true
              volumeMounts:
              - name: elasticsearch-data
                mountPath: /usr/share/elasticsearch/data
            - name: increase-vm-max-map #default OS mmap count can be too low
              image: busybox
              command: ["sysctl", "-w", "vm.max_map_count=262144"]
              securityContext:
                privileged: true
            - name: increase-fd-ulimit #file descriptors, only relevant for macOS and Linux
              image: busybox
              command: ["sh", "-c", "ulimit -n 65536"]
              securityContext:
                privileged: true
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
            labels:
              app: elasticsearch
          spec:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 100Gi
      
      

      4-kibana.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: kibana
        # namespace: efk-logging
        labels:
          app: kibana
      spec:
        ports:
        - port: 5601
        selector:
          app: kibana
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kibana
        # namespace: efk-logging
        labels:
          app: kibana
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: kibana
        template:
          metadata:
            labels:
              app: kibana
          spec:
            containers:
            - name: kibana
              image: docker.elastic.co/kibana/kibana:7.2.0
              resources:
                limits:
                  cpu: 1000m
                requests:
                  cpu: 100m
              env:
                - name: ELASTICSEARCH_URL
                  value: http://elasticsearch:9200
              ports:
              - containerPort: 5601
      
      

      5-fluentd.yaml (fails)

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd
        namespace: default
        labels:
          app: fluentd
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: fluentd
        labels:
          app: fluentd
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        verbs:
        - get
        - list
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: fluentd
      roleRef:
        kind: ClusterRole
        name: fluentd
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: ServiceAccount
        name: fluentd
        namespace: default
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd
        namespace: default
        labels:
          app: fluentd
      spec:
        selector:
          matchLabels:
            app: fluentd
        template:
          metadata:
            labels:
              app: fluentd
          spec:
            serviceAccountName: fluentd
            containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-elasticsearch-1.8
              env:
                - name:  FLUENT_ELASTICSEARCH_HOST
                  value: "elasticsearch"
                  # value: "elasticsearch.efk-logging.svc.cluster.local"
                - name:  FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
                - name: FLUENT_ELASTICSEARCH_SCHEME
                  value: "http"
                - name: FLUENTD_SYSTEMD_CONF
                  value: "disable"
              # resources:
              #   limits:
              #     memory: 512Mi
              #   requests:
              #     cpu: 100m
              #     memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
      

      5-fluentd-c.yaml (updated by me)

      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd
        namespace: default
        labels:
          app: fluentd
      
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: fluentd
        labels:
          app: fluentd
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        verbs:
        - get
        - list
        - watch
      
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd
      roleRef:
        kind: ClusterRole
        name: fluentd
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: ServiceAccount
        name: fluentd
        namespace: default
      
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd
        namespace: default
        labels:
          app: fluentd
      spec:
        selector:
          matchLabels:
            app: fluentd
        template:
          metadata:
            labels:
              app: fluentd
          spec:
            serviceAccount: fluentd
            serviceAccountName: fluentd
            tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
            containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
              env:
                - name:  FLUENT_ELASTICSEARCH_HOST
                  value: "elasticsearch"
                  # value: "elasticsearch.efk-logging.svc.cluster.local"
                - name:  FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
                - name: FLUENT_ELASTICSEARCH_SCHEME
                  value: "http"
                - name: FLUENTD_SYSTEMD_CONF
                  value: disable
                - name: FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT
                  value: "true"
                - name: FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME
                  value: "logstash"
                - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
                  value: /var/log/containers/fluentd-*
                - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
                  value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
                  # value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<message>.*)$/
      
              # resources:
              #   limits:
              #     memory: 512Mi
              #   requests:
              #     cpu: 100m
              #     memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
      
      

      6-counter-en.yaml

      apiVersion: v1
      kind: Pod
      metadata:
        name: counter-en
      spec:
        containers:
        - name: count
          image: busybox
          args: [/bin/sh, -c,
                  'i=0; while true; do echo "$i: Hello, are you collecting my data? $(date)"; i=$((i+1)); sleep 5; done']
      

      7-counter-es.yaml

      apiVersion: v1
      kind: Pod
      metadata:
        name: counter-es
      spec:
        containers:
        - name: count
          image: busybox
          args: [/bin/sh, -c,
                  'i=0; while true; do echo "$i: Buenos dias, donde estan mis mensajes?? $(date)"; i=$((i+1)); sleep 10; done']
      

      The clue was this:

      [....]
                - name: FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT
                  value: "true"
                - name: FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME
                  value: "logstash"
                - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
                  value: /var/log/containers/fluentd-*
                - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
                  value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      [....]
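      The `FLUENT_CONTAINER_TAIL_PARSER_TYPE` value is a Ruby-style regex for the CRI log format. Translated to Python named groups (`(?P<…>)` instead of Ruby's `(?<…>)`), you can check what it captures from a sample line:

```python
import re

# Same pattern as FLUENT_CONTAINER_TAIL_PARSER_TYPE, with Python-style
# named groups ((?P<name>...) instead of Ruby's (?<name>...)).
pattern = re.compile(r'^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$')

line = '2021-11-13T22:45:14.295770709+01:00 stdout F 0: Hello, are you collecting my data?'
m = pattern.match(line)
print(m.group('time'))    # 2021-11-13T22:45:14.295770709+01:00
print(m.group('stream'))  # stdout
print(m.group('log'))     # 0: Hello, are you collecting my data?
```

      The `[^ ]*` part swallows the CRI tag field (`F` for a full line, `P` for a partial one), so only the timestamp, the stream, and the actual message reach Elasticsearch.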
      

      Finally, I did a port-forward to the Kibana Service and got a lot of logstash-* index entries in my Elasticsearch to play with ;)

      Also, I tried fluent-bit and saw similar behavior. I think Fluentd is very dependent on the Kubernetes and Docker versions…

      Do not hesitate to write if you think I can help you.

      ;)
