
Using the Elasticsearch-Kibana-Fluentd logging solution


I noticed the recent addition that enables the EKF stack. I installed the new beta version of microk8s locally through snap:

$ snap list
Name                  Version            Rev   Tracking  Publisher     Notes
microk8s              v1.13.3            399   beta      canonical✓    classic

And after enabling the EKF stack through:

$ microk8s.enable fluentd

elasticsearch, fluentd, and kibana seem to start up just fine in the cluster.
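
For reference, a quick way to double-check that all three components actually reached a Running state (a sketch; the exact pod names in your cluster will differ):

$ microk8s.kubectl get pods -n kube-system | grep -E 'elasticsearch|fluentd|kibana'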

Now, when I browse to Kibana through:

http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana#/management/kibana/index?_g=()

the interface loads fine.

However, in the Kibana interface, under the Discover tab, I have the message:

Couldn't find any Elasticsearch data
You'll need to index some data into Elasticsearch before you can create an index pattern. Learn how.

I can click the ‘Check for new data’ button, but to no avail.

I have a few pods that log only to stdout (i.e., console.log output from a Node.js pod). Should fluentd already pick that up and push it to ES?
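
One way to rule out the container runtime as the culprit is to check whether kubectl itself can already see that stdout output (the pod name below is a placeholder; substitute one of your own pods):

# placeholder pod name
$ microk8s.kubectl logs <nodejs-pod-name> --tail=5

If the console.log lines show up there, the logs are being captured on disk, and the question becomes whether fluentd can reach them.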

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
ktsakalozos commented, Feb 11, 2019

Thank you for pointing me in the right direction, @WRidder.

You can see fluentd.conf under /etc/fluent/config.d/ once you get a shell on the fluentd pod with something similar to this:

microk8s.kubectl exec -it -n kube-system fluentd-es-v2.2.0-xcf5c -- /bin/bash
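
Once inside, something along these lines shows which paths fluentd tails (the file names under config.d vary between addon versions, so treat this as a sketch):

# run inside the fluentd pod
ls /etc/fluent/config.d/
grep -rn 'path ' /etc/fluent/config.d/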

If I read this configuration correctly, fluentd looks under /var/log/containers for container logs. In MicroK8s the logs under /var/log/containers are symlinks to files under /var/log/pods, which are in turn symlinks to files under /var/snap/microk8s/common/var/lib/docker/containers. As pointed out in https://github.com/kubernetes/minikube/issues/876#issuecomment-270308041, fluentd needs to follow the symlinks to reach the log files, and therefore the right host paths have to be mounted at the exact same locations inside the fluentd pod.
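
To verify that chain on the MicroK8s host, resolving one of the symlinks end to end is enough (the log file name here is hypothetical):

$ ls -l /var/log/containers | head -n 3
$ readlink -f /var/log/containers/<some-container>.log

readlink -f resolves every link in the chain, so you can confirm the final target sits under /var/snap/microk8s/common/var/lib/docker/containers and is therefore covered by the hostPath mounts shown below.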

0 reactions
WRidder commented, Feb 10, 2019

Since my main concern was the stdout logging of the containers, I changed the volume path definitions of varlibdockercontainers as follows (as per this comment):

    spec:
      containers:
      - env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        image: k8s.gcr.io/fluentd-elasticsearch:v2.2.0
        imagePullPolicy: IfNotPresent
        name: fluentd-es
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/snap/microk8s/common/var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /etc/fluent/config.d
          name: config-volume
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: fluentd-es
      serviceAccountName: fluentd-es
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/snap/microk8s/common/var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 420
          name: fluentd-es-config-v0.1.5
        name: config-volume

This seems to work fine; I'm receiving the output from my pods in the logstash-* index in ES.

I'm not entirely sure what the intent behind the varlog mount is. Is that meant to capture the output of the node itself?
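
For anyone verifying the same result: with the API-server proxy used for Kibana above, the Elasticsearch indices can be listed directly (the elasticsearch-logging service name is an assumption here, inferred from the kibana-logging naming):

$ curl 'http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/indices?v'

A logstash-YYYY.MM.DD index with a growing docs.count means fluentd is shipping logs successfully.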
