
Kafka Exporter has **CrashLoopBackOff** and can't recover.


Please use this only for bug reports. For questions or when you need help, use GitHub Discussions, our #strimzi Slack channel, or our user mailing list.

**Describe the bug**
Kafka Exporter has CrashLoopBackOff and can't recover.

**To Reproduce**
Sometimes I create a Kafka cluster and it works at first, but after a while the Kafka Exporter goes into CrashLoopBackOff and stays in that status.

Readiness probe failed: Get "http://10.130.0.49:9404/metrics": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
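The readiness probe failure above means `/metrics` is not answering within the probe's timeout while the exporter is stuck. As a diagnostic stopgap (not a fix for the underlying exporter bug), Strimzi lets you relax the exporter's probe timings on the `Kafka` custom resource. A minimal sketch; the cluster name and timeout values are illustrative assumptions, not values from this issue:

```yaml
# Sketch: relax Kafka Exporter probe timeouts in a Strimzi Kafka CR.
# Cluster name and timeout values are assumptions for illustration.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ... broker and zookeeper configuration elided ...
  kafkaExporter:
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 15   # raise if /metrics is slow to respond under load
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 15
```

Raising `timeoutSeconds` only buys time for slow scrapes; if the exporter never responds at all, the pod will still end up in CrashLoopBackOff.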

**Expected behavior**
The Kafka Exporter pod stays running and serves metrics without crash looping.

**Environment**

  • Strimzi version: main
  • Installation method: OperatorHub
  • Kubernetes cluster: OpenShift 4.9
  • Infrastructure: BareMetal

**YAML files and logs**

[kafka_exporter] [INFO] 2022/01/04 07:38:01 Starting kafka_exporter (version=1.3.1.redhat-00001, branch=master, revision=eb1f5c4229ce4ca51d64d2034926ce64c60e05e9)
[kafka_exporter] [INFO] 2022/01/04 07:38:01 Build context (go=go1.13, user=worker@pnc-ba-pod-4c2d6e, date=20210708-16:03:34)
[kafka_exporter] [INFO] 2022/01/04 07:38:01 Done Init Clients
[kafka_exporter] [INFO] 2022/01/04 07:38:01 Listening on :9404
[kafka_exporter] [INFO] 2022/01/04 07:38:02 Refreshing client metadata
[kafka_exporter] [INFO] 2022/01/04 07:38:05 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:38:17 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:38:32 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:38:35 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:38:45 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:38:47 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:39:05 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:39:15 concurrent calls detected, waiting for first to finish
[kafka_exporter] [INFO] 2022/01/04 07:39:15 concurrent calls detected, waiting for first to finish

**Issue Analytics**

  • State: closed
  • Created: 2 years ago
  • Comments: 29 (13 by maintainers)

**Top GitHub Comments**

1 reaction
scholzj commented, Aug 2, 2022

Triaged on 2.8.2022: This has to be fixed in the Kafka Exporter. Once it has a new release, we can update Strimzi to use it. We should also add a warning to the docs about this problem (=> e.g. something like If you don’t use consumer groups, it will not work … just with more fancy wording 😮). It can be added for example somewhere here: https://strimzi.io/docs/operators/latest/deploying.html#con-metrics-kafka-exporter-lag-str

CC @PaulRMellor ^^^

We also need to re-open the discussion about the future of the Kafka Exporter, since it has been a long time since its last release or any fixed issues.
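As context for the docs warning proposed above: the exporter's consumer-lag metrics depend on committed consumer group offsets, and in Strimzi the exporter's scope is configured through the `groupRegex` and `topicRegex` fields of the `Kafka` custom resource. A minimal fragment; the regex values shown are illustrative defaults:

```yaml
# Sketch: Kafka Exporter section of a Strimzi Kafka CR.
# groupRegex/topicRegex limit which consumer groups and topics
# the exporter queries; ".*" matches everything.
spec:
  kafkaExporter:
    groupRegex: ".*"
    topicRegex: ".*"
```

If no consumer groups exist at all, there are simply no lag metrics to export, which is the situation the suggested warning would describe.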

1 reaction
scholzj commented, Jun 10, 2022

I don’t think it really uses the consumer groups. It just reads and decodes their content, so it does not trigger their creation.


**Top Results From Across the Web**

  • How to fix "crashLoopBackoff" while creating the kafka container:
    I'm setting up the kafka and zookeeper cluster with high availability. I have 2 kafka brokers (pod1, pod2) and 3 zookeeper pods (pod1, pod2, pod3).
  • Kubernetes CrashLoopBackOff Error: What It Is and How to Fix It:
    This error indicates that a pod failed to start, Kubernetes tried to restart it, and it continued to fail repeatedly. To make sure...
  • You've Encountered CrashLoopBackOff Error – What Now?:
    CrashLoopBackOff error indicates that the pod is repeatedly starting and crashing. This simply means that the pod is stuck in a crash loop...
  • Understanding Kubernetes CrashLoopBackoff Events:
    CrashLoopBackOff is a status message that indicates one of your pods is in a constant state of flux: one or more containers are failing...
  • Troubleshooting Ondat Daemonset 'CrashLoopBackOff' Pod ...:
    The root cause of this is due to the existing Ondat configuration file that is stored at the following location » var/lib/storageos/config.json on...
