
Kafka consumer Metrics values are always NaN

See original GitHub issue

We upgraded Micrometer to version 1.1.0 to get KafkaConsumerMetrics in our Spring Boot 2.1.0 application.

But apparently all the Kafka consumer metrics are being collected with NaN as their value. A few people have reported the issue on Stack Overflow.

You can refer to the simple project created by @j-tim. I have the same issue with my metrics.

What could be the reason? Could you please help us with this issue? Thank you!
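For context, getting Micrometer 1.1.0 into a Spring Boot 2.1 project is typically just a version bump, since Spring Boot manages the Micrometer dependency and auto-binds KafkaConsumerMetrics (which reads the consumer client's kafka.consumer JMX MBeans). A minimal sketch of the Maven override, assuming the standard micrometer-core coordinates:

```xml
<!-- Override the Spring Boot-managed Micrometer version.
     Spring Boot 2.1.0 already pulls in Micrometer 1.1.x by default,
     so an explicit pin like this is only needed on older Boot versions. -->
<properties>
    <micrometer.version>1.1.0</micrometer.version>
</properties>
```

Note that the JMX-based binder only sees meters once a consumer is actually running in the same JVM, and gauges for an idle consumer can legitimately report NaN.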

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Reactions: 13
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

4 reactions · thirunar commented, Nov 1, 2018

Thank you for the response. Any plan for patch release date?

1 reaction · hgnoyke commented, Nov 20, 2018

We are seeing a similar problem when a consumer is subscribed to a topic on which no messages are being received (we have a topic on which we send time-travel events for testing to different apps, and sometimes do not use this feature 😉).

Here is one of the error messages: failed to send metrics to influx: {"error":"partial write: unable to parse 'kafka_consumer_fetch_latency_max,client_id=consumer-1,[our_custom_tags_left_out],metric_type=gauge value=-∞ 1541775345266': invalid number

Our guess is that we are getting "negative infinity" because no message has been processed yet, or perhaps this value has to be given to InfluxDB in a different way. For the time being we have switched off

I don’t know whether this is related to the “NaN” issue here or should be handled as a separate issue. Thanks for looking into this!
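The "negative infinity" in the error above is consistent with how a "max" statistic behaves over an empty sample window: the identity element of max is negative infinity, and InfluxDB's line protocol rejects non-finite float values as "invalid number". A minimal self-contained sketch of both the cause and a guard (maxOf and toLineProtocolValue are hypothetical helpers for illustration, not Micrometer or InfluxDB API):

```java
public class NonFiniteGaugeDemo {
    // A "max" over an empty sample window (no records fetched yet)
    // conventionally starts at Double.NEGATIVE_INFINITY, the identity
    // element of max -- which is exactly what the InfluxDB error shows.
    static double maxOf(double[] samples) {
        double max = Double.NEGATIVE_INFINITY;
        for (double s : samples) {
            max = Math.max(max, s);
        }
        return max;
    }

    // Hypothetical guard a metrics exporter could apply: skip points
    // whose value is NaN or +/-Infinity instead of writing them.
    static String toLineProtocolValue(double v) {
        return Double.isFinite(v) ? Double.toString(v) : null; // null = skip
    }

    public static void main(String[] args) {
        double idle = maxOf(new double[0]);            // idle consumer: no samples
        System.out.println(idle);                      // prints -Infinity
        System.out.println(toLineProtocolValue(idle)); // prints null (point skipped)
    }
}
```

The same guard logic explains the eventual fix direction: report nothing (or NaN handled registry-side) rather than push a non-finite number to a backend that cannot parse it.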


Top Results From Across the Web

Flink kafka metrics randomly given NaN - Stack Overflow
After updating Flink from 1.10.0 to 1.14.4, some KafkaProducer metrics randomly began to take the value NaN.

kafka.consumer.bytes.consumed.total=NaN causes Error ...
I see that some metrics are reported with a NaN value, which causes an error with an "expected number, but got string" description.

[Solved]-Spring Boot 2.1 Micrometer Kafka consumer metric ...
Spring Boot 2.1 Micrometer Kafka consumer metric statistic COUNT is "NaN" · No label/attribute for some Kafka consumer metrics exposed by Spring boot...

489: Kafka Consumer Record Latency Metric
When latency is calculated as negative, the metric value will be reported as NaN. UNIX epoch time is always represented as...

Custom metric reporters in kafka streams - Google Groups
If they show other values, the issue is likely in your custom metrics reporter. ... are always 0.0 or NaN for the mix,...
