Kafka consumer metrics values are always NaN
We have upgraded our Micrometer version to 1.1.0 to get the KafkaConsumerMetrics
in our Spring Boot 2.1.0 application.
But apparently all the Kafka consumer metrics are being collected and only have NaN
as their value. A few people have reported the issue on Stack Overflow.
You can refer to the simple project created by @j-tim; I also have the same issue with my metrics.
What could be the reason? Could you please help us with this issue? Thank you!!
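For context, on Micrometer 1.1.x the consumer metrics come from the JMX-based KafkaConsumerMetrics binder, which Spring Boot 2.1 auto-configures. Here is a minimal sketch of binding it by hand, assuming a Kafka consumer is already running in the same JVM so its kafka.consumer MBeans exist (the class name and the use of SimpleMeterRegistry are illustrative, not from the issue):

```java
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.kafka.KafkaConsumerMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class KafkaConsumerMetricsSketch {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // KafkaConsumerMetrics reads the kafka.consumer JMX MBeans that the
        // Kafka client registers; if no consumer is live in this JVM, the
        // binder finds nothing to bind (or the bound gauges report NaN).
        new KafkaConsumerMetrics().bindTo(registry);

        // Print whatever was bound, so you can see which values are NaN.
        for (Meter meter : registry.getMeters()) {
            meter.measure().forEach(measurement ->
                    System.out.println(meter.getId() + " -> " + measurement.getValue()));
        }
    }
}
```

Dumping the bound meters this way makes it easy to check whether the MBeans are missing entirely or present but reporting NaN.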
Thank you for the response. Is there a planned date for a patch release?
We are seeing a similar problem when a consumer is attached to a topic on which no messages are being received (we have a topic on which we send time-travel events for testing to different apps, and sometimes this feature is not in use 😉).
Here is one of the error messages:
failed to send metrics to influx: {"error":"partial write: unable to parse 'kafka_consumer_fetch_latency_max,client_id=consumer-1,[our_custom_tags_left_out],metric_type=gauge value=-∞ 1541775345266': invalid number
I guess we are getting “negative infinity” because no message has been processed yet, or does this value have to be given to InfluxDB in a different way? For the time being we have switched it off.
I don’t know whether this is related to this “NaN” issue or should be handled in a separate issue. Thanks for looking into this!
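To illustrate why InfluxDB rejects that line: -∞ and NaN are not valid numbers in the Influx line protocol, so a max gauge that has seen no samples yet can poison the whole write. Below is a hypothetical sketch of guarding a self-registered gauge against non-finite values; fetchLatencyMax() and the metric name are assumed placeholders, not Micrometer's own Kafka binding:

```java
import java.util.function.DoubleSupplier;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class FiniteGaugeSketch {

    // Hypothetical source of the raw value; returns NEGATIVE_INFINITY when
    // no message has been processed yet, mimicking the report above.
    static double fetchLatencyMax() {
        return Double.NEGATIVE_INFINITY;
    }

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Map non-finite values (NaN, +/-Infinity) to 0 before they reach a
        // backend whose line protocol rejects them as "invalid number".
        DoubleSupplier raw = FiniteGaugeSketch::fetchLatencyMax;
        Gauge.builder("kafka.consumer.fetch.latency.max", raw,
                        s -> {
                            double v = s.getAsDouble();
                            return Double.isFinite(v) ? v : 0.0;
                        })
                .register(registry);

        System.out.println(registry.get("kafka.consumer.fetch.latency.max")
                .gauge().value()); // prints 0.0 instead of -Infinity
    }
}
```

Whether 0 is the right substitute (versus simply not publishing the gauge until a sample exists) depends on how your dashboards treat missing data.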