Too many OFFSET_OUT_OF_RANGE errors
I have a 5-broker Kafka cluster with ~100 topics, and I'm using KafkaJS to subscribe to 2 topics (one with 400 partitions and one with 10). Consumption always gets stuck with "The requested offset is not within the range of offsets maintained by the server" errors.

These errors occur in an endless loop, and I'm not able to consume any messages. I've also tried resetting the consumer offsets between runs, as well as the fromBeginning option.

I know this is very little information, but can you help me understand what circumstances can cause this error? I can provide more info about my setup and code.
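For context, a broker returns OFFSET_OUT_OF_RANGE when the fetch offset falls outside the partition's current log range, most commonly because retention has already deleted the segment holding the consumer's committed offset. A minimal sketch of that range check (an illustrative helper, not actual KafkaJS or broker code; the numbers are made up):

```javascript
// Illustrative only: mirrors the broker-side range check behind
// OFFSET_OUT_OF_RANGE. logStartOffset moves forward as retention
// deletes old segments; highWatermark marks the newest readable position.
function isOffsetInRange(fetchOffset, logStartOffset, highWatermark) {
  return fetchOffset >= logStartOffset && fetchOffset <= highWatermark;
}

// A committed offset that retention has already deleted is out of range,
// so the consumer loops on the error until the offset is reset.
console.log(isOffsetInRange(100, 500, 900)); // false: segment deleted by retention
console.log(isOffsetInRange(600, 500, 900)); // true
```

If the committed offset is stale like this, the consumer keeps fetching the same invalid position unless an offset reset policy (or an explicit reset) moves it back into range.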
Issue Analytics
- State:
- Created 5 years ago
- Comments: 11 (11 by maintainers)
Top Results From Across the Web

- kafka-node consumer receives offsetOutOfRange error
  I believe it's possible that the app tried to read messages that are no longer available in Kafka. Kafka deletes the old messages...
- [KAFKA-10313] Out of range offset errors leading to offset reset
  Hi, We have been occasionally noticing offset resets happening on the Kafka consumer because of offset out of range error.
- KafkaError OFFSET_OUT_OF_RANGE error - On-Premise
  Hi, I have self-hosted Sentry 21.5.1 and after few months of smooth operations I am seeing error very similar to the issue described...
- Kafka client terminated with OffsetOutOfRangeException
  OffsetOutOfRangeException error message. ... Increase the Kafka retention policy of the topic so that it is longer than the time the Spark ...
- Length offset out of range message when posting in Fusion 360
  ... in Fusion 360 a message appears stating "Error: Length offset out of range." ... Tool offset number is too high for the...
@paambaati there are a couple of fixes in master for this issue; you can use master while disabling the v0.11 APIs (because of the LZ4 problem).

The MaxListenersExceededWarning is a silly bug; it isn't leaking any memory. Node.js just warns when an emitter exceeds its default listener limit, and you probably have a higher concurrency. I will create a new issue and fix this right away. It shouldn't cause any issues besides the annoying warning.

If you are producing in parallel, you can also consider using producer.sendBatch.

Feel free to re-open this issue if you need. Thanks again for the report, this is great for new users.
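The code snippet that originally accompanied the producer.sendBatch suggestion appears to have been lost in scraping. As a hedged sketch, sendBatch accepts a topicMessages array grouping messages per topic; the helper and topic names below are made-up illustrations, not part of KafkaJS:

```javascript
// Sketch: group flat records by topic into the payload shape that
// KafkaJS's producer.sendBatch() expects ({ topicMessages: [...] }).
// Topic names and message contents are hypothetical examples.
function toTopicMessages(records) {
  const byTopic = new Map();
  for (const { topic, value } of records) {
    if (!byTopic.has(topic)) byTopic.set(topic, []);
    byTopic.get(topic).push({ value });
  }
  return [...byTopic.entries()].map(([topic, messages]) => ({ topic, messages }));
}

const topicMessages = toTopicMessages([
  { topic: 'topic-a', value: 'hello' },
  { topic: 'topic-a', value: 'world' },
  { topic: 'topic-b', value: '!' },
]);

// With a connected producer, this single call can replace many
// parallel producer.send() calls:
//   await producer.sendBatch({ topicMessages });
console.log(topicMessages.length); // 2 topics
```

Batching this way reduces the number of concurrent in-flight requests, which is also what triggers the MaxListenersExceededWarning under high parallelism.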
For reference, here is the issue about MaxListenersExceededWarning: #153