Consumer stops consuming after an unknown idle time
After an idle period of unknown length during which no new messages arrive, the Kafka consumer stops consuming. consumer.connectedTime() still reports a connection time and consumer.isConnected() still returns true, but no messages are received until the service is restarted.
No broker transport failure or any other kind of error occurs; everything appears to be working fine.
Environment Information
- OS: Linux
- Node Version: 10.15.3
- NPM Version: 6.4.1
- node-rdkafka version: 2.7.4
Consumer configuration
'client.id': config.name,
'metadata.broker.list': ['localhost:9092'],
'heartbeat.interval.ms': 5000,
'socket.keepalive.enable': true,
'fetch.wait.max.ms': 100,
'message.max.bytes': 1000,
'enable.auto.commit': false,
'group.id': `${name}_consumer`,
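For reference, a minimal sketch of wiring a configuration like the one above into a consumer, assuming node-rdkafka's KafkaConsumer API; the service name and topic name are placeholders, not values from the original issue. Note that librdkafka expects 'metadata.broker.list' as a comma-separated string rather than an array.

```javascript
// Builds a consumer configuration like the one reported above.
// 'metadata.broker.list' is a comma-separated string in librdkafka,
// so a single broker is written as 'localhost:9092'.
function buildConsumerConfig(name) {
  return {
    'client.id': name,
    'metadata.broker.list': 'localhost:9092',
    'heartbeat.interval.ms': 5000,
    'socket.keepalive.enable': true,
    'fetch.wait.max.ms': 100,
    'message.max.bytes': 1000,
    'enable.auto.commit': false,
    'group.id': `${name}_consumer`,
  };
}

// Hypothetical wiring (requires `npm install node-rdkafka`; 'my-topic' is a placeholder):
// const Kafka = require('node-rdkafka');
// const consumer = new Kafka.KafkaConsumer(buildConsumerConfig('my-service'), {});
// consumer.connect();
// consumer.on('ready', () => {
//   consumer.subscribe(['my-topic']);
//   consumer.consume(); // flowing mode
// });
// consumer.on('data', (msg) => { /* handle message, then commit manually */ });
```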
Issue Analytics
- Created 3 years ago
- Reactions: 2
- Comments: 8
Top Results From Across the Web
Kafka consumers not consuming after being idle for a long time
Hi, I have Kafka consumers consuming data. The command below gives me a consumer group command timeout. kafka-consumer-groups.sh ...

Solving My Weird Kafka Rebalancing Problems & Explaining ...
Since the consumer group is not rebalancing, the crashing consumer reads the crash message repeatedly and restarts multiple times. At this ...

Documentation - Apache Kafka
This may cause unexpected timeouts or delays when using the producer and consumer, since Kafka clients will typically retry automatically on unknown topic ...
Consumer Acknowledgements and Publisher Confirms
Acknowledging on a different channel will result in an "unknown delivery tag" ... Whether the mechanism is used is decided at the time ...
Top GitHub Comments
I had the same problem here (version 2.9.0) using Confluent Cloud. We (my team and I) noticed that, when this loss of connection occurs, an error is passed to the consumer and the consumer.assignments() method returns an empty array. This is probably not the best solution, but it seems to work: we check the error object to trigger a "reconnect task". In flowing mode, the error comes as the first argument of the handler passed to the consumer.consume(handler) method. In non-flowing mode, the error comes on the event.error event. We also check the consumer's assignments before starting the reconnect task.

Please try with the latest version, 2.9.0. There was a regression in librdkafka 1.3.0 that is fixed in 1.4.2. Also, having only one broker in a production setup is not recommended; you should have at least 3 brokers.
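The workaround described in the comment above (trigger a reconnect when an error arrives while the consumer holds no assignments) can be sketched roughly as follows. This is a sketch under assumptions, not the commenter's actual code: shouldReconnect and reconnect are hypothetical helpers, while consumer.assignments(), consumer.consume(), and the 'event.error' event are real node-rdkafka APIs.

```javascript
// Per the workaround above: reconnect only when an error was reported
// AND the consumer currently holds no partition assignments.
function shouldReconnect(err, assignments) {
  return Boolean(err) && Array.isArray(assignments) && assignments.length === 0;
}

// Hypothetical wiring (requires `npm install node-rdkafka`):
// const Kafka = require('node-rdkafka');
// const consumer = new Kafka.KafkaConsumer({ /* config as above */ }, {});
//
// Flowing mode: the error comes as the first argument of the consume handler.
// consumer.consume((err) => {
//   if (shouldReconnect(err, consumer.assignments())) reconnect(consumer);
// });
//
// Non-flowing mode: the error comes on the 'event.error' event.
// consumer.on('event.error', (err) => {
//   if (shouldReconnect(err, consumer.assignments())) reconnect(consumer);
// });
//
// One possible reconnect task: tear the connection down and dial again.
// function reconnect(consumer) {
//   consumer.disconnect(() => consumer.connect());
// }
```

The empty-assignments check matters because an error can also arrive while the consumer still owns partitions (e.g. a transient fetch error), in which case reconnecting would needlessly force a rebalance.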