Socket disconnected
After upgrading to kafka-python 1.3.1, I get these errors 2-5 times an hour:
<BrokerConnection host=HOST_NAME/1.2.3.4 port=9092>: socket disconnected
Based on the error message alone, I'm not sure why this happens, or whether the producer loses messages.
BrokerConnection does not auto-retry, no. Retry logic is upstream and depends on exception type and context.
KafkaProducer retries failed requests up to a certain number of times, configured by the retries setting. KafkaConsumer will generally refresh metadata and retry - possibly against a new partition leader - when a broker connection fails.
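For reference, a minimal sketch of configuring producer-side retries with kafka-python; the broker address, topic name, and retry counts here are placeholders, not values from this issue:

```python
from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(
    bootstrap_servers="HOST_NAME:9092",  # placeholder broker address
    retries=5,              # resend a failed request up to 5 times
    retry_backoff_ms=500,   # wait between retry attempts
    acks="all",             # require all in-sync replicas to acknowledge the write
)

future = producer.send("my-topic", b"payload")
try:
    metadata = future.get(timeout=30)  # raises KafkaError once retries are exhausted
except KafkaError as exc:
    # at this point the message may be lost; the application decides how to react
    print("send failed after retries:", exc)
```

If the connection drops mid-request and all retries fail, the send future surfaces the error, so unhandled failures are where messages can actually be lost.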
I think this issue is causing random failures on my end too. I end up with a consumer that looks healthy but isn't consuming anything. The broker shows it as "rebalancing" (which I think is a bug on their side; the consumer is dead as far as I'm concerned). And it's a pain because I don't have a reliable way to detect that state and at least spawn a new consumer.
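Not from the thread, but one crude workaround is a watchdog around poll(): if no records arrive for a while, tear the consumer down and rejoin the group with a fresh one. A minimal sketch assuming kafka-python, a steadily busy topic, and placeholder topic/group/broker names; on a quiet topic this would trigger false restarts:

```python
import time
from kafka import KafkaConsumer

def build_consumer():
    # placeholder topic, group id, and broker address
    return KafkaConsumer(
        "my-topic",
        group_id="my-group",
        bootstrap_servers="HOST_NAME:9092",
    )

consumer = build_consumer()
last_message_at = time.monotonic()
STALL_SECONDS = 300  # assumed threshold: treat 5 minutes of silence as a stalled consumer

while True:
    records = consumer.poll(timeout_ms=1000)
    if records:
        last_message_at = time.monotonic()
        for tp, batch in records.items():
            for record in batch:
                pass  # process each record here
    elif time.monotonic() - last_message_at > STALL_SECONDS:
        # crude recovery: close the stuck consumer and rejoin the group with a new one
        consumer.close()
        consumer = build_consumer()
        last_message_at = time.monotonic()
```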