consumer: handler blocks and connection is stale
My message handler needs quite a long time to process each message, so I enable async handling of the message as follows:
def handler(message):
    message.enable_async()
    # process it for a long time
    deal_with_task()
    message.finish()
and my nsq.Reader is set up like this:
nsq.Reader(
    message_handler=self.handler,
    lookupd_http_addresses=NSQ_LOOKUP_HOST,
    channel=LISTEN_CHANNEL,
    topic=LISTEN_TOPIC,
    lookupd_poll_interval=100,
    msg_timeout=3600,
    max_tries=3,
    max_in_flight=1,
)
nsq.run()
In some cases the program gets an error like this:
WARNING:nsq.client:[127.0.0.1:4152] connection is stale (298.42s), closing
ERROR:nsq.async:uncaught exception in data event
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/nsq/async.py", line 275, in _read_body
self.trigger(event.DATA, conn=self, data=data)
File "/usr/local/lib/python2.7/site-packages/nsq/event.py", line 85, in trigger
ev(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/nsq/async.py", line 477, in _on_data
self.send(protocol.nop())
File "/usr/local/lib/python2.7/site-packages/nsq/async.py", line 281, in send
self.stream.write(data)
File "/usr/local/lib/python2.7/site-packages/tornado/iostream.py", line 377, in write
self._check_closed()
File "/usr/local/lib/python2.7/site-packages/tornado/iostream.py", line 885, in _check_closed
raise StreamClosedError(real_error=self.error)
StreamClosedError: Stream is closed
I don’t know what’s wrong with the program, and I am sure there is no uncaught error while processing the task. The error happens while the nsq client is listening to nsqd. If anyone needs more code, I can paste it here.
Yes. The argument to pass to nsq.Reader() is msg_timeout.

Yes, except for where you’re invoking finish(). In general it’s not safe to call a method that interacts with a Tornado IO loop (as finish() does) from a thread other than the thread that IO loop is running on (in this case, the thread the Reader is running on). You don’t want to use Python threads directly, but rather Tornado’s executor facilities. An example is here. You’ll want to invoke finish() in the callback you pass to run_background().
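
Roughly, the pattern looks like the sketch below. This is not the linked example verbatim; it just offloads the blocking call to a concurrent.futures.ThreadPoolExecutor and uses IOLoop.add_callback() to get back onto the IO loop thread. deal_with_task() and the Reader arguments are the placeholders from your snippet above.

# Sketch only: run the blocking work on a worker thread, then schedule
# finish()/requeue() back onto the IO loop thread with add_callback(),
# which is the one IOLoop method that is safe to call from other threads.
# On Python 2.7 the "futures" backport provides concurrent.futures.
from concurrent.futures import ThreadPoolExecutor

import nsq
import tornado.ioloop

executor = ThreadPoolExecutor(max_workers=1)
io_loop = tornado.ioloop.IOLoop.current()


def handler(message):
    message.enable_async()
    future = executor.submit(deal_with_task)  # runs on a worker thread

    def on_done(f):
        # Called on the worker thread when the task completes, so hop back
        # onto the IO loop thread before touching the message.
        if f.exception() is None:
            io_loop.add_callback(message.finish)
        else:
            io_loop.add_callback(message.requeue)

    future.add_done_callback(on_done)


nsq.Reader(
    message_handler=handler,
    lookupd_http_addresses=NSQ_LOOKUP_HOST,
    channel=LISTEN_CHANNEL,
    topic=LISTEN_TOPIC,
    msg_timeout=3600,
    max_in_flight=1,
)
nsq.run()

The key point is the same either way: the message is always finished or requeued on the IO loop’s own thread, so the Reader’s stream never gets touched from the worker thread.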
@alpaker Thanks for the reply.