After Publish connections are not returned to the pool
See original GitHub issue

When I use the pub/sub mechanism in Redis Cluster, after a message is published on a channel the connections pile up on one particular node in the cluster, eventually giving me this error:
Error 24 connecting to 127.0.0.1:7006. Too many open files.
My cluster has 3 masters and 3 slaves. I'm attaching a small example script I used to test this behavior [example.py]. I checked the client count with "redis-cli -c -p 7006 client list | wc -l".
Another peculiar thing I noticed: when I run

for f in `seq 1 1000000`; do echo $f; redis-cli -c -p 7006 publish channel $f; done

the connections are returned and no connection hogging is seen (see attached example.txt).
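The symptom matches a client-side pool leak: each publish borrows a connection that is never released, so new commands keep opening sockets until the file-descriptor limit is hit ("Too many open files"). A minimal, hypothetical sketch of the failure mode using a toy pool (not the real redis-py connection pool):

```python
# Toy connection pool illustrating the leak (illustrative only, not redis-py's pool).
class ToyPool:
    def __init__(self, max_connections=5):
        self.max_connections = max_connections
        self._in_use = set()
        self._idle = []
        self._created = 0

    def get_connection(self):
        if self._idle:
            conn = self._idle.pop()
        elif self._created < self.max_connections:
            self._created += 1
            conn = object()  # stand-in for a real socket
        else:
            raise RuntimeError("Too many open connections")
        self._in_use.add(conn)
        return conn

    def release(self, conn):
        self._in_use.remove(conn)
        self._idle.append(conn)


# Leaky pattern: borrow without releasing -> pool exhausted after 5 uses.
pool = ToyPool(max_connections=5)
leaked = [pool.get_connection() for _ in range(5)]
try:
    pool.get_connection()
except RuntimeError as e:
    print(e)  # mirrors the "Too many open files" symptom

# Correct pattern: release after each use -> one connection is reused forever.
pool2 = ToyPool(max_connections=5)
for _ in range(1000):
    conn = pool2.get_connection()
    pool2.release(conn)
print(pool2._created)  # only 1 connection was ever created
```

This is why the plain redis-cli loop above shows no hogging: each redis-cli invocation opens and fully closes its own connection, whereas a long-lived client that never returns connections to its pool accumulates them on one node.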
Issue Analytics
- Created 8 years ago
- Comments: 6 (4 by maintainers)
In my first iteration of the pub/sub support, I found that the return value of a PUBLISH command (the number of clients that received the message) was not accurate: it only counted the clients connected to the specific node the PUBLISH command was sent to. Because of this, compatibility with the existing publish/subscribe API and semantics used by redis-py compatible software would be broken, and redis-py-cluster could not act as a drop-in replacement providing seamless compatibility. The only fix at that time was to have all clients talk to the same node, because that ensured everything would still work as expected.

Since then some things have changed. I now know that you can use the slot hashing mechanism to distribute the clients across all nodes in a predictable way while the pub/sub API still works as expected. However, you then hit the performance problem that still plagues Redis internally and is described in the docs.
Fixed in unstable and included in release 1.2.0