Concurrency causing failure with connection limit
Is there a reason the poolSize for the client is set to concurrency + 2? We have concurrency set to 100 and many queues open, and this is causing Faktory to essentially grind to a halt, to the point where using the Web UI does not work. We also use Elixir, and the concurrency there is not a problem.
Unless each Redis connection is blocking while the job is running, this seems like overkill.
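For a rough sense of why this bites at scale, here is a small back-of-the-envelope sketch. Only concurrency = 100 and the concurrency + 2 pool sizing come from this issue; the process count is a hypothetical example.

```typescript
// Rough arithmetic for how fast connections add up when each client
// pool is sized as `concurrency + 2`. The process count is hypothetical;
// only concurrency = 100 and the "+ 2" sizing are taken from the issue.
const concurrency = 100;          // worker concurrency per process (from the issue)
const poolSize = concurrency + 2; // how the client sizes its connection pool
const workerProcesses = 5;        // hypothetical number of worker processes

const totalConnections = workerProcesses * poolSize;
console.log(`Each process opens up to ${poolSize} connections to Faktory.`);
console.log(`${workerProcesses} processes -> up to ${totalConnections} connections total.`);
// With enough processes and queues, this can exhaust the Faktory server's
// connection capacity and leave the Web UI unresponsive, as reported above.
```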
Issue Analytics
- Created: 4 years ago
- Comments: 13 (11 by maintainers)

Thanks @jbielick. Since the max time for FETCH when no job exists is 2 seconds and the acquire timeout is 5 seconds, this should not be too much of an issue as long as I keep it around concurrency * 0.5. Appreciate the change!

Sidekiq <4 used a single fetch thread, which made it very sensitive to Redis network latency. Sidekiq 4+ uses $CONCURRENCY fetchers (each worker thread fetches from Redis directly) to minimize latency. Sidekiq Pro's super_fetch uses 2 threads for $REASONS.

On Thu, Nov 7, 2019 at 11:23 AM Josh Bielick notifications@github.com wrote:
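To make the timing argument concrete, here is a minimal, generic sketch of a bounded pool with an acquire timeout. The TimedPool class, the demo function, and the 50-connection / 100-worker numbers are hypothetical illustrations; only the 2 s FETCH limit, the 5 s acquire timeout, and the concurrency * 0.5 rule of thumb come from the thread. This is not the faktory-worker client's actual pool implementation.

```typescript
// Sketch: if a blocking FETCH holds a connection for at most ~2 s, a pool
// of roughly concurrency * 0.5 with a 5 s acquire timeout should rarely
// time out, because a waiter never waits longer than one FETCH cycle.

class TimedPool<T> {
  private idle: T[];
  private waiters: Array<(conn: T) => void> = [];

  constructor(resources: T[]) {
    this.idle = [...resources];
  }

  // Resolve with a resource, or reject if none frees up within timeoutMs.
  acquire(timeoutMs: number): Promise<T> {
    const conn = this.idle.pop();
    if (conn !== undefined) return Promise.resolve(conn);

    return new Promise<T>((resolve, reject) => {
      const timer = setTimeout(() => {
        this.waiters = this.waiters.filter((w) => w !== waiter);
        reject(new Error(`pool acquire timed out after ${timeoutMs} ms`));
      }, timeoutMs);
      const waiter = (c: T) => {
        clearTimeout(timer);
        resolve(c);
      };
      this.waiters.push(waiter);
    });
  }

  release(conn: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);
    else this.idle.push(conn);
  }
}

// Hypothetical usage: 100 worker loops sharing a pool of 50 "connections".
// Each loop holds a connection for at most ~2 s (standing in for the FETCH
// timeout), so a 5 s acquire timeout leaves plenty of headroom.
async function demo() {
  const pool = new TimedPool(Array.from({ length: 50 }, (_, i) => i));
  const worker = async () => {
    const conn = await pool.acquire(5_000); // acquire timeout: 5 s
    try {
      await new Promise((r) => setTimeout(r, 2_000)); // stand-in for a blocking FETCH (max 2 s)
    } finally {
      pool.release(conn);
    }
  };
  await Promise.all(Array.from({ length: 100 }, () => worker()));
}

demo().catch(console.error);
```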