Default Acquire Timeout
Running the latest Faktory version in brew (0.9.0-1) with the latest version of this library (2.2.0), I seem to hit a timeout when calling faktory.work() many times. This is easy to trigger by calling await faktory.work() (with or without parameters) roughly 10 or more times on my machine (a 2016 quad-core MBP). Increasing the timeout to 20s seems to resolve the problem.
A few questions as I’m not entirely sure what’s happening here:
- Why is the default acquireTimeout 10s?
- Should it be increased based on some factor of pool size (e.g. 10000 * size also seems to work)?
- Would it be simpler to allow the user of faktory.work() to pass in an acquireTimeout if they know this may be an issue?
- Leaving out this timeout entirely defaults it to infinite; what are the drawbacks of that?
I think some form of option 2 would be the most seamless for the consumer.
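For context on what an acquire timeout guards against, here is a minimal, self-contained sketch in plain Node.js. This is not faktory-worker's actual implementation; the pool object and function names are illustrative. It races a pool's acquire() promise against a timer, which is roughly how acquireTimeoutMillis behaves in pool libraries such as generic-pool:

```javascript
// Minimal sketch of an acquire timeout: race the pool's acquire() promise
// against a timer. Illustrative only; not faktory-worker internals.
function acquireWithTimeout(pool, timeoutMs) {
  return Promise.race([
    pool.acquire(),
    new Promise((_, reject) =>
      setTimeout(
        () => reject(new Error('acquire timed out after ' + timeoutMs + 'ms')),
        timeoutMs
      )
    ),
  ]);
}
```

With many worker pools competing for connections, a fixed 10s window can expire before a connection frees up, which would explain why raising the timeout (or scaling it with pool size) makes the error go away.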
Issue Analytics
- State:
- Created 5 years ago
- Comments: 9 (5 by maintainers)
Top Results From Across the Web
SequelizeConnectionAcquireTimeoutError
SequelizeConnectionAcquireTimeoutError means all connections currently in the pool were in use and a request to access a connection took more ...
SequelizeJS connection get timeout frequently - Stack Overflow
Most of the operations are transactions. We get an error: SequelizeConnectionAcquireTimeoutError: Operation timeout. This is our config object.
JDBC Connection Pool Settings (Sun Java System Application ...)
Max Wait Time: amount of time the caller (the code requesting a connection) will wait before getting a connection timeout. The default is ...
The AgroalDatasource class
validationTimeout (Duration): the interval between background validation checks. The default is zero, meaning background validation is not performed.
Set the connection timeout when using Node.js - Google Cloud
acquireTimeoutMillis is the number of milliseconds before a timeout occurs when acquiring a connection from the pool. This is slightly different from ...

What I've got going on a feature branch is child processes, as described originally. This seems to work as desired. I do, however, agree that simplifying the queue structure may be beneficial in the long run, especially if rate limiting ends up making its way into Faktory core.
On Tue, Nov 6, 2018 at 8:20 PM Josh Bielick notifications@github.com wrote:
I don’t think that level of specificity in your setup will be something that this library solves for you. In fact, it’s already possible, right? I think you just need to spin up separate processes for your different worker pools:
process 1
process 2
process 3
process 4
etc.
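A hedged sketch of what each of those processes might look like. The faktory-worker calls are shown in comments since they need a running Faktory server, and the QUEUES/CONCURRENCY environment variables and worker.js file name are illustrative conventions, not part of the library's API:

```javascript
// Sketch: one independent worker pool per OS process. Each process reads
// its own queue set and concurrency from the environment (an illustrative
// convention, not faktory-worker API).
function optionsFromEnv(env) {
  // Translate per-process configuration into worker-pool options.
  return {
    queues: (env.QUEUES || 'default').split(','),
    concurrency: parseInt(env.CONCURRENCY || '5', 10),
  };
}

// In worker.js (one copy per process), roughly:
//   const faktory = require('faktory-worker');
//   faktory.register('MyJob', async () => { /* job body */ });
//   faktory.work(optionsFromEnv(process.env));
//
// Then launch one pool per queue group:
//   QUEUES=critical CONCURRENCY=10 node worker.js   # process 1
//   QUEUES=default  CONCURRENCY=5  node worker.js   # process 2
//   QUEUES=bulk,low CONCURRENCY=2  node worker.js   # process 3
```

Each process is an independent worker pool, so per-queue concurrency is controlled entirely by how you launch the processes, with no library changes needed.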
All the building blocks are there if I understand correctly; you'd just rather have this library handle how many workers are running per queue? I don't think that's something this client should do. It just makes the code more complex for something that's already possible with simpler, smaller building blocks.
In my experience, the fewer queues you have, the better off you'll be. I think this is generally what people strive for in Sidekiq as well. If the only reason you need this functionality is rate limiting, I'd say that's a misuse of the concurrency setting: it isn't intended for rate limiting, and a good distributed rate-limiting library could help you out here. Moreover, rate limiting in Sidekiq Enterprise is wonderfully implemented, and it's likely that functionality will make its way to Faktory at some point. For the time being, I'd like to help you find a way to run these worker pools as you need without adding a configuration that defines concurrency per queue, because I think that's a very narrow use case.
In my experience, the fewer queues you can have, the better off you’ll be. I think this is generally what people strive for in sidekiq as well. If the only reason you need this functionality is because you want rate limiting, I’d say that’s a misuse of the concurrency setting—it isn’t intended for rate limiting and you could find a distributed great rate limiting library to help out here. Moreover, rate limiting in Sidekiq Enterprise is wonderfully implemented and it’s likely that functionality will make its way to Faktory at some point. For the time being, I’d like to help you find a way to run these worker pools as you need to without adding a configuration that defines concurrency per-queue because I think that’s a very narrow use-case.