
Default Acquire Timeout


Running the latest Faktory version from Homebrew (0.9.0-1) with the latest version of this library (2.2.0), I hit a timeout when calling faktory.work() many times. This is easy to trigger by calling await faktory.work() (with or without parameters) roughly 10 or more times on my machine (2016 quad-core MBP). Increasing the timeout to 20s resolves the problem.

https://github.com/jbielick/faktory_worker_node/blob/0934553b83189ab6f10239801df60d042290ee25/lib/connection-pool.js#L19

A few questions as I’m not entirely sure what’s happening here:

  1. Why is the default acquireTimeout 10s?
  2. Should this be increased based on some factor of pool size (e.g. 10000 * size also seems to work)?
  3. Would it be simpler to let the user of faktory.work() pass an acquireTimeout if they know this may be an issue?
  4. Leaving out this timeout entirely defaults it to infinite; what are the drawbacks of that?

I think some form of option 2 would be the most seamless for the consumer.
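One way option 2 could look is sketched below. This is illustrative only: the helper name, the per-connection constant, and the wiring are assumptions, not this library's actual pool internals.

```javascript
// Sketch of option 2: scale the acquire timeout with the pool size rather
// than hard-coding 10s. acquireWithTimeout and perConnectionMs are
// hypothetical names, not part of faktory_worker_node's API.
function acquireWithTimeout(acquireFn, poolSize, perConnectionMs = 10000) {
  const timeoutMs = perConnectionMs * poolSize;
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`acquire timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
  });
  // Whichever settles first wins; always clear the timer so a resolved
  // acquire doesn't leave the process waiting on a long timeout.
  return Promise.race([acquireFn(), timeout]).finally(() => clearTimeout(timer));
}
```

With a scheme like this, a pool of 10 connections would wait up to 100s before giving up, which matches the observation that 10000 * size "also seems to work".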

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 9 (5 by maintainers)

Top GitHub Comments

1 reaction
vigandhi commented, Nov 7, 2018

What I’ve got going on a feature branch is child processes, as described originally. This works as desired. I do agree, however, that simplifying the queue structure may be beneficial in the long run, especially if rate limiting ends up making its way to Faktory core.


0 reactions
jbielick commented, Nov 7, 2018

I don’t think that level of specificity in your setup will be something that this library solves for you. In fact, it’s already possible, right? I think you just need to spin up separate processes for your different worker pools:

process 1: concurrency: 1, queues: ['accounting']
process 2: concurrency: 20, queues: ['stripe']
process 3: concurrency: 10, queues: ['shopify']
process 4: concurrency: 15, queues: ['amazon']
etc.

All the building blocks are there if I understand correctly—you’d just rather have this library handle how many workers are running per queue? I don’t think that’s something that this client should do—it just makes the code more complex for doing something that’s already possible with simpler, smaller building blocks.

In my experience, the fewer queues you have, the better off you’ll be. I think this is generally what people strive for in Sidekiq as well. If the only reason you need this functionality is rate limiting, I’d say that’s a misuse of the concurrency setting—it isn’t intended for rate limiting, and a good distributed rate-limiting library could help out here. Moreover, rate limiting in Sidekiq Enterprise is wonderfully implemented, and it’s likely that functionality will make its way to Faktory at some point. For the time being, I’d like to help you find a way to run these worker pools as you need without adding a configuration that defines concurrency per queue, because I think that’s a very narrow use case.
