Limited number of consumers per process
Since the driver uses Nan::AsyncQueueWorker for background job scheduling, we end up using the built-in libuv thread pool. That means that by default there are only 4 threads in the pool, and this number cannot be increased beyond 128 threads.
Each consumer in flowing mode submits a ConsumerConsumeLoop job, which is blocking, so it occupies one background thread completely. This means the number of consumers per process is limited to 4 (3 in practice, since we need to keep at least 1 thread in the pool free for other tasks).
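The thread-budget arithmetic above can be sketched as follows (this assumes libuv's default pool size of 4; the variable names are illustrative, not from the driver):

```javascript
// Sketch of the thread budget described above. Assumes the libuv
// default pool size of 4 and one thread reserved for other async work.
const DEFAULT_UV_THREADPOOL_SIZE = 4; // libuv's built-in default

// Each flowing-mode consumer pins one pool thread with its blocking
// ConsumerConsumeLoop job, so the usable consumer count is:
const reservedThreads = 1; // keep one free for fs, DNS, etc.
const maxFlowingConsumers = DEFAULT_UV_THREADPOOL_SIZE - reservedThreads;

console.log(maxFlowingConsumers); // 3
```

Raising `UV_THREADPOOL_SIZE` moves this ceiling, but only up to libuv's hard limit of 128 threads.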
Another way to use up all the threads in the pool is to call consume(cb) in non-flowing mode many times on a topic with no incoming messages. Each call creates a ConsumerConsume work item that occupies a pool thread until a message arrives, so it blocks every other operation that might be going on (producing, metadata requests, and so on).
I’m wondering whether you think this might be a problem. In our use case we don’t fork a worker per consumer group; instead we create all of the consumers in every worker process and rely on Kafka rebalancing to assign individual partitions to workers, since that is easier and better for failover (every worker is replaceable by any other one).
Do you think this will hit you too at some point? I’ve created this issue mostly to get an understanding of your thoughts on this.
Issue Analytics
- State:
- Created 7 years ago
- Comments: 8 (1 by maintainers)
Top GitHub Comments
Thank you @webmakersteve ! That unclogged my pipes. Much easier than rewriting my tests to use IPC. 👍
Before queueing any asynchronous work, set `process.env.UV_THREADPOOL_SIZE` to a value at least 2 greater than the number of consumers you plan to have running concurrently.
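As a concrete sketch of that advice (the consumer count below is a made-up example), the override must happen before the first piece of threadpool work is queued, because libuv reads the variable only once, when the pool is first used:

```javascript
// Size the libuv pool before ANY async work runs: UV_THREADPOOL_SIZE
// is read once, the first time the thread pool is touched.
const PLANNED_CONSUMERS = 6; // hypothetical number of flowing consumers

// Leave at least 2 spare threads beyond the blocking consume loops,
// so fs, DNS, crypto, etc. are never starved.
process.env.UV_THREADPOOL_SIZE = String(PLANNED_CONSUMERS + 2);

console.log(process.env.UV_THREADPOOL_SIZE); // "8"
```

Note that the value is capped at 128, and setting it after a consumer (or any other threadpool user) has started has no effect.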