
Limited number of consumers per process

See original GitHub issue

Since the driver uses Nan::AsyncQueueWorker for background job scheduling, we end up using the built-in libuv thread pool. That means that, by default, there are only 4 threads in the pool, and this number cannot be increased beyond 128 threads.
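The sizing rule above can be sketched as a small helper. This is a hypothetical function for illustration only, not a real libuv API; it mirrors the behavior described in the issue (default of 4 threads, hard cap of 128 in the libuv versions of that era):

```javascript
// Hypothetical mirror of libuv's thread pool sizing, as described above.
function effectiveThreadpoolSize(env) {
  const raw = parseInt(env.UV_THREADPOOL_SIZE, 10);
  if (Number.isNaN(raw) || raw <= 0) {
    return 4; // libuv default when the variable is unset or invalid
  }
  return Math.min(raw, 128); // hard cap described in the issue
}

console.log(effectiveThreadpoolSize({}));                           // 4
console.log(effectiveThreadpoolSize({ UV_THREADPOOL_SIZE: '16' })); // 16
console.log(effectiveThreadpoolSize({ UV_THREADPOOL_SIZE: '500' })); // 128
```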

Each consumer in flowing mode submits a ConsumerConsumeLoop job, which blocks and occupies one background thread completely. This means the number of consumers per process is limited to 4 (3 in practice, since at least one thread from the pool must be kept free for other tasks).
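The capacity math above can be written out as a sketch; the reserved-thread count of 1 is the issue author's estimate, not a documented constant:

```javascript
// Each flowing-mode consumer pins one libuv pool thread, so the usable
// consumer count is the pool size minus threads reserved for other work.
function maxFlowingConsumers(poolSize, reservedThreads) {
  return Math.max(0, poolSize - reservedThreads);
}

console.log(maxFlowingConsumers(4, 1)); // 3 — the default pool described above
```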

Another possible way to use up all the threads in the pool is to call consume(cb) in non-flowing mode many times on a topic with no incoming messages: each call creates a ConsumerConsume job that occupies a pool thread until a message arrives, blocking all other operations that might be in flight (producing, metadata requests, and so on).

I’m wondering whether you think this might be a problem. In our use case we don’t fork a worker per consumer group; instead we create all of the consumers in every worker process and rely on Kafka rebalancing to assign individual partitions to workers, since it’s easier and better for failover (every worker is replaceable by any other one).

Do you think this will hit you too at some point? I’ve created this issue mostly to get your thoughts on it.

Issue Analytics

  • State: closed
  • Created 7 years ago
  • Comments:8 (1 by maintainers)

Top GitHub Comments

jdconley commented, May 9, 2017

Thank you @webmakersteve! That unclogged my pipes. Much easier than rewriting my tests to use IPC. 👍

webmakersteve commented, May 9, 2017

Before queueing any asynchronous work, set process.env.UV_THREADPOOL_SIZE to a value at least 2 greater than the number of consumers you plan to have running concurrently.
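A minimal sketch of that workaround, assuming a hypothetical worker that plans to run six consumers (the node-rdkafka require is shown commented out, since it needs a running broker to do anything useful):

```javascript
// Size the libuv pool before any threadpool-backed work is queued.
// PLANNED_CONSUMERS is illustrative; use your real planned consumer count.
const PLANNED_CONSUMERS = 6;
process.env.UV_THREADPOOL_SIZE = String(PLANNED_CONSUMERS + 2);

// Only load threadpool-backed modules after the variable is set, e.g.:
// const Kafka = require('node-rdkafka');

console.log(process.env.UV_THREADPOOL_SIZE); // '8'
```

libuv reads UV_THREADPOOL_SIZE the first time the pool is used, which is why the variable must be set before any asynchronous work is queued.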


Top Results From Across the Web

  • In Apache Kafka why can't there be more consumer instances ...
  • Chapter 4. Kafka Consumers: Reading Data from Kafka
  • How to parallelise Kafka consumers | by Jhansi Karee - Medium
  • 5 Common Pitfalls When Using Apache Kafka - Confluent
  • 7 mistakes when using Apache Kafka | by Michał Matłoka
