
Lock contention on high load

See original GitHub issue

From a comment on issue #177

Since it only happens under specific load conditions, it’s hard for me to reproduce it right away, but looking at the logs, I think it’s coming from a producer.send call. The following snippet shows how I’m initializing the producer and how I’m calling the send function.

// (Import assumed; the original snippet omits it.)
const { Kafka } = require('kafkajs');

// Incoming data ... can be considered something like this:
// a large number of small messages, each queued individually with no batching.
for (let i = 0; i < 100000; i++) {
    queueMessage(message, 'specific-topic');
}

// Producer chunk ...
const kafkaClient = new Kafka({
    clientId: `SomeID`,
    brokers: ['10.240.0.6:9092', '10.240.0.7:9092', '10.240.0.20:9092'],
    connectionTimeout: 5000,
    requestTimeout: 60000,
    maxInFlightRequests: 200,
    retry: {
        initialRetryTime: 1000,
        retries: 5
    }
});

// producer.connect() is assumed to be awaited at startup (not shown in the original snippet).
const producer = kafkaClient.producer();

// Sends a single message; on failure, retries up to two more times with an increasing delay.
async function queueMessage(message, topic, retries = 0) {
    try {
        await producer.send({
            topic,
            // TODO -> KafkaJS probable bug? key=null should be present
            messages: [{ key: null, value: JSON.stringify(message) }],
            acks: 1,
            // kafkaUtils.SNAPPY is assumed to resolve to KafkaJS's Snappy compression type
            compression: kafkaUtils.SNAPPY
        });
    } catch (e) {
        if (retries < 2)
            setTimeout(() => queueMessage(message, topic, retries + 1), 2000 * (retries + 1));
    }
}

I’m not using any Promise.all, and the incoming flow is a stream of small messages, so I can’t actually batch them together before sending; it’s more like streaming a lot of messages in realtime, for a messaging application.

And I’m using KafkaJS 1.4.7.
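
Since the comment above describes many single-message producer.send calls rather than batches, one way to soften this load pattern (purely as an illustration, not something the reporter uses) is to coalesce the stream into small micro-batches, because send already accepts an array of messages. In the sketch below, the buffer, enqueue, and flushIntervalMs names are hypothetical, and producer refers to the producer from the snippet above:

// Illustrative micro-batching sketch: buffer incoming messages per topic and
// flush each topic's pending messages as a single producer.send call every few milliseconds.
const buffer = new Map();        // topic -> array of pending KafkaJS message objects
const flushIntervalMs = 10;      // hypothetical flush window

function enqueue(message, topic) {
    if (!buffer.has(topic)) buffer.set(topic, []);
    buffer.get(topic).push({ key: null, value: JSON.stringify(message) });
}

setInterval(async () => {
    for (const [topic, messages] of buffer) {
        if (messages.length === 0) continue;
        buffer.set(topic, []);   // swap in an empty batch before sending
        try {
            await producer.send({ topic, messages, acks: 1 });
        } catch (e) {
            // retry/error handling elided in this sketch
        }
    }
}, flushIntervalMs);

Whether a few milliseconds of coalescing is acceptable depends on the realtime requirements the reporter mentions above.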

@AlirezaSadeghi I moved the investigation to a new issue

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 9 (6 by maintainers)

Top GitHub Comments

1 reaction
tulios commented, Apr 1, 2019

@AlirezaSadeghi 1.5.2 was released with the fix.

1 reaction
AlirezaSadeghi commented, Mar 11, 2019

That would be great! I’m all in on helping fix this issue, because it’s really reducing the reliability of the messaging mechanism.

About the consumers and producers: yes, they are in the same process; there’s basically one consumer and one producer in the same process. (i.e. app.js starts up the producer first and then starts the consumer; the consumer starts consuming messages and sending them on, then messages start coming back to the process, and at that point the producer sends them back to a Kafka topic.)

And initially I was using 1.4.7; I’m giving 1.5.0 a try now to see how it behaves. I’ll also increase those two again, but they are already at 60 seconds or so.
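
For readers skimming the thread, the single-process setup described in this comment (producer started first, then a consumer whose messages eventually flow back out through the producer) looks roughly like the sketch below. The group ID, topic names, and the eachMessage body are assumptions for illustration, and kafkaClient, producer, and queueMessage refer to the earlier snippet:

// Rough sketch of the consume-then-produce loop described above (names are illustrative).
const consumer = kafkaClient.consumer({ groupId: 'some-group' });

async function start() {
    // app.js starts the producer first, then the consumer.
    await producer.connect();
    await consumer.connect();
    await consumer.subscribe({ topic: 'incoming-topic' });

    await consumer.run({
        eachMessage: async ({ message }) => {
            // Processing elided; the result is eventually sent back to a Kafka topic.
            const result = JSON.parse(message.value.toString());
            await queueMessage(result, 'specific-topic');
        }
    });
}

start();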

Read more comments on GitHub >

Top Results From Across the Web

Understanding Lock Contention in WPA
Understanding Lock Contention in Windows Performance Analyzer (WPA) ... Running: This thread is currently on the CPU, performing work ...

Performance score: Lock Contention (.NET) [How-To]
A high rate of lock contention leads to wasted CPU cycles due to spinlock processing, and elevated context-switching overhead when threads begin...

Lock contention in JDBC connection pool when under high load
The lock contention in the JDBC connection pool may render as the inability to connect to the database from the Web UI or...

Reducing Lock Contention | Performance and Scalability
With many threads competing for the global lock, the chance that two threads want the lock at the same time increases, resulting in...

Debugging Lock Contention Performance Issues in C# .NET
In this article, we're going to talk about lock contention, which is a state where one thread is waiting for another while trying...
