Possible memory leak while sending a message with headers
Hi guys,
Environment Information
- OS: CentOS (AWS), kernel 4.14.181-142.260.amzn2.x86_64
- Node version: v10.11.0
- NPM version: 6.4.1
- C++ toolchain: g++ (GCC) 7.3.1 20180712 (Red Hat 7.3.1-9)
- node-rdkafka version: 2.9.1
Steps to Reproduce
We have a system running on the EKS (AWS) platform. The Node process runs in a pod with cluster mode enabled. The system had been running for half a year without any memory problem, but recently we started observing a memory leak. Through code analysis we managed to isolate the problem, which is apparently triggered when we add a header to a Kafka message. That is the only code diff we have, and note that the problem is reproduced even with a null header.
To replicate the issue:
Create a simple Producer client and send a message with a header (the value doesn't matter; even null reproduces the memory leak). The memory grows very slowly but constantly; our scale is ~250 messages per second. We use the Prometheus node exporter to collect node statistics.
The chart shows only a short period of time, but if you let it run for a few days the memory doesn't stop growing.
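For clarity, here is a minimal sketch of the reproduction; the broker address and topic name are placeholders, not our production values, and the 4 ms interval approximates our ~250 messages per second:

const Kafka = require('node-rdkafka');

const producer = new Kafka.Producer({
  'metadata.broker.list': 'broker:9092',   // placeholder broker
  'dr_cb': true
});

producer.connect();
producer.setPollInterval(100);             // let librdkafka surface events and delivery reports

producer.on('ready', () => {
  setInterval(() => {
    producer.produce(
      'test-topic',                                      // placeholder topic
      null,                                              // partition: default partitioner
      Buffer.from(JSON.stringify({ ts: Date.now() })),   // payload
      null,                                              // key
      Date.now(),                                        // timestamp
      null,                                              // opaque
      null                                               // headers: passing this argument, even as null, triggers the growth
    );
  }, 4);                                                 // ~250 messages per second
});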
node-rdkafka Configuration Settings
We have two MSK Kafka brokers with TLS connections.
Producer:
this.kproducer = new Kafka.Producer({
  'metadata.broker.list': this.producers.toString(),
  'message.send.max.retries': 10,
  'retry.backoff.ms': 1000,
  'compression.codec': 'snappy',
  'linger.ms': this.linger_ms,
  // TLS connection to the MSK brokers
  'security.protocol': 'ssl',
  'ssl.ca.location': 'kafka_ssl/ssl.ca',
  'socket.keepalive.enable': true,
  // delivery reports are enabled only when DEBUG is set
  'dr_cb': process.env.DEBUG
});
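For completeness, the producer is connected and polled before producing; the sketch below is illustrative wiring, not our exact code:

this.kproducer.connect();

// poll periodically so librdkafka can surface events and delivery reports
this.kproducer.setPollInterval(100);

this.kproducer.on('ready', () => {
  // messages are produced only after the producer is ready
});

// delivery reports are emitted only when 'dr_cb' above is truthy (i.e. DEBUG is set)
this.kproducer.on('delivery-report', (err, report) => {
  if (err) {
    console.error('delivery failed', err);
  }
});

this.kproducer.on('event.error', (err) => {
  console.error('producer error', err);
});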
The code for sending a message
Without the memory leak:
return this.kproducer.produce(
// Topic to send the message to
this.successTopics,
// optionally we can manually specify a partition for the message
// this defaults to -1 - which will use librdkafka's default partitioner (consistent random for keyed messages, random for unkeyed messages)
murmur.murmur2(event.id) % this.partitions,
// Message to send. Must be a buffer
Buffer.from(JSON.stringify(event)),
// for keyed messages, we also specify the key - note that this field is optional
"",
// you can send a timestamp here. If your broker version supports it,
// it will get added. Otherwise, we default to 0
Date.now(),
// you can send an opaque token here, which gets passed along
// to your delivery reports
);
With the memory leak:
return this.kproducer.produce(
// Topic to send the message to
this.successTopics,
// optionally we can manually specify a partition for the message
// this defaults to -1 - which will use librdkafka's default partitioner (consistent random for keyed messages, random for unkeyed messages)
murmur.murmur2(event.id) % this.partitions,
// Message to send. Must be a buffer
Buffer.from(JSON.stringify(event)),
// for keyed messages, we also specify the key - note that this field is optional
"",
// you can send a timestamp here. If your broker version supports it,
// it will get added. Otherwise, we default to 0
Date.now(),
// you can send an opaque token here, which gets passed along
// to your delivery reports
null,
null, // note, the header could be [{"header-name":"header-value"}]
);
I tried many experiments, sending different header values with different payload sizes; it seems to have no effect on the rate of memory growth. What matters is the mere fact that a header argument is sent at all.
I tried troubleshooting it using heap dump snapshots, but without much success.
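For reference, periodic snapshots can be taken along these lines (a sketch; v8.writeHeapSnapshot() requires Node >= 11.13, so on Node 10 an external module such as heapdump would be needed instead). Note that V8 heap snapshots only cover the JavaScript heap, so growth on the native librdkafka side may not show up in them, which could explain the lack of success:

const v8 = require('v8');

// write a .heapsnapshot file (loadable in Chrome DevTools) once per hour
setInterval(() => {
  const file = v8.writeHeapSnapshot();   // returns the generated file name
  console.log('heap snapshot written to', file);
}, 60 * 60 * 1000);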
Additional context
Top GitHub Comments
Hi iradul, I'm still in the middle of running a long-duration test, but at first glance it seems like using an undefined value as the opaque works. The test has been running for a day and I don't observe any significant memory rise. I think the issue can be closed. Thanks a lot for your assistance, it's very much appreciated.
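For anyone hitting the same problem, the change that appears to stop the growth is passing undefined instead of null for the opaque argument while still sending headers. A sketch based on the call above (the header value is only an example):

return this.kproducer.produce(
  this.successTopics,
  murmur.murmur2(event.id) % this.partitions,
  Buffer.from(JSON.stringify(event)),
  "",
  Date.now(),
  undefined,                            // opaque: undefined instead of null appears to avoid the leak
  [{ "header-name": "header-value" }]   // headers still sent
);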
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.