MaxPollRecords Not Available on the ConsumerConfig
Description
We have a long-running process tied to our records that can sometimes result in an Application maximum poll interval exceeded error being thrown. There is currently a MaxPollIntervalMs property on the ConsumerConfig that we can set to help with this, but we only encounter the error when receiving a large number of records at once. Based on some articles and documentation I have read, there is a configuration option called max.poll.records that can be set to limit the number of records returned per poll. The default is 500, but I would like to try a smaller value to see if that helps with the error. The issue I have is that there is no property exposed on the ConsumerConfig for the max.poll.records setting. Is this by design, or am I missing something?
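For reference, the poll-interval side of this can be tuned directly on the ConsumerConfig. A minimal sketch, assuming the Confluent.Kafka package; the broker address, group id, and interval value are placeholders:

```csharp
using Confluent.Kafka;

// MaxPollIntervalMs maps to librdkafka's max.poll.interval.ms
// (default 300000 ms). Values below are illustrative only.
var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092", // placeholder
    GroupId = "example-group",           // placeholder
    MaxPollIntervalMs = 600000           // raise from the 300000 ms default
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
```

Raising the interval only buys more time per record; it does not cap how many records arrive between polls, which is what the question is really after.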
Issue Analytics
- State:
- Created 3 years ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
It’s a design choice that mimics the librdkafka API. You could easily create a batch consume method if you want one, which simply consumes up to N messages and then returns them in a collection. It’s not a design flaw or a performance limitation: librdkafka has higher consume throughput than the Java client.
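The batch consume method described above can be sketched as an extension method. This is a minimal, hedged sketch, assuming Confluent.Kafka's `IConsumer<TKey, TValue>` interface; the name `ConsumeBatch` is illustrative, not part of the library:

```csharp
using System;
using System.Collections.Generic;
using Confluent.Kafka;

public static class ConsumerExtensions
{
    // Illustrative helper: consume up to maxBatchSize messages, or until
    // the overall timeout elapses, and return them as a collection.
    public static List<ConsumeResult<TKey, TValue>> ConsumeBatch<TKey, TValue>(
        this IConsumer<TKey, TValue> consumer, int maxBatchSize, TimeSpan timeout)
    {
        var batch = new List<ConsumeResult<TKey, TValue>>(maxBatchSize);
        var deadline = DateTime.UtcNow + timeout;

        while (batch.Count < maxBatchSize)
        {
            var remaining = deadline - DateTime.UtcNow;
            if (remaining <= TimeSpan.Zero)
                break;

            // Consume(TimeSpan) returns null when the timeout expires.
            var result = consumer.Consume(remaining);
            if (result == null)
                break;

            batch.Add(result);
        }

        return batch;
    }
}
```

A caller could then do `var records = consumer.ConsumeBatch(100, TimeSpan.FromSeconds(5));` and process the batch as a unit, achieving the effect of a smaller max.poll.records without any broker-side configuration.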
max.poll.records isn’t relevant to the C# consumer (or any other librdkafka-based client) because messages are delivered to the application one at a time. In the Java client, they are delivered in batches.