fetchOffsets after resetOffsets returns -1 for all partitions
It seems fetchOffsets after resetOffsets keeps returning -1 for all partitions, even though offsets were previously stored.
What I would like to do is log the current offsets whenever a new consumer is added. In our use case, we should always start from the latest offset.
Here’s the code I used for testing (kafkajs version 1.4.4, kafka version 1.0.1):
const kafka = require('kafkajs');

const client = new kafka.Kafka({
  brokers: ['127.0.0.1:9092'],
  clientId: 'test-clientId'
});

const admin = client.admin();
await admin.connect();
await admin.resetOffsets({ groupId: 'test-groupId', topic: 'test-topic' });

// ALL PARTITION OFFSETS EQUAL -1
console.log(
  'FETCH OFFSETS',
  await admin.fetchOffsets({
    groupId: 'test-groupId',
    topic: 'test-topic'
  })
);

const consumer = client.consumer({
  groupId: 'test-groupId'
});
await consumer.subscribe({ topic: 'test-topic' });
await consumer.connect();
await consumer.run({
  eachMessage: async ({ message, partition }) => {
    console.log('RECEIVED MESSAGE', partition, message.offset);
  }
});

const producer = client.producer();
await producer.connect();
await producer.send({
  messages: [{ key: '', value: 'Hello world' }],
  topic: 'test-topic'
});
await producer.disconnect();
await consumer.disconnect();

await admin.resetOffsets({ groupId: 'test-groupId', topic: 'test-topic' });

// ALL PARTITION OFFSETS STILL EQUAL -1
console.log(
  'FETCH OFFSETS',
  await admin.fetchOffsets({
    groupId: 'test-groupId',
    topic: 'test-topic'
  })
);

await admin.disconnect();
If it helps, here’s the stdout after running the previous code block:
{"level":"INFO","timestamp":"2018-11-03T09:37:01.697Z","logger":"kafkajs","message":"[Consumer] Starting","groupId":"test-groupId"}
{"level":"INFO","timestamp":"2018-11-03T09:37:01.698Z","logger":"kafkajs","message":"[ConsumerGroup] Pausing fetching from 1 topics","topics":["test-topic"]}
{"level":"INFO","timestamp":"2018-11-03T09:37:01.712Z","logger":"kafkajs","message":"[Runner] Consumer has joined the group","groupId":"test-groupId","memberId":"test-clientId-70e47b46-abb4-4ce9-bc69-7b60c289d4ee","leaderId":"test-clientId-70e47b46-abb4-4ce9-bc69-7b60c289d4ee","isLeader":true,"memberAssignment":{"test-topic":[2,1,0]},"duration":14}
{"level":"INFO","timestamp":"2018-11-03T09:37:07.748Z","logger":"kafkajs","message":"[Consumer] Stopped","groupId":"test-groupId"}
FETCH OFFSETS [ { partition: 0, offset: '-1' },
{ partition: 1, offset: '-1' },
{ partition: 2, offset: '-1' } ]
{"level":"INFO","timestamp":"2018-11-03T09:37:07.759Z","logger":"kafkajs","message":"[Consumer] Starting","groupId":"test-groupId"}
{"level":"INFO","timestamp":"2018-11-03T09:37:07.767Z","logger":"kafkajs","message":"[Runner] Consumer has joined the group","groupId":"test-groupId","memberId":"test-clientId-91a4b8ee-da97-49c8-afae-c0db0c71f70e","leaderId":"test-clientId-91a4b8ee-da97-49c8-afae-c0db0c71f70e","isLeader":true,"memberAssignment":{"test-topic":[2,1,0]},"duration":8}
RECEIVED MESSAGE 0 15
{"level":"INFO","timestamp":"2018-11-03T09:37:08.821Z","logger":"kafkajs","message":"[Consumer] Stopped","groupId":"test-groupId"}
{"level":"INFO","timestamp":"2018-11-03T09:37:08.824Z","logger":"kafkajs","message":"[Consumer] Starting","groupId":"test-groupId"}
{"level":"INFO","timestamp":"2018-11-03T09:37:08.824Z","logger":"kafkajs","message":"[ConsumerGroup] Pausing fetching from 1 topics","topics":["test-topic"]}
{"level":"INFO","timestamp":"2018-11-03T09:37:08.830Z","logger":"kafkajs","message":"[Runner] Consumer has joined the group","groupId":"test-groupId","memberId":"test-clientId-d2bd49d8-31f4-4561-af03-41adabf9be62","leaderId":"test-clientId-d2bd49d8-31f4-4561-af03-41adabf9be62","isLeader":true,"memberAssignment":{"test-topic":[2,1,0]},"duration":6}
{"level":"INFO","timestamp":"2018-11-03T09:37:14.846Z","logger":"kafkajs","message":"[Consumer] Stopped","groupId":"test-groupId"}
FETCH OFFSETS [ { partition: 0, offset: '-1' },
{ partition: 1, offset: '-1' },
{ partition: 2, offset: '-1' } ]
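For context on the output above: in the Kafka protocol, an offset of -1 from an offset fetch means the group has no concrete committed offset for that partition, so this is the state you should expect right after a reset and before the consumer has had a chance to commit. A minimal sketch of this behavior, assuming a connected kafkajs `admin` instance and that `admin.setOffsets` is available in your version (it requires the group to have no active members):

```javascript
// Pure helper: true when every partition still reports "no committed offset".
function allUncommitted(partitionOffsets) {
  return partitionOffsets.every(({ offset }) => offset === '-1');
}

// Sketch (hypothetical `admin` argument; not run here):
async function demo(admin) {
  const groupId = 'test-groupId';
  const topic = 'test-topic';

  await admin.resetOffsets({ groupId, topic });
  const before = await admin.fetchOffsets({ groupId, topic });
  console.log(allUncommitted(before)); // expected: true, nothing committed yet

  // Commit an explicit offset for partition 0 on behalf of the group.
  await admin.setOffsets({
    groupId,
    topic,
    partitions: [{ partition: 0, offset: '0' }]
  });

  const after = await admin.fetchOffsets({ groupId, topic });
  console.log(allUncommitted(after)); // expected: false, partition 0 now has '0'
}
```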
Issue Analytics
- Created 5 years ago
- Comments: 9 (5 by maintainers)
@kpala issue #205 has your feature request; I'm closing this one for now. Thanks again.
@kpala I think you are not giving your consumer enough time to process the message: you send the message and disconnect right after it. Also, the only way to know when the consumer has joined the group is to listen to the GROUP_JOIN instrumentation event, something like:
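A minimal sketch of listening for that event, assuming the kafkajs consumer instrumentation API (the event name is exposed as `consumer.events.GROUP_JOIN`):

```javascript
// Resolve a promise once the consumer has joined its group, using the
// GROUP_JOIN instrumentation event. `waitForGroupJoin` is a hypothetical
// helper name, not part of kafkajs.
function waitForGroupJoin(consumer) {
  return new Promise(resolve => {
    consumer.on(consumer.events.GROUP_JOIN, event => resolve(event));
  });
}

// Usage (sketch, assuming the consumer from the snippet above):
// await consumer.connect();
// await consumer.subscribe({ topic: 'test-topic' });
// const joined = waitForGroupJoin(consumer);
// await consumer.run({ eachMessage: async () => {} });
// await joined; // group formed; now produce, and allow time for commits
```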