Could not reset offsets to earliest to get the stream from beginning for KTable
Hey,
I am using the Spring Cloud Stream Kafka binder to read data into a KStream. For one of the topics, I need to consume from the beginning.
I have tried setting the Kafka offset-reset and start-offset properties, but could not find any references.
Could you please share a sample application.yaml that resets the offset, so that I can consume messages from the topic from the beginning?
Here is the application.yaml I have used:
```yaml
spring.cloud.stream.bindings.input:
  destination: input-topic1
  consumer:
    useNativeDecoding: true
    headerMode: raw
spring.cloud.stream.bindings.output:
  destination: output-topic
  producer:
    useNativeDecoding: true
    headerMode: raw
spring.cloud.stream.bindings.beginningInput:
  destination: beginning-topic
  consumer:
    useNativeDecoding: true
    headerMode: raw
spring.cloud.stream.kafka.streams.bindings.input:
  consumer:
    keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
    valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.output:
  producer:
    keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
    valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.beginningInput:
  consumer:
    keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
    valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
    resetOffsets: true
    startOffset: earliest
spring.cloud.stream.kafka.streams.binder:
  brokers: 127.0.0.1
  zkNodes: 127.0.0.1
  configuration:
    default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
    default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
    commit.interval.ms: 1000
```
I would like to know if there is any error in the configuration I have used.
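One variant worth trying (a sketch, assuming the Kafka Streams binder passes keys in its `configuration` map straight through to the underlying `StreamsConfig`): set the plain Kafka consumer property `auto.offset.reset` at the binder level, rather than relying on the binding-level `resetOffsets`/`startOffset` pair:

```yaml
# Sketch, not a confirmed fix: pass the standard Kafka consumer property
# through the binder's configuration map. Note that auto.offset.reset only
# takes effect when the application id (consumer group) has no committed
# offsets yet for the partition.
spring.cloud.stream.kafka.streams.binder:
  configuration:
    auto.offset.reset: earliest
```

If the group has already committed offsets, this property is ignored by the Kafka client and the committed offsets must be reset explicitly (see the maintainer comment below on its semantics).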
Issue Analytics
- State:
- Created 5 years ago
- Reactions: 8
- Comments: 11 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@forgetaboutme As stated several times above, that property has no meaning (it is ignored by the Kafka client) when there is already a committed offset for the consumer group; it only applies the first time a consumer consumes from a partition, if it never commits an offset, or if the offsets have expired (by default, 7 days after the last consumer left the group, with brokers 2.1 and above).
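Since `auto.offset.reset` is ignored once offsets are committed, the committed offsets themselves have to be reset before restarting the application. A sketch using the standard `kafka-consumer-groups.sh` tool shipped with Apache Kafka (the group name, topic, and broker address below are placeholders; for a Kafka Streams app, the group id is its `application.id`, and the application must be stopped while resetting):

```shell
# Preview the reset first without applying it (group/topic names are placeholders):
kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 \
  --group my-application-id --topic beginning-topic \
  --reset-offsets --to-earliest --dry-run

# Apply the reset; the next start of the application reads from the beginning:
kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 \
  --group my-application-id --topic beginning-topic \
  --reset-offsets --to-earliest --execute
```

For Kafka Streams applications specifically, the `kafka-streams-application-reset.sh` tool is the dedicated alternative, since it also handles internal and intermediate topics.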
@forgetaboutme This issue is very old and closed. Could you please open a new GitHub issue with the same details as above? By “suggestions”, I meant the ones from @orchesio in May 2018 above, but I am happy to look into it further if you create a new issue with some context around it.