KafkaStreamsWordCountApplication failing with java.lang.ClassCastException: kafka.streams.word.count.KafkaStreamsWordCountApplication$WordCount cannot be cast to java.lang.String
After the latest upgrade of the libraries to Spring Boot 2.2.0-SNAPSHOT and <spring-cloud-stream.version>Horsham.BUILD-SNAPSHOT</spring-cloud-stream.version>, the KafkaStreamsWordCountApplication fails, as can be seen in the tests:
Caused by: java.lang.ClassCastException: kafka.streams.word.count.KafkaStreamsWordCountApplication$WordCount cannot be cast to java.lang.String
at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:162)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:102)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
... 32 more
The error happens after this map operation, before writing to the sink:
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
I've tried a previous version that worked; its topology included a mapValues node that converted the WordCount value to a byte array. The current topology no longer shows that mapValues processor.
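For reference, here is a minimal sketch of the failure mode and a fix: pinning the value Serde explicitly at the sink so the framework never falls back to a String Serde for the WordCount value. This is not the sample's exact code; the topic names, the WordCount POJO shape, the window size, and the JsonSerde choice are all assumptions for illustration.

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.kafka.support.serializer.JsonSerde;

public class WordCountTopology {

    // Minimal POJO standing in for the sample's WordCount (shape assumed).
    public static class WordCount {
        public String word;
        public long count;
        public Date start;
        public Date end;

        public WordCount() { }

        public WordCount(String word, long count, Date start, Date end) {
            this.word = word;
            this.count = count;
            this.start = start;
            this.end = end;
        }
    }

    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("words", Consumed.with(Serdes.String(), Serdes.String()))
                .flatMapValues(v -> Arrays.asList(v.toLowerCase().split("\\W+")))
                .groupBy((k, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
                .windowedBy(TimeWindows.of(Duration.ofSeconds(30)))
                .count()
                .toStream()
                // The map operation from the issue: re-key to null, wrap in WordCount.
                .map((key, count) -> new KeyValue<>((String) null,
                        new WordCount(key.key(), count,
                                new Date(key.window().start()),
                                new Date(key.window().end()))))
                // Explicit value Serde at the sink: without this, a String default
                // Serde here produces exactly the ClassCastException in the trace.
                .to("counts", Produced.with(Serdes.String(), new JsonSerde<>(WordCount.class)));
        return builder.build();
    }
}
```

An explicit Produced (or the equivalent binding-level Serde configuration) removes any dependence on whether the framework inserts a converting mapValues node.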
Issue Analytics
- Created: 4 years ago
- Comments: 13
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@codependent New changes went into the core binder for native Serde handling and for inferring Serdes on input/output as much as possible. I cleaned up all the Kafka Streams samples to reflect the changes, and I ran your Kotlin sample locally: it runs as expected. Here is the gist of the Serde changes that went into master; they will only be available on the 3.0 (Horsham) line of Spring Cloud Stream and won't be backported.
Applications can set useNativeDecoding and useNativeEncoding to false to force the conversion to be done by the framework. The binder infers the Serdes to use on key/value types if those types map to one of the supported defaults; if none of these matches, it falls back to the JsonSerde provided by spring-kafka. If the application explicitly provides a Serde on the binding, that always takes precedence. If the application uses Serde objects such as Confluent's Avro Serde, those have to be set explicitly in the configuration. These changes will also significantly reduce the topology depth of Spring Cloud Stream/Kafka Streams applications, which you once reported on SO.
Once you confirm that things are working as expected, we can close this issue.
That sounds like a good plan. Feel free to close this and open another one.