Kafka Streams same input and output destination causes already defined error
When I define a `Function` with the same input and output binding destinations, the application fails to start with a *"bean with that name has already been defined"* error. Using spring-cloud `Hoxton.SR1`.
application.yml:

```yaml
spring:
  cloud:
    stream:
      function:
        definition: eventStream
      bindings:
        eventStream-in-0:
          destination: events-topic
        eventStream-out-0:
          destination: events-topic
```
```java
@Bean
public Function<KStream<UUID, Event>, KStream<UUID, Event>> eventStream() {
    return kStream -> kStream
            .map(...);
}
```
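One straightforward way to sidestep the duplicate-bean error is to bind the output to a distinct topic, so each binding maps to its own destination. A hypothetical config sketch; `events-processed-topic` is an assumed name, not from the original issue:

```yaml
spring:
  cloud:
    stream:
      function:
        definition: eventStream
      bindings:
        eventStream-in-0:
          destination: events-topic
        eventStream-out-0:
          destination: events-processed-topic   # hypothetical separate output topic
```

This changes the topology (downstream consumers must read the new topic), so it only applies if sharing one topic is not a hard requirement.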
Issue Analytics
- State: closed
- Created 4 years ago
- Comments: 5 (4 by maintainers)
@sobychacko We were implementing a simple Saga pattern with an orchestrator and decided to use one topic for multiple message types, to ensure sequential processing for all participants in case of failures or lag, which could occur with multiple topics. The orchestrator logic handles state transitions and ignores anything that might cause cycles; clients simply never listen to, or ignore, response types, so they never cycle. We bypassed this for the moment using the `.to` operator. I still think this should work, because `@StreamListener`-based handlers can achieve this binding. I will provide a simple example as soon as I have some spare time.
Closing due to no activity.