Error messages are never sent to the DLQ
See original GitHub issue

Hi,
I created a Spring Cloud Stream application using the Kafka Binder, here is my yml configuration file:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost
          consumerProperties:
            schema.registry.url: http://localhost:8081
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.streams.serdes.avro.SpecificAvroDeserializer
          producerProperties:
            schema.registry.url: http://localhost:8081
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer
        bindings:
          input:
            consumer:
              startOffset: earliest
              enableDlq: true
              dlqName: dlq
      bindings:
        input:
          destination: foo
          group: fooGroup
          consumer:
            maxAttempts: 1
            useNativeDecoding: true
        output:
          destination: bar
          producer:
            useNativeEncoding: true
      function:
        definition: processor
Everything works great except the DLQ: while inspecting the broker I can see that the topic gets created, yet no messages are ever sent to it. Here are the different processors I tried:
@Bean
public Function<EventA, EventB> processor() {
    return input -> {
        throw new RuntimeException("FAIL");
    };
}

@Bean
public Function<Flux<EventA>, Flux<EventB>> processor() {
    return input -> input.map(i -> {
        throw new RuntimeException("FAIL");
    });
}
Libraries used:
compile group: 'org.springframework.cloud', name: 'spring-cloud-stream', version: '2.2.0.RELEASE'
compile group: 'org.springframework.cloud', name: 'spring-cloud-stream-binder-kafka', version: '2.2.0.RELEASE'
compile group: 'org.springframework.cloud', name: 'spring-cloud-stream-schema', version: '2.2.0.RELEASE'
Did I miss something?
Issue Analytics
- Created 4 years ago
- Comments: 10 (6 by maintainers)
Top GitHub Comments
@antoin-m Apologies for the late response. I was able to triage your issue, and it looks like you are missing some configuration. You are using native deserialization (useNativeDecoding: true), so when the failed messages are written to the DLQ, you need to use a corresponding serializer. In this case, you need to use io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer for the DLQ. Here is an example of the configuration:

If you are not using native decoding, then you don't need to set any configuration on dlqProducerProperties, as it is handled by the framework using a converter. I will use this issue to add some additional docs for this. Thank you!
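A hedged sketch of what that DLQ serializer configuration could look like. The dlqProducerProperties.configuration map is the binder's pass-through for raw Kafka producer properties; the exact keys below mirror the serializers from the question and are an assumption, not the maintainer's omitted example:

```yaml
spring:
  cloud:
    stream:
      kafka:
        bindings:
          input:
            consumer:
              enableDlq: true
              dlqName: dlq
              # Pass-through Kafka producer properties used when
              # publishing failed records to the DLQ topic
              dlqProducerProperties:
                configuration:
                  schema.registry.url: http://localhost:8081
                  key.serializer: org.apache.kafka.common.serialization.StringSerializer
                  value.serializer: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer
```

With native decoding enabled, the record reaching the DLQ publisher is the deserialized Avro object, so the DLQ producer must know how to serialize it again; without a matching value.serializer the publish fails silently from the application's point of view.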
Ah, sorry; I was referring to the specific consumer/producer bindings, not the producer properties at the binder level.
Yes, I think this is a reasonable request, but it could be a breaking change so it will probably have to be done in the next release.
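For context on the binder-level versus binding-level distinction mentioned above: binder-level producer properties apply to every producer binding, while binding-level configuration targets a single binding. A minimal sketch of the two scopes, reusing the output binding from the question (the property values are illustrative assumptions):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          # Binder level: applies to all producer bindings
          producerProperties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
        bindings:
          output:
            producer:
              # Binding level: applies only to the "output" binding
              configuration:
                value.serializer: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer
```

Binding-level entries override binder-level ones for that binding, which is why a per-binding fix is usually the safer place to set DLQ serializers.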