Nacking after republishing with DLQ configured via policy
Hello,
I'm seeing some weird behavior when using a dead-letter-exchange policy and the republish recovery strategy together.
I created a queue using this config:
vhosts: {
  '/': {
    connection: {
      url: config.get('messageBus')?.url,
      options: {
        heartbeat: 1,
      },
      socketOptions: {
        timeout: 5000,
      },
      retry: {
        factor: 2,
        strategy: 'exponential',
        delay: 1000,
        max: 10,
      },
    },
    exchanges: ['exchange', 'dead-letters-exchange'],
    queues: {
      'queue': {},
      'dead-letters-queue': {},
    },
    bindings: [
      'exchange[key] -> queue',
      'dead-letters-exchange[key] -> dead-letters-queue',
    ],
    // ...
  },
}
To avoid a complex redeployment, I decided to configure the dead letter exchange using a policy:
Pattern: queue
Definition: dead-letter-exchange: dead-letters-exchange
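(For context, this policy stands in for what would otherwise be queue-level arguments in the rascal config, roughly like the sketch below; I believe rascal passes queue options through to assertQueue, and changing this is exactly the redeployment I wanted to avoid.)
queues: {
  'queue': {
    options: {
      // assumed equivalent of the policy above, using the names from the config
      arguments: { 'x-dead-letter-exchange': 'dead-letters-exchange' },
    },
  },
},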
Issue:
When I nack with the default strategy, or explicitly with { strategy: 'nack' }, it works and I can see the message in the dead-letters-queue.
But when I use this configuration:
const RECOVERY_STRATEGY: Recovery[] = [{ strategy: 'republish', defer: 15000, attempts: 3 }, { strategy: 'nack' }];
ackOrNack(err, RECOVERY_STRATEGY);
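(For context, the strategies are applied in the consumer roughly like this; 'my_subscription' and handle() are placeholder names, not the real ones.)
const subscription = await broker.subscribe('my_subscription');
subscription.on('message', (message, content, ackOrNack) => {
  try {
    handle(content); // placeholder for the actual message handler
    ackOrNack();
  } catch (err) {
    // republish up to 3 times with a 15 second delay, then fall back to nack
    ackOrNack(err, RECOVERY_STRATEGY);
  }
});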
With this configuration the message is retried 3 times and then just disappears. Could you please take a look? Maybe I'm missing something, or I could help fix this by contributing.
Thank you in advance!
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I think you will need to bind each DLQ twice. Once with the routing key, and once with the queue name. That way if the message is nacked by your application before being republished, or times out due to a message-ttl, it will be routed to the dead letter exchange using the routing key, but if it is republished then nacked by the recovery strategy, the queue name will be used as the routing key.
@cressie176 thank you a lot for your input, it helps a lot to better understand how this works under the hood. We applied the workaround you proposed and bound “queue” as a routing key.
Thanks again.
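For reference, applied to the config from the issue, the suggested double binding would look roughly like this (a sketch using the names above; the second binding covers messages dead-lettered after a republish, where the queue name becomes the routing key):
bindings: [
  'exchange[key] -> queue',
  // taken when the application nacks before republishing, or a message-ttl expires
  'dead-letters-exchange[key] -> dead-letters-queue',
  // taken when a republished message is finally nacked by the recovery strategy
  'dead-letters-exchange[queue] -> dead-letters-queue',
],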