Retrying Redis connection attempts keep going infinitely when MaxRetriesPerRequestError is handled
See original GitHub issue
ioredis doesn't emit MaxRetriesPerRequestError when the Redis server is down!
ioredis ver: 4.17.3
This is not working in my production app, so I prepared a small test project to check whether I could reproduce the problem, and indeed I can. Code:
const Redis = require("ioredis");

function connectToRedis(redisProps) {
  const redisDefaultProps = {
    host: "127.0.0.1",
    port: "6379",
    db: 1,
    maxRetriesPerRequest: 20,
    retryStrategy(times) {
      console.warn(`Retrying redis connection: attempt ${times}`);
      return Math.min(times * 500, 2000);
    },
  };

  const g_redis = new Redis({ ...redisDefaultProps, ...redisProps });

  g_redis.on("connecting", () => {
    console.log("Connecting to Redis.");
  });

  g_redis.on("connect", () => {
    console.log("Success! Redis connection established.");
  });

  g_redis.on("error", (err) => {
    if (err.code === "ECONNREFUSED") {
      console.warn(`Could not connect to Redis: ${err.message}.`);
    } else if (err.name === "MaxRetriesPerRequestError") {
      console.error(`Critical Redis error: ${err.message}. Shutting down.`);
      process.exit(1);
    } else {
      console.error(`Redis encountered an error: ${err.message}.`);
    }
  });
}

connectToRedis();
I have tried different Redis versions and the problem appears with all of them. I have also tried running Redis on different hosts and get the same error.
Scenario:
- Start Redis server
- Start the app with node index.js
- Connection is established
- Go to the Redis server host and shut down the Redis process with ‘kill -9 Redis_PID’ or stop the Redis service with ‘sudo systemctl stop redis’ (CentOS 7.5).
- ioredis detects the disconnect and starts the retryStrategy described in the code above:
Retrying redis connection: attempt 18
Could not connect to Redis: connect ECONNREFUSED 127.0.0.1:6379.
Retrying redis connection: attempt 27
Could not connect to Redis: connect ECONNREFUSED 127.0.0.1:6379.
Bug: the retry strategy keeps going. No MaxRetriesPerRequestError is emitted, and the app is not stopped.
Expected behavior: retryStrategy reaches the maxRetriesPerRequest limit and MaxRetriesPerRequestError is emitted.
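For reference, here is a minimal sketch of where MaxRetriesPerRequestError seems to surface in ioredis 4.x, assuming maxRetriesPerRequest is a per-command retry limit as the docs describe: the error rejects the pending command's promise rather than arriving on the client's "error" event, which only sees socket-level failures such as ECONNREFUSED. The PING loop below is purely illustrative.

const Redis = require("ioredis");

// Assumption: maxRetriesPerRequest caps how many times a single queued command
// is retried across reconnect attempts before its promise rejects with
// MaxRetriesPerRequestError; the client-level "error" event is not involved.
const redis = new Redis({ host: "127.0.0.1", port: 6379, maxRetriesPerRequest: 20 });

// Issue a command periodically; while the server is down, each command should
// eventually fail once its retry budget is exhausted.
setInterval(() => {
  redis.ping().catch((err) => {
    if (err.name === "MaxRetriesPerRequestError") {
      console.error(`Command gave up after max retries: ${err.message}`);
      process.exit(1);
    } else {
      console.error(`PING failed: ${err.message}`);
    }
  });
}, 5000);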
Issue Analytics
- Created 3 years ago
- Reactions:4
- Comments:19 (1 by maintainers)
Top GitHub Comments
There is indeed something fundamentally wrong with how ioredis handles connections.
Following this sample:
If there is an error with the connection to the Redis server, execution falls into the catch statement and then keeps looping forever, throwing unhandled errors and leaving the async application spinning indefinitely. The only way to quit the app at this point is to call process.exit. That makes error management with this library very hard and unreliable.
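One possible way to bound the retries yourself, sketched below under the assumption that the documented retryStrategy contract holds (returning a non-number stops reconnection, after which the client emits "end"): give up after a fixed number of connection attempts and exit the process on "end". The MAX_CONNECTION_ATTEMPTS constant is only illustrative, not an ioredis option.

const Redis = require("ioredis");

// Illustrative constant, not an ioredis option.
const MAX_CONNECTION_ATTEMPTS = 20;

const redis = new Redis({
  host: "127.0.0.1",
  port: 6379,
  retryStrategy(times) {
    if (times > MAX_CONNECTION_ATTEMPTS) {
      console.error("Giving up on Redis after repeated connection failures.");
      return null; // a non-number stops further reconnection attempts
    }
    return Math.min(times * 500, 2000); // back off, capped at 2 s
  },
});

redis.on("end", () => {
  // Emitted once no further reconnections will be made; exit explicitly.
  process.exit(1);
});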
Any updates here? I’m facing the same issue. 😔