Getting lots of timeouts when trying to read from Redis
Bug Report
We are using Lettuce 5.1.0.RELEASE and see lots of timeouts while querying Redis. We are able to reproduce this through test cases: we installed Redis locally, pushed 5000 messages into a list, and see timeout exceptions while doing an LRANGE over it.
Stack trace

Command timed out after 1 minute(s)
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
    at com.sun.proxy.$Proxy41.lrange(Unknown Source)
Input Code
Here is how the Redis client is created:
RedisCommands<String, RedisMessage> conn =
        RedisClient.create(url).connect(UisPrimeRedisMessageCodec.INSTANCE).sync();
where UisPrimeRedisMessageCodec implements RedisCodec<String, RedisMessage>
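The codec itself is not included in the issue; for context, here is a minimal sketch of what such a codec could look like, assuming RedisMessage is the application's own type with (hypothetical) toBytes()/fromBytes() serialization helpers:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import io.lettuce.core.codec.RedisCodec;

// Hypothetical sketch only: keys are UTF-8 strings, values are the application's
// RedisMessage type, serialized via assumed toBytes()/fromBytes() helpers.
public enum UisPrimeRedisMessageCodec implements RedisCodec<String, RedisMessage> {
    INSTANCE;

    @Override
    public String decodeKey(ByteBuffer bytes) {
        return StandardCharsets.UTF_8.decode(bytes).toString();
    }

    @Override
    public RedisMessage decodeValue(ByteBuffer bytes) {
        byte[] raw = new byte[bytes.remaining()];
        bytes.get(raw);
        return RedisMessage.fromBytes(raw); // assumed deserialization helper
    }

    @Override
    public ByteBuffer encodeKey(String key) {
        return StandardCharsets.UTF_8.encode(key);
    }

    @Override
    public ByteBuffer encodeValue(RedisMessage value) {
        return ByteBuffer.wrap(value.toBytes()); // assumed serialization helper
    }
}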
The command fired (which times out) is:
conn.lrange(stripHyphensFromTraceId(traceId), 0, -1)

where the key helper is defined (in Scala) as:

private def stripHyphensFromTraceId(traceId: UUID) = traceId.toString.replace("-", "").toLowerCase(Locale.ENGLISH)
Environment
- Lettuce version(s): 5.1.0.RELEASE
- Redis version: 5.0, cluster mode (parameter group default.redis5.0.cluster.on, in-sync)
In production we run Redis in cluster mode, so the connection is created as shown below. As an example, here is how a delete is fired.
public GenericObjectPool<StatefulConnection<String, RedisMessage>> microMessageConnectionPool(ServiceConfiguration config) {
    final GenericObjectPoolConfig objectPoolConfig = new GenericObjectPoolConfig();
    objectPoolConfig.setMaxTotal(config.getRedisConfig().getMicroMessageConnectionPool());
    objectPoolConfig.setBlockWhenExhausted(true);
    return ConnectionPoolSupport.createGenericObjectPool(() -> messageServerConn(config), objectPoolConfig);
}
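Not part of the original report, but worth noting for the pool above: with blockWhenExhausted(true) and no maximum wait configured, borrowObject() blocks indefinitely when the pool is exhausted. A hypothetical variant that bounds the wait (all values are assumptions):

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

// Hypothetical sketch: bound how long borrowObject() may block when the pool
// is exhausted, so callers fail fast instead of queueing behind slow commands.
private GenericObjectPoolConfig boundedPoolConfig(int maxTotal) {
    final GenericObjectPoolConfig cfg = new GenericObjectPoolConfig();
    cfg.setMaxTotal(maxTotal);
    cfg.setBlockWhenExhausted(true);
    cfg.setMaxWaitMillis(500); // assumed bound; tune to the workload
    return cfg;
}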
public StatefulConnection<String, RedisMessage> messageServerConn(ServiceConfiguration config) {
    if (config.getRedisConfig().getClusterModeEnabled()) {
        return createRedisClusterClient(config.getRedisConfig().getClusterMessageServerUrl())
                .connect(UisPrimeRedisMessageCodec.INSTANCE);
    } else {
        return createRedisClient(config.getRedisConfig().getMessageServerUrl())
                .connect(UisPrimeRedisMessageCodec.INSTANCE);
    }
}
private RedisClusterClient createRedisClusterClient(String url) {
    final ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
            .enablePeriodicRefresh(Duration.ofMinutes(5))
            .enableAllAdaptiveRefreshTriggers()
            .build();
    final RedisClusterClient messagesRedis = RedisClusterClient.create(url);
    messagesRedis.setOptions(ClusterClientOptions.builder()
            .autoReconnect(true)
            .pingBeforeActivateConnection(true)
            .topologyRefreshOptions(topologyRefreshOptions)
            .build());
    return messagesRedis;
}
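Not part of the original report: the timeouts above come from the default 60-second command timeout. One possible way to fail faster (and retry at the call site) is to configure TimeoutOptions on the client options; the 5-second value below is an assumption, not something from the issue:

import java.time.Duration;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.RedisClusterClient;

// Hypothetical tuning sketch: expire commands client-side after a shorter,
// fixed timeout so slow calls surface quickly and can be retried.
private RedisClusterClient createRedisClusterClientWithShortTimeout(String url) {
    final RedisClusterClient client = RedisClusterClient.create(url);
    client.setOptions(ClusterClientOptions.builder()
            .autoReconnect(true)
            .pingBeforeActivateConnection(true)
            .timeoutOptions(TimeoutOptions.builder()
                    .timeoutCommands(true)               // enable client-side command expiry
                    .fixedTimeout(Duration.ofSeconds(5)) // assumed value, tune per workload
                    .build())
            .build());
    return client;
}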
public void deleteMicroMessagesForTraceIdsAsync(String... traceIds) {
    RedisFuture<Long> future = null;
    try {
        final StatefulConnection<String, RedisMessage> connection = microMessageConnectionPool.borrowObject();
        final Timer.Context deleteTimerContext = redisMetrics.getMicroMessageDeleteLatency().time();
        final BaseRedisAsyncCommands<String, RedisMessage> microMessageAsyncCommands = getMessageAsyncCommand(connection);
        if (microMessageAsyncCommands instanceof RedisAdvancedClusterAsyncCommands) {
            future = ((RedisAdvancedClusterAsyncCommands<String, RedisMessage>) microMessageAsyncCommands).unlink(traceIds);
        } else if (microMessageAsyncCommands instanceof RedisAsyncCommands) {
            future = ((RedisAsyncCommands<String, RedisMessage>) microMessageAsyncCommands).unlink(traceIds);
        }
        if (future == null) {
            // unexpected command type: stop the timer and return the connection instead of leaking it
            deleteTimerContext.stop();
            microMessageConnectionPool.returnObject(connection);
            return;
        }
        future.whenComplete((result, ex) -> {
            microMessageConnectionPool.returnObject(connection);
            deleteTimerContext.stop();
            if (ex == null) {
                redisMetrics.getMicroMessageDeleteSuccess().inc(traceIds.length);
            } else {
                redisMetrics.getMicroMessageDeleteFailure().inc(traceIds.length);
                LOGGER.error("Failed to delete traceIds", ex);
            }
        });
    } catch (Exception e) {
        LOGGER.error("Failed to delete micro messages from Redis, most likely because a connection could not be borrowed from the pool", e);
    }
}
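getMessageAsyncCommand is referenced above but not included in the issue; a plausible sketch, assuming it simply returns the async API of whichever connection type was pooled:

import io.lettuce.core.api.StatefulConnection;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.BaseRedisAsyncCommands;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

// Hypothetical sketch of the helper used above: pick the async command API
// matching the concrete connection type (cluster vs. standalone).
private BaseRedisAsyncCommands<String, RedisMessage> getMessageAsyncCommand(
        StatefulConnection<String, RedisMessage> connection) {
    if (connection instanceof StatefulRedisClusterConnection) {
        return ((StatefulRedisClusterConnection<String, RedisMessage>) connection).async();
    }
    return ((StatefulRedisConnection<String, RedisMessage>) connection).async();
}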
The exceptions seen in production are:
Failed to delete traceIds
! io.lettuce.core.RedisCommandTimeoutException: Command timed out after 60 second(s)
! at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
! at io.lettuce.core.protocol.CommandExpiryWriter.lambda$potentiallyExpire$0(CommandExpiryWriter.java:167)
! at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
! at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
! at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66)
! at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
! at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
! at java.lang.Thread.run(Thread.java:745)
Top GitHub Comments
We have removed batching and now have fairly good control over the timeouts, which we handle through retries.
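The retry logic itself is not shown in the issue; a minimal sketch of the idea, assuming the synchronous API and retrying only on command timeouts:

import java.util.List;
import io.lettuce.core.RedisCommandTimeoutException;
import io.lettuce.core.api.sync.RedisCommands;

// Hypothetical sketch: retry an LRANGE a bounded number of times when it
// times out; any other failure propagates immediately.
static List<RedisMessage> lrangeWithRetry(RedisCommands<String, RedisMessage> conn,
                                          String key, int maxAttempts) {
    RedisCommandTimeoutException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return conn.lrange(key, 0, -1);
        } catch (RedisCommandTimeoutException e) {
            last = e; // retry only on timeout
        }
    }
    throw last;
}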
Closing this ticket as we were not able to find a Lettuce-side issue.