
Getting lots of timeout when trying to read from Redis

See original GitHub issue

Bug Report

We are using Lettuce 5.1.0.RELEASE and see lots of timeouts while querying Redis. We are able to reproduce this through test cases: we installed Redis locally, pushed 5000 messages into a list, and see timeout exceptions when running LRANGE.

Stack trace:

    Command timed out after 1 minute(s)
    io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
        at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
        at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114)
        at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
        at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
        at com.sun.proxy.$Proxy41.lrange(Unknown Source)

Input Code

Here is how the Redis client is created:

RedisCommands<String, RedisMessage> conn = RedisClient.create(url).connect(UisPrimeRedisMessageCodec.INSTANCE).sync();

where UisPrimeRedisMessageCodec implements RedisCodec<String, RedisMessage>
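The "timed out after 1 minute(s)" in the trace is Lettuce's default command timeout. Lowering it does not make the slow command faster, but it makes failures surface quickly so they can be retried. A minimal sketch, assuming the Lettuce 5.1 API (the URL and class name here are placeholders, not from the issue):

```java
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.TimeoutOptions;

class TimeoutConfig {

    // Sketch only: "redis://localhost:6379" is a placeholder URL.
    static RedisClient clientWithShortTimeout() {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Default timeout for connections created after this call (sync API path).
        client.setDefaultTimeout(Duration.ofSeconds(5));

        // TimeoutOptions (Lettuce 5.1+) drives the CommandExpiryWriter, which is
        // the component that produced the async-path timeout in the production trace.
        client.setOptions(ClientOptions.builder()
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(5)))
                .build());
        return client;
    }
}
```

The same `TimeoutOptions` can be set on `ClusterClientOptions.builder()` for the cluster client shown further down.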

The command that is fired (and times out) is:

conn.lrange(stripHyphensFromTraceId(traceId), 0, -1)

 private def stripHyphensFromTraceId(traceId: UUID) = traceId.toString.replace("-", "").toLowerCase(Locale.ENGLISH)
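Worth noting: `lrange(key, 0, -1)` materializes the entire list in a single reply, which is a common cause of this kind of timeout on large lists. One mitigation is to page through the list in bounded chunks. The helper below (hypothetical, not from the issue) only computes the inclusive page boundaries in plain Java; each pair would feed a `conn.lrange(key, start, stop)` call:

```java
import java.util.ArrayList;
import java.util.List;

public class LrangePages {

    /**
     * Inclusive [start, stop] index pairs covering [0, size - 1]
     * in pages of at most pageSize elements, matching LRANGE's
     * inclusive start/stop semantics.
     */
    public static List<long[]> pages(long size, long pageSize) {
        List<long[]> out = new ArrayList<>();
        for (long start = 0; start < size; start += pageSize) {
            long stop = Math.min(start + pageSize - 1, size - 1);
            out.add(new long[] { start, stop });
        }
        return out;
    }
}
```

For the 5000-message reproduction above, `pages(5000, 1000)` yields five LRANGE calls instead of one large one, keeping each reply small. (The list length can be fetched first with LLEN.)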

Environment

  • Lettuce version(s): 5.1.0.RELEASE
  • Redis version: 5.0 (cluster parameter group default.redis5.0.cluster.on, in-sync)

In production Redis runs in cluster mode, so the connection is created as shown below; I am giving an example of firing a delete here.

public GenericObjectPool<StatefulConnection<String, RedisMessage>> microMessageConnectionPool(ServiceConfiguration config) {
        final GenericObjectPoolConfig objectPoolConfig = new GenericObjectPoolConfig();
        objectPoolConfig.setMaxTotal(config.getRedisConfig().getMicroMessageConnectionPool());
        objectPoolConfig.setBlockWhenExhausted(true);

        return ConnectionPoolSupport.createGenericObjectPool(() -> messageServerConn(config), objectPoolConfig);
    }

public StatefulConnection messageServerConn(ServiceConfiguration config) {
        if ( config.getRedisConfig().getClusterModeEnabled() ) {
            return createRedisClusterClient(config.getRedisConfig().getClusterMessageServerUrl())
                    .connect(UisPrimeRedisMessageCodec.INSTANCE);
        } else {
            return createRedisClient(config.getRedisConfig().getMessageServerUrl())
                    .connect(UisPrimeRedisMessageCodec.INSTANCE);
        }
    }
private RedisClusterClient createRedisClusterClient(String url) {
        final ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofMinutes(5))
                .enableAllAdaptiveRefreshTriggers()
                .build();
        final RedisClusterClient messagesRedis = RedisClusterClient.create(url);
        messagesRedis.setOptions(ClusterClientOptions.builder()
                .autoReconnect(true)
                .pingBeforeActivateConnection(true)
                .topologyRefreshOptions(topologyRefreshOptions)
                .build());
        return messagesRedis;
    }
public void deleteMicroMessagesForTraceIdsAsync(String... traceIds) {
    RedisFuture future = null;
    try {
      final StatefulConnection connection = microMessageConnectionPool.borrowObject();
      final Timer.Context deleteTimerContext = redisMetrics.getMicroMessageDeleteLatency().time();
      final BaseRedisAsyncCommands<String, RedisMessage> microMessageAsyncCommands = getMessageAsyncCommand(connection);
      if (microMessageAsyncCommands instanceof RedisAdvancedClusterAsyncCommands) {
        future = ((RedisAdvancedClusterAsyncCommands) microMessageAsyncCommands).unlink(traceIds);
      } else if (microMessageAsyncCommands instanceof RedisAsyncCommands) {
        future = ((RedisAsyncCommands) microMessageAsyncCommands).unlink(traceIds);
      }

      future.whenComplete((result, ex) -> {
        microMessageConnectionPool.returnObject(connection);
        deleteTimerContext.stop();
        if (ex == null) {
          redisMetrics.getMicroMessageDeleteSuccess().inc(traceIds.length);
        } else {
          redisMetrics.getMicroMessageDeleteFailure().inc(traceIds.length);
          LOGGER.error("Failed to delete traceIds", ex);
        }
      });
    } catch (Exception e) {
      LOGGER.error("Failed to delete micro messages from Redis, most likely because a connection could not be borrowed from the pool", e);
    }
  }
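A side note on the method above: `future` stays `null` when the commands object matches neither `instanceof` branch, in which case `future.whenComplete` throws a NullPointerException and the borrowed connection is never returned to the pool, silently shrinking it. A generic sketch of the return-on-all-paths pattern, using `CompletableFuture` and plain functional interfaces in place of the Lettuce and pool types (all names hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;
import java.util.function.Function;

public class SafeReturn {

    /**
     * Runs async work against a borrowed resource and guarantees the
     * resource is given back exactly once, whether or not the work
     * actually produced a future.
     */
    public static <R> CompletableFuture<Void> withBorrowed(
            R resource,
            Consumer<R> giveBack,
            Function<R, CompletableFuture<Void>> work) {

        CompletableFuture<Void> future = work.apply(resource);
        if (future == null) {
            // No command was dispatched: return the connection immediately.
            giveBack.accept(resource);
            return CompletableFuture.completedFuture(null);
        }
        // Command dispatched: return the connection when it completes,
        // on both the success and the failure path.
        return future.whenComplete((result, ex) -> giveBack.accept(resource));
    }
}
```

In the issue's code, `giveBack` would be `microMessageConnectionPool::returnObject` and `work` the `unlink` dispatch.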

The exception seen in production is:

    Failed to delete traceIds
    io.lettuce.core.RedisCommandTimeoutException: Command timed out after 60 second(s)
        at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
        at io.lettuce.core.protocol.CommandExpiryWriter.lambda$potentiallyExpire$0(CommandExpiryWriter.java:167)
        at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:745)

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

priyankaexp commented, Jan 23, 2019 (3 reactions)

We have removed batching and now have good control over timeouts, which we handle through retries.
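The retry the commenter mentions can be as simple as re-running the command a bounded number of times when it fails with a timeout. A generic sketch (hypothetical helper, not from the issue; in real use `retryOn` would be `RedisCommandTimeoutException.class`):

```java
import java.util.function.Supplier;

public class Retry {

    /**
     * Runs op, retrying up to maxAttempts times when it throws an
     * exception of type retryOn; any other exception propagates
     * immediately, and the last retryable one is rethrown when the
     * attempt budget is exhausted.
     */
    public static <T> T withRetry(Supplier<T> op,
                                  Class<? extends RuntimeException> retryOn,
                                  int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (!retryOn.isInstance(e)) {
                    throw e; // not a timeout: fail fast
                }
                last = e;    // timeout: try again
            }
        }
        throw last;
    }
}
```

A usage sketch against the sync API from the issue would be `withRetry(() -> conn.lrange(key, 0, -1), RedisCommandTimeoutException.class, 3)`. A production version would usually add backoff between attempts.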

mp911de commented, Jan 23, 2019 (0 reactions)

Closing this ticket as we do not seem to find a Lettuce-based issue.


Top Results From Across the Web

  • Troubleshoot Redis timeouts - Sitecore Documentation
    Redis timeouts (RedisTimeoutException) occur due to various conditions caused by either the server or the client.
  • Why Am I Seeing a Timeout Error When Reading Data from Redis?
    Symptom: when you read data from Redis, the timeout error "redis server response timeout (3000ms) occurred after 3 retry attempts" is returned.
  • Troubleshoot Azure Cache for Redis latency and timeouts
    Learn how to resolve common latency and timeout issues with Azure Cache for Redis, such as Redis server patching and timeout exceptions.
  • How to solve a Redis timeout on client side - Stack Overflow
    It seems to me that the timeouts are a design decision on their part based on the idea that they don't want to...
  • Are you getting network or CPU bound? - StackExchange.Redis
    Redis uses a single TCP connection and can only read one response at a time. Even though the first operation timed out, it...
