
Constant "Unable to send command!" errors with pingConnectionInterval config

See original GitHub issue

Expected behavior

Redisson keeps serving commands normally with pingConnectionInterval = 1000 enabled.

Actual behavior

Once we added the pingConnectionInterval = 1000 configuration, randomly after a period of time the server is unable to perform any operation on Redis, and we see the following error on every request:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=0, freeSubscribeConnectionsCounter=49, freeConnectionsAmount=8, freeConnectionsCounter=64, freezed=false, freezeReason=null, client=[addr=redis://100.127.5.63:6379], nodeType=MASTER, firstFail=0]]], connection: [id: 0x06781db8, L:0.0.0.0/0.0.0.0:47286 ! R:100.127.15.237/100.127.15.237:6379], command: (HMGET), params: [dev:operator:TENANT_DETAILS_CACHE_KEY_V4, PooledUnsafeDirectByteBuf(ridx: 0, widx: 43, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 28, cap: 256)]
        at org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:664) ~[redisson-3.7.5.jar:?]
        at org.redisson.command.CommandAsyncService.access$100(CommandAsyncService.java:84) ~[redisson-3.7.5.jar:?]
        at org.redisson.command.CommandAsyncService$9$1.operationComplete(CommandAsyncService.java:622) ~[redisson-3.7.5.jar:?]
        at org.redisson.command.CommandAsyncService$9$1.operationComplete(CommandAsyncService.java:619) ~[redisson-3.7.5.jar:?]
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1391) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.27.Final.jar:4.1.27.Final]
        at java.lang.Thread.run(Thread.java:844) [?:?]
Caused by: java.nio.channels.ClosedChannelException
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) ~[netty-transport-4.1.27.Final.jar:4.1.27.Final]

Steps to reproduce or test case

Redis version

3.0.7

Redisson version

3.7.5

Redisson configuration

Redis cluster with sentinels, pingConnectionInterval = 1000
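
For orientation, a minimal sketch of a sentinel-mode setup with this option enabled. The master name and sentinel addresses are placeholders, not values from the issue:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class PingIntervalExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Sentinel-managed Redis, as in the reporter's setup.
        // "mymaster" and the sentinel addresses are placeholders.
        config.useSentinelServers()
              .setMasterName("mymaster")
              .addSentinelAddress("redis://10.0.0.1:26379", "redis://10.0.0.2:26379")
              // Send a PING on each pooled connection every 1000 ms.
              // This is the setting the issue is about.
              .setPingConnectionInterval(1000);

        RedissonClient redisson = Redisson.create(config);
        // ... application code ...
        redisson.shutdown();
    }
}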

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 3
  • Comments: 18 (5 by maintainers)

Top GitHub Comments

2 reactions
archmisha commented, Nov 3, 2018

The issue reproduced on the latest version, 3.9.0:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=0, freeSubscribeConnectionsCounter=49, freeConnectionsAmount=16, freeConnectionsCounter=64, freezed=false, freezeReason=null, client=[addr=redis://100.65.5.107:6379], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@183817154 [redisClient=[addr=redis://100.65.15.190:6379], channel=[id: 0x2f13b04f, L:0.0.0.0/0.0.0.0:55402]], command: (HMGET), command params: [jlr:METADATA_CACHE_KEY, "CI_JOB_NAMES", "TEST_EXECUTION_OS", "TEST_EXECUTION_NAMES", "LATEST_TEST_EXECUTION_TIME", "TEST_EXECUTION_TAGS", "TEST_EXECUTION_DEVICES_V2", "LATEST_CI_JOB_TIME"] after 3 retry attempts
        at org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:837) ~[redisson-3.9.0.jar:?]
        at org.redisson.command.CommandAsyncService.access$200(CommandAsyncService.java:92) ~[redisson-3.9.0.jar:?]
        at org.redisson.command.CommandAsyncService$11$1.operationComplete(CommandAsyncService.java:794) ~[redisson-3.9.0.jar:?]
        at org.redisson.command.CommandAsyncService$11$1.operationComplete(CommandAsyncService.java:791) ~[redisson-3.9.0.jar:?]
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1391) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.30.Final.jar:4.1.30.Final]
        at java.lang.Thread.run(Thread.java:844) [?:?]
Caused by: java.nio.channels.ClosedChannelException
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) ~[netty-transport-4.1.30.Final.jar:4.1.30.Final]
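
The "after 3 retry attempts" suffix reflects Redisson's default command retry settings. As a rough sketch (the values are illustrative, not advice from the thread), those knobs live on the same config object; note that raising them only delays the WriteRedisConnectionException, it does not fix the closed channel:

import org.redisson.config.Config;

public class RetryTuningExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")                    // placeholder
              .addSentinelAddress("redis://10.0.0.1:26379") // placeholder
              .setRetryAttempts(5)     // default is 3, the count in the message above
              .setRetryInterval(1500)  // ms between retries (default 1500)
              .setTimeout(3000);       // per-command response timeout, in ms
    }
}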

1 reaction
abel98 commented, Nov 29, 2018

This may help. We had a similar issue when we upgraded from 3.7.1 to 3.9.1, running Redis 4.0.9 with a Sentinel configuration. The root cause was a misconfigured Redis Sentinel cluster, which caused slave nodes to disconnect and reconnect from time to time:

4653734 29 Nov 2018 17:56:29 +0000 [qtp204349222-106] ERROR o.r.c.pool.SlaveConnectionPool - slave 10.86.145.36/10.86.145.36:6379 has been disconnected after 180000 ms interval since moment of the first failed connection
4653735 29 Nov 2018 17:56:29 +0000 [qtp204349222-106] INFO  o.r.connection.MasterSlaveEntry - master 10.86.145.37/10.86.145.37:6379 used as slave
4653735 29 Nov 2018 17:56:29 +0000 [qtp204349222-106] ERROR o.r.c.pool.SlaveConnectionPool - slave 10.86.145.38/10.86.145.38:6379 has been disconnected after 180000 ms interval since moment of the first failed connection
4654631 29 Nov 2018 17:56:30 +0000 [redisson-netty-1-1] INFO  o.r.connection.MasterSlaveEntry - master 10.86.145.37/10.86.145.37:6379 excluded from slaves
4654631 29 Nov 2018 17:56:30 +0000 [redisson-netty-1-1] INFO  o.r.c.SentinelConnectionManager - slave: 10.86.145.36:6379 has up
4654631 29 Nov 2018 17:56:30 +0000 [redisson-netty-1-1] INFO  o.r.c.SentinelConnectionManager - slave: 10.86.145.38:6379 has up
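
The 180000 ms in the first line matches the default interval Redisson waits after a slave's first failed ping before excluding it. In current 3.x releases that is configurable roughly as below; these setter names come from Redisson versions newer than the 3.9.1 discussed here, so treat them as an assumption:

import org.redisson.config.Config;

public class SlaveFailureTuning {
    public static Config buildConfig() {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")                    // placeholder
              .addSentinelAddress("redis://10.0.0.1:26379") // placeholder
              // Window after the first failed ping before a slave is
              // excluded; 180000 ms is the default, matching the log above.
              .setFailedSlaveCheckInterval(180000)
              // How often Redisson retries reconnecting an excluded slave.
              .setFailedSlaveReconnectionInterval(3000);
        return config;
    }
}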

However, after these disconnections, every Redisson read-only command sent to a slave node would fail, for example:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=1, freeSubscribeConnectionsCounter=50, freeConnectionsAmount=32, freeConnectionsCounter=100, freezed=false, freezeReason=null, client=[addr=redis://10.86.145.37:6379], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@375432129 [redisClient=[addr=redis://10.86.145.38:6379], channel=[id: 0x153f9d20, L:0.0.0.0/0.0.0.0:54252]], command: (EXISTS), command params: [XXX:AnalysisQueue:validity] after 5 retry attempts
        at org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:837)
        at org.redisson.command.CommandAsyncService.access$200(CommandAsyncService.java:92)
        at org.redisson.command.CommandAsyncService$11$1.operationComplete(CommandAsyncService.java:794)
        at org.redisson.command.CommandAsyncService$11$1.operationComplete(CommandAsyncService.java:791)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1391)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
        at io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:38)
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1104)
        at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1151)
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1075)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException: null
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)

So it looks like the channel to 10.86.145.38 is closed, but the slave node 10.86.145.38 is still registered as connected, which creates this problem.

Once we fixed the Redis Sentinel configuration issue, we did not see this Redisson problem anymore.

Please note that with 3.7.1 we did not have these issues.
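
One way to cross-check this kind of state from the application side is Redisson's node API, sketched here assuming a 3.x client where getNodesGroup() is available (it was deprecated in later versions); nothing below comes from the thread itself:

import org.redisson.api.Node;
import org.redisson.api.NodeType;
import org.redisson.api.NodesGroup;
import org.redisson.api.RedissonClient;

public class SlaveHealthCheck {
    // Ping every slave Redisson currently considers usable. A node that is
    // listed here but fails ping() matches the symptom described above:
    // registered as connected while its channel is actually closed.
    public static void checkSlaves(RedissonClient redisson) {
        NodesGroup<Node> nodes = redisson.getNodesGroup();
        for (Node node : nodes.getNodes(NodeType.SLAVE)) {
            System.out.println(node.getAddr() + " ping=" + node.ping());
        }
    }
}

As a stop-gap while the Sentinel topology is being repaired, reads can also be pinned to the master with setReadMode(ReadMode.MASTER), trading the flapping slaves for extra master load.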

Read more comments on GitHub >

Top Results From Across the Web

  • Redisson - 华为云 (Huawei Cloud): ping connection interval. Configuring this parameter will increase Redis load. Set a value based on the number of connections.
  • RuntimeException: class not found CLASSNAME while using ...: If I set the property below in the Redisson config, it fixes the error: useThreadClassLoader: false. For details, please see the documentation.
  • What does 2>/dev/null mean? - Ask Ubuntu: Run command, append stdout and stderr to a log file. ... /dev/null is the standard Linux device where you send output that...
  • https://raw.githubusercontent.com/redisson/redisso...: countExists() method throws errors in cluster mode Fixed - Spring Data Redis ... reconnected if init command failed to send Fixed - `keepAlive`...
  • mrniko/redisson - Gitter: when the provider has failed to find a cached codec, it will try to ... registerCodec(JsonJacksonCodec.class, codec); final Config config = new Config()...
