Unexpected end of stream.
```java
package com;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class TestJedis {
    public static void main(String[] args) {
        GenericObjectPoolConfig config = new JedisPoolConfig();
        JedisPool pool = new JedisPool(config, "127.0.0.1");
        // try-with-resources returns the connection to the pool afterwards
        try (Jedis jedis = pool.getResource()) {
            jedis.set("test", "aaa");
            System.out.println(jedis.get("test"));
        }
        pool.close();
    }
}
```
I get this exception:

```
Exception in thread "main" redis.clients.jedis.exceptions.JedisConnectionException: Unexpected end of stream.
    at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:198)
    at redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
    at redis.clients.jedis.Protocol.process(Protocol.java:132)
    at redis.clients.jedis.Protocol.read(Protocol.java:196)
    at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:288)
    at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:187)
    at redis.clients.jedis.Jedis.set(Jedis.java:66)
    at com.TestJedis.main(TestJedis.java:15)
```
Redis version: 3.0.2, Jedis version: 2.7.2
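For debugging, it can help to confirm which idle timeout the server is actually running with. Here is a minimal sketch using Jedis's CONFIG GET wrapper; the host and port are assumptions matching the report above:

```java
import java.util.List;

import redis.clients.jedis.Jedis;

public class CheckServerTimeout {
    public static void main(String[] args) {
        // Assumes a local Redis on the default port, as in the report above
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // CONFIG GET replies with a flat [name, value] list
            List<String> reply = jedis.configGet("timeout");
            System.out.println(reply.get(0) + " = " + reply.get(1));
        }
    }
}
```

With `timeout 0`, as in the configuration below, the server should never close idle client connections on its own.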
redis.conf:
```
# Redis configuration file example
daemonize yes
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 60

loglevel notice
logfile /Applications/service/redis-3.0.2/log/redis_6379.log
databases 16

########################## SNAPSHOTTING
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename redis_6379.rdb
dir /Applications/service/redis-3.0.2/db

########################### REPLICATION
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100

############################# LIMITS
maxmemory 128mb
maxmemory-policy volatile-lru

######################## APPEND ONLY MODE
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes

########################## LUA SCRIPTING
lua-time-limit 5000

########################## SLOW LOG
slowlog-log-slower-than 10000
slowlog-max-len 128

########################## LATENCY MONITOR
latency-monitor-threshold 0

####################### EVENT NOTIFICATION
notify-keyspace-events ""

######################### ADVANCED CONFIG
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
```
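A mitigation that comes up often for this exception is letting the pool validate connections before handing them out, so that connections the server (or something in between) has silently closed are discarded rather than used. A minimal sketch, assuming only the stock commons-pool2 setters that `JedisPoolConfig` inherits:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class TestJedisValidated {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        // Validate a connection with PING before handing it to the caller;
        // broken connections are discarded and replaced instead of returned.
        config.setTestOnBorrow(true);
        // Also ping idle connections during background eviction runs.
        config.setTestWhileIdle(true);
        config.setTimeBetweenEvictionRunsMillis(30_000);
        config.setMinEvictableIdleTimeMillis(60_000);

        JedisPool pool = new JedisPool(config, "127.0.0.1");
        try (Jedis jedis = pool.getResource()) {
            jedis.set("test", "aaa");
            System.out.println(jedis.get("test"));
        }
        pool.close();
    }
}
```

`setTestOnBorrow(true)` costs one PING round-trip per checkout, trading a little latency for never handing out a dead connection.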
I was also getting the same issue when I was using embedded Redis. This thread has useful information.
I am also still facing this problem while doing:

My redis-server timeout is set to 120 and the TCP keepalive is set to 60 (however, my Linux kernel settings for the TCP socket options are different).

The data I receive is about 450K hash values - that is why the scan params count is set to 50K (see the scan sketch after this comment).

I cannot really reproduce this issue, but it happens from time to time on different clients (which do not hit the exception at the same time).
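For reference, a hash scan with an explicit COUNT hint might look like the sketch below. This is not the commenter's actual code; the key name, host, and count value are assumptions:

```java
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class HashScanExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // COUNT only hints how much work each HSCAN call should do
            ScanParams params = new ScanParams().count(50_000);
            String cursor = ScanParams.SCAN_POINTER_START; // "0"
            do {
                ScanResult<Map.Entry<String, String>> page =
                        jedis.hscan("myhash", cursor, params);
                for (Map.Entry<String, String> entry : page.getResult()) {
                    System.out.println(entry.getKey() + " = " + entry.getValue());
                }
                cursor = page.getStringCursor();
            } while (!"0".equals(cursor)); // cursor back at 0 => scan complete
        }
    }
}
```

Note that COUNT is only a hint to the server; the loop still has to iterate until the cursor returns to 0, and a long pause between iterations can leave the connection idle long enough for a server-side timeout to close it.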