Multiple SortedSet in Redis sink not working
Issue Guidelines
Please review these questions before submitting an issue.
What version of the Stream Reactor are you reporting this issue for?
1.0.0
Are you running the correct version of Kafka/Confluent for the Stream reactor release?
Yes, Kafka 2.12-1.0.0.
Have you consulted our FAQs page first?
Yes, I have.
Do you have a supported version of the data source/sink, e.g. Cassandra 3.0.9?
I believe so, but the exception occurs while Connect's configuration is being parsed, before any attempt to connect to Redis is made.
Have you read the docs?
Yes, I have.
What is the expected behaviour?
The sink connects to the Redis instance.
What was observed?
An exception occurs when the KCQL statement from the manual (SELECT temperature, humidity FROM sensorsTopic PK sensorID STOREAS SortedSet(score=timestamp)) is used:
[2018-05-08 15:05:56,586] ERROR WorkerSinkTask{id=redis-signals-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
java.lang.NullPointerException
at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask$$anonfun$4.apply(RedisSinkTask.scala:66)
at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask$$anonfun$4.apply(RedisSinkTask.scala:66)
at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask.start(RedisSinkTask.scala:66)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:267)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:163)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-05-08 15:05:56,589] ERROR WorkerSinkTask{id=redis-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
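A minimal sketch of the failure pattern the stack trace suggests, assuming (hypothetically) that RedisSinkTask.start filters the parsed KCQL set with a predicate that dereferences a field left null by the parser for this statement shape. All names below are illustrative, not the actual Stream Reactor code:

```scala
// Hypothetical reproduction of the NPE pattern in the trace above
// (Set$Set1.foreach -> TraversableLike.filterImpl -> NullPointerException).
object NpeSketch {
  // Stand-in for a parsed KCQL statement: storedAs is null when the parser
  // does not populate it for this KCQL form (an assumption, not verified).
  final case class ParsedKcql(storedAs: String)

  def main(args: Array[String]): Unit = {
    val kcqls = Set(ParsedKcql(null))
    val result =
      try {
        // Dereferencing the null field inside the filter predicate throws,
        // matching the reported stack trace at RedisSinkTask.scala:66.
        kcqls.filter(_.storedAs.toUpperCase.startsWith("SORTEDSET"))
        "no error"
      } catch {
        case _: NullPointerException => "NullPointerException"
      }
    println(result) // prints "NullPointerException"
  }
}
```

If this reading is right, the sink is failing on a KCQL shape its own documentation recommends, which would make this a parser/validation bug rather than a user error.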
I can run the sink in Cache and Sorted Set modes, but not in Multiple Sorted Sets mode. When I try to set a prefix (INSERT INTO SENSOR- SELECT temperature, humidity FROM sensorsTopic PK sensorID STOREAS SortedSet(score=timestamp)), I get a different exception:
java.lang.AssertionError: assertion failed: They keyword PK (Primary Key) is not supported in Redis INSERT_SS mode. Please review the KCQL syntax of connector
at scala.Predef$.assert(Predef.scala:170)
at com.datamountaineer.streamreactor.connect.redis.sink.writer.RedisInsertSortedSet$$anonfun$3.apply(RedisInsertSortedSet.scala:50)
at com.datamountaineer.streamreactor.connect.redis.sink.writer.RedisInsertSortedSet$$anonfun$3.apply(RedisInsertSortedSet.scala:45)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
at com.datamountaineer.streamreactor.connect.redis.sink.writer.RedisInsertSortedSet.<init>(RedisInsertSortedSet.scala:45)
at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask$$anonfun$start$2.apply(RedisSinkTask.scala:77)
at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask$$anonfun$start$2.apply(RedisSinkTask.scala:75)
at scala.Option.map(Option.scala:146)
at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask.start(RedisSinkTask.scala:75)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:267)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:163)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
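Based on the assertion message and my reading of the Stream Reactor documentation, the two sorted-set modes appear to be distinguished by the KCQL shape. A sketch of both forms (topic and field names taken from the config below; this is my interpretation of the docs, not a verified fix):

```properties
# INSERT_SS mode: one target sorted set, INSERT INTO required, PK not allowed.
connect.redis.kcql=INSERT INTO sensorsSS SELECT temperature, humidity FROM sensorsTopic STOREAS SortedSet(score=timestamp)

# Multiple Sorted Sets mode: one sorted set per PK value, no INSERT INTO.
connect.redis.kcql=SELECT temperature, humidity FROM sensorsTopic PK sensorID STOREAS SortedSet(score=timestamp)
```

Note that the second form is exactly what triggers the NullPointerException above, which is why this looks like a bug in 1.0.0 rather than a configuration error.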
What is your Connect cluster configuration (connect-avro-distributed.properties)?
I’ve tried it in standalone mode.
What is your connector properties configuration (my-connector.properties)?
name=redis-sink
connector.class=com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector
tasks.max=1
topics=sensorsTopic
connect.redis.host=127.0.0.1
connect.redis.port=6379
connect.redis.kcql=SELECT temperature, humidity FROM sensorsTopic PK sensorID STOREAS SortedSet(score=timestamp)
Issue Analytics
- State:
- Created 5 years ago
- Comments:5 (2 by maintainers)
Top GitHub Comments
Alternatively, you can use the following dependencies in kafka-connect-redis/build.gradle and rebuild the shadowJar, where I used:
link4jVersion = "1.8.0"
I’m not sure if this is the desired solution, however, so I have not submitted this as a PR.
I would add this dependency to the main project Gradle build so that all sinks/sources pick it up.
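The suggestion above could look roughly like this in the root build.gradle. This is a sketch only: the actual dependency block is not shown in the thread, and the version variable name is copied verbatim from the comment:

```groovy
// Hypothetical root-project snippet: declare the version once so every
// sink/source subproject resolves the same dependency version.
subprojects {
    ext {
        link4jVersion = "1.8.0" // version variable quoted in the comment above
    }
}
```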