
ERROR Commit of WorkerSinkTask {...} offsets failed due to exception while flushing: (org.apache.kafka.connect.runtime.WorkerSinkTask:288)


I’m generating messages from Python to Kafka and running into issues with the Kudu connector. Specifically, I can see that the output from Python is being written properly by running ./kafka-avro-console-consumer; however, when I run the Kudu sink I receive the following error.
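For context on the "Subject not found" failure in the trace below: by default the Confluent Avro serializers register and look up the value schema of a topic under the subject "<topic>-value". A minimal sketch of that naming convention (the helper name here is hypothetical, just for illustration):

```python
# Default subject naming used by the Confluent Avro serializers:
# the value schema for a topic lives under "<topic>-value", and the
# key schema under "<topic>-key". If the registry the sink queries
# has no such subject, lookups fail with error code 40401.
def value_subject(topic):
    return topic + "-value"

print(value_subject("test.python.msg"))  # → test.python.msg-value
```

If the Python producer registered its schema elsewhere (or not at all), the sink's registry will be missing this subject.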

[2016-11-14 20:06:09,297] INFO KuduSinkConfig values:
	connect.kudu.export.route.query = INSERT INTO claims_test SELECT * FROM test.python.msg
	connect.kudu.max.retires = 20
	connect.kudu.sink.retry.interval = 60000
	connect.kudu.sink.batch.size = 1
	connect.kudu.sink.schema.registry.url = http://192.168.233.3:8081
	connect.kudu.master = 10.12.225.75:7051
	connect.kudu.sink.error.policy = throw
	connect.kudu.sink.bucket.size = 3
 (com.datamountaineer.streamreactor.connect.kudu.config.KuduSinkConfig:178)
[2016-11-14 20:06:09,297] INFO Connecting to Kudu Master at 10.12.225.75:7051 (com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter$:41)
[2016-11-14 20:06:09,301] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:769)
[2016-11-14 20:06:09,305] INFO Initialising Kudu writer (com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter:49)
[2016-11-14 20:06:09,314] INFO Found schemas for test.python.msg (com.datamountaineer.streamreactor.connect.schemas.SchemaRegistry$:55)
[2016-11-14 20:06:09,315] INFO Sink task WorkerSinkTask{id=test.python.msg-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:208)
[2016-11-14 20:06:09,419] INFO Discovered coordinator 192.168.233.3:9092 (id: 2147483646 rack: null) for group connect-test.python.msg. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:528)
[2016-11-14 20:06:09,421] INFO Revoking previously assigned partitions [] for group connect-test.python.msg (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:292)
[2016-11-14 20:06:09,422] INFO (Re-)joining group connect-test.python.msg (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2016-11-14 20:06:09,433] INFO Successfully joined group connect-test.python.msg with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:457)
[2016-11-14 20:06:09,433] INFO Setting newly assigned partitions [test.python.msg-0] for group connect-test.python.msg (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:231)
[2016-11-14 20:06:09,449] ERROR Commit of WorkerSinkTask{id=test.python.msg-0} offsets failed due to exception while flushing: (org.apache.kafka.connect.runtime.WorkerSinkTask:288)
java.lang.NullPointerException
	at scala.collection.convert.Wrappers$JListWrapper.iterator(Wrappers.scala:88)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:258)
	at scala.collection.TraversableLike$class.filter(TraversableLike.scala:270)
	at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter.flush(KuduWriter.scala:179)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask$$anonfun$flush$2.apply(KuduSinkTask.scala:88)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask$$anonfun$flush$2.apply(KuduSinkTask.scala:88)
	at scala.Option.foreach(Option.scala:257)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask.flush(KuduSinkTask.scala:88)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:286)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:432)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:146)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
[2016-11-14 20:06:09,449] ERROR Rewinding offsets to last committed offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:289)
[2016-11-14 20:06:09,450] ERROR Commit of WorkerSinkTask{id=test.python.msg-0} offsets threw an unexpected exception:  (org.apache.kafka.connect.runtime.WorkerSinkTask:180)
java.lang.NullPointerException
	at scala.collection.convert.Wrappers$JListWrapper.iterator(Wrappers.scala:88)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:258)
	at scala.collection.TraversableLike$class.filter(TraversableLike.scala:270)
	at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter.flush(KuduWriter.scala:179)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask$$anonfun$flush$2.apply(KuduSinkTask.scala:88)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask$$anonfun$flush$2.apply(KuduSinkTask.scala:88)
	at scala.Option.foreach(Option.scala:257)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask.flush(KuduSinkTask.scala:88)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:286)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:432)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:146)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
[2016-11-14 20:06:09,451] ERROR Task test.python.msg-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
	at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 1
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Subject not found. io.confluent.rest.exceptions.RestNotFoundException: Subject not found.
io.confluent.rest.exceptions.RestNotFoundException: Subject not found.
	at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:50)
	at io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource.lookUpSchemaUnderSubject(SubjectsResource.java:79)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:143)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
	at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:308)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:291)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1140)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:403)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:386)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:334)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:221)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
	at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
	at org.eclipse.jetty.server.Server.handle(Server.java:499)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
	at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
	at java.lang.Thread.run(Thread.java:745)
; error code: 40401
	at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:164)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:181)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:209)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:200)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:194)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersionFromRegistry(CachedSchemaRegistryClient.java:68)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersion(CachedSchemaRegistryClient.java:151)
	at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:149)
	at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:190)
	at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:130)
	at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:99)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
[2016-11-14 20:06:09,452] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:143)
[2016-11-14 20:06:09,454] INFO Stopping Kudu sink. (com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask:82)
[2016-11-14 20:06:09,454] INFO Closing Kudu Session and Client (com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter:162)
[2016-11-14 20:06:09,454] ERROR Task test.python.msg-0 threw an uncaught and unrecoverable exception during shutdown (org.apache.kafka.connect.runtime.WorkerTask:123)
java.lang.NullPointerException
	at scala.collection.convert.Wrappers$JListWrapper.iterator(Wrappers.scala:88)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:258)
	at scala.collection.TraversableLike$class.filter(TraversableLike.scala:270)
	at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter.flush(KuduWriter.scala:179)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter.close(KuduWriter.scala:163)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask$$anonfun$stop$1.apply(KuduSinkTask.scala:83)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask$$anonfun$stop$1.apply(KuduSinkTask.scala:83)
	at scala.Option.foreach(Option.scala:257)
	at com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkTask.stop(KuduSinkTask.scala:83)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.close(WorkerSinkTask.java:126)
	at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:121)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

kafka-avro-console-consumer output:

root@fae10f5c7182:/confluent-3.0.1/bin# ./kafka-avro-console-consumer --zookeeper 192.168.233.3:2181 --bootstrap-server 192.168.233.3:9092 --property schema.registry.url=http://192.168.233.3:8081 --topic test.python.msg
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/confluent-3.0.1/share/java/kafka-serde-tools/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/confluent-3.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/confluent-3.0.1/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
{"id":433,"random":"foo"}
{"id":196,"random":"foo"}
^CProcessed a total of 2 messages

kudu-sink.properties file:

name=test.python.msg
connector.class=com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkConnector
tasks.max=1
connect.kudu.export.route.query = INSERT INTO claims_test SELECT * FROM test.python.msg
connect.kudu.master=10.12.225.75:7051
topics=test.python.msg
connect.kudu.sink.error.policy=throw
connect.kudu.sink.bucket.size=3
connect.kudu.sink.schema.registry.url=http://192.168.233.3:8081

I can confirm the schema is being set properly. Also, the Kudu sink example works fine when I generate data from the CLI. What would be causing an issue with processing this stream?
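Both failures in the traces (error code 40401, "Subject not found", and later 40403, "Schema not found" for id 1) are plain REST lookups against the schema registry. A minimal sketch for checking those two endpoints by hand, assuming the registry address from the sink config (the helper name is hypothetical; the live queries are commented out since they need network access to the registry):

```python
import json
import urllib.request

# Registry address taken from connect.kudu.sink.schema.registry.url above.
REGISTRY = "http://192.168.233.3:8081"

def registry_check_urls(schema_id):
    """Endpoints behind the two failing lookups in the traces."""
    return [
        REGISTRY + "/subjects",                    # should list "test.python.msg-value"
        REGISTRY + "/schemas/ids/%d" % schema_id,  # the id the sink fails to fetch
    ]

for url in registry_check_urls(1):
    print(url)
    # Uncomment on a host that can reach the registry:
    # with urllib.request.urlopen(url) as resp:
    #     print(json.load(resp))
```

If the first call doesn't list the expected subject, or the second returns 404, the producer and the sink are effectively not sharing the same registered schema.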

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 9

Top GitHub Comments

1 reaction
andrewstevenson commented, Nov 14, 2016

Yes, try building from source.

0 reactions
briggleman commented, Nov 15, 2016

The flush error appears to be resolved. I'm still encountering errors with any data not sent through the CLI, but the flushing error is gone.

Trace, in case it's related:

[2016-11-15 18:37:16,423] INFO Connecting to Kudu Master at 10.12.225.75:7051 (com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter$:44)
[2016-11-15 18:37:16,425] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:769)
[2016-11-15 18:37:16,428] INFO Initialising Kudu writer (com.datamountaineer.streamreactor.connect.kudu.sink.KuduWriter:52)
[2016-11-15 18:37:16,456] INFO Sink task WorkerSinkTask{id=test.python.msg-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:208)
[2016-11-15 18:37:16,559] INFO Discovered coordinator 192.168.233.3:9092 (id: 2147483646 rack: null) for group connect-test.python.msg. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:528)
[2016-11-15 18:37:16,562] INFO Revoking previously assigned partitions [] for group connect-test.python.msg (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:292)
[2016-11-15 18:37:16,563] INFO (Re-)joining group connect-test.python.msg (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2016-11-15 18:37:16,566] INFO Successfully joined group connect-test.python.msg with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:457)
[2016-11-15 18:37:16,567] INFO Setting newly assigned partitions [test.python.msg-0] for group connect-test.python.msg (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:231)
[2016-11-15 18:37:16,616] INFO WorkerSinkTask{id=test.python.msg-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:261)
[2016-11-15 18:37:16,632] ERROR Task test.python.msg-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 1
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema not found io.confluent.rest.exceptions.RestNotFoundException: Schema not found
io.confluent.rest.exceptions.RestNotFoundException: Schema not found
    at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.schemaNotFoundException(Errors.java:58)
    at io.confluent.kafka.schemaregistry.rest.resources.SchemasResource.getSchema(SchemasResource.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
    at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:308)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:291)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1140)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:403)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:386)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:334)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:221)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:499)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
; error code: 40403
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:164)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:181)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:317)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:310)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:62)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndID(CachedSchemaRegistryClient.java:117)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:121)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:190)
    at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:130)
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:99)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-11-15 18:37:16,641] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:143)