
Kafka connect HBase: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.ByteStringer

See original GitHub issue
  1. Confluent version is 3.3.0 (stream-reactor-0.3.0-3.3.0.tar.gz); the remote HBase version is CDH 5.8.0.

  2. Added hbase-site.xml to the connector jar to fix the remote HBase connection issue, per https://github.com/Landoop/stream-reactor/issues/96#issuecomment-345184072:

jar -uvf kafka-connect-hbase-0.3.0-3.3.0-all.jar hbase-site.xml
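(For reference, a minimal hbase-site.xml for this purpose might look like the sketch below; the hostnames and port are placeholders, not values from this issue:)

```xml
<!-- hbase-site.xml bundled into the connector jar so the client can find the cluster.
     zk1/zk2/zk3 and the port are placeholder values. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```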
  3. Restart Confluent Connect:
bin/confluent stop connect
bin/confluent start connect
  4. Start the connector task (example: http://docs.datamountaineer.com/en/latest/hbase.html):
➜  bin/connect-cli create hbase-sink < conf/hbase-sink.properties
#Connector name=`hbase-sink`
name=person-hbase-test
connector.class=com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector
tasks.max=1
topics=hbase-topic
connect.hbase.column.family=d
connect.hbase.kcql=INSERT INTO person SELECT * FROM hbase-topic PK firstName, lastName
#task ids: 0
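As an aside on the KCQL above: the `PK firstName, lastName` clause tells the sink to build the HBase row key from those two fields. A hypothetical sketch of the idea follows; the delimiter and byte encoding the connector actually uses may differ, this only illustrates composing a row key from primary-key fields:

```java
import java.nio.charset.StandardCharsets;

// Illustrative only: derive a composite row key from the KCQL PK fields.
// The "." delimiter is an assumption, not the connector's documented behavior.
public class RowKeySketch {
    static byte[] rowKey(String... pkFields) {
        return String.join(".", pkFields).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] key = rowKey("Ada", "Lovelace");
        // Prints the key as text: Ada.Lovelace
        System.out.println(new String(key, StandardCharsets.UTF_8));
    }
}
```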
  5. Error log:
...skipping...
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2017-11-17 17:12:10,949] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2017-11-17 17:12:10,949] ERROR Task hbase-sink-test-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2017-11-17 17:12:10,950] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
[2017-11-17 17:12:10,950] INFO Stopping Hbase sink. (com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask:95)
[2017-11-17 17:12:10,956] ERROR Task hbase-sink-test-0 threw an uncaught and unrecoverable exception during shutdown (org.apache.kafka.connect.runtime.WorkerTask:127)
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.ByteStringer
        at org.apache.hadoop.hbase.protobuf.RequestConverter.buildRegionSpecifier(RequestConverter.java:1037)
        at org.apache.hadoop.hbase.protobuf.RequestConverter.buildGetRowOrBeforeRequest(RequestConverter.java:142)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1590)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1398)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:410)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:359)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1498)
        at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1534)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$close$1.apply(HbaseWriter.scala:111)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter$$anonfun$close$1.apply(HbaseWriter.scala:111)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
        at com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter.close(HbaseWriter.scala:111)
        at com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask$$anonfun$stop$1.apply(HbaseSinkTask.scala:96)
        at com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask$$anonfun$stop$1.apply(HbaseSinkTask.scala:96)
        at scala.Option.foreach(Option.scala:257)
        at com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask.stop(HbaseSinkTask.scala:96)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.close(WorkerSinkTask.java:131)
        at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:125)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:152)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
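A note on the wording of the error: "Could not initialize class" (rather than ClassNotFoundException) means the JVM found the class but its static initializer failed on an earlier use, commonly due to a dependency conflict such as an incompatible protobuf-java; the class is then marked erroneous and every later reference fails with NoClassDefFoundError. A minimal sketch of that behavior:

```java
// Demonstrates why the JVM reports "Could not initialize class":
// once a static initializer throws, the class is marked erroneous and
// every subsequent use fails with NoClassDefFoundError.
public class InitFailureDemo {
    static class Broken {
        static {
            // Simulates a failure inside static init, e.g. a protobuf version mismatch.
            if (true) throw new RuntimeException("simulated dependency failure");
        }
        static void use() {}
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2; i++) {
            try {
                Broken.use();
            } catch (Throwable t) {
                // First attempt: ExceptionInInitializerError (carries the real cause).
                // Second attempt: NoClassDefFoundError "Could not initialize class ..."
                System.out.println(t.getClass().getSimpleName());
            }
        }
    }
}
```

The original root cause (why the initializer failed) is only visible in the first failure, which is why scrolling back in the Connect log for an ExceptionInInitializerError is often necessary.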

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 12

Top GitHub Comments

1 reaction
artiship commented, Nov 22, 2017

@scriperdjq @stheppi I think it’s better to add an hbase.zookeeper.quorum property to the task configuration.

1 reaction
AirTobe91 commented, Nov 17, 2017

If I understand it right, it sounds like Connect doesn’t know about your kafka-connect-hbase.jar. When you call “confluent start”, it takes its config from $CONFLUENT_HOME/etc/schema-registry/connect-avro-distributed.properties. At the end of this file there is a plugin.path variable. It has to contain the $STREAMREACTOR_HOME/libs folder, or you can create a folder and copy your kafka-connect-hbaseXXX.jar into it.

E.g.

mkdir ~/kafkaConnectors
cp $STREAMREACTOR_HOME/libs/kafka-connect-hbase.jar ~/kafkaConnectors
vi $CONFLUENT_HOME/etc/schema-registry/connect-avro-distributed.properties
plugin.path=/home/youruser/kafkaConnectors
(I had to write /home/youruser because it hasn’t interpreted ~ correctly)
confluent stop
confluent start

If you now run “confluent log connect”, there should be something stated about your HBase connector. Example for my Cassandra connector:

[2017-11-17 11:13:19,819] INFO Added plugin ‘com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector’ (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-17 11:13:19,819] INFO Added plugin ‘com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector’ (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)

Now try “bin/connect-cli create hbase-sink < conf/hbase-sink.properties” again, and it should find your kafka-connect-hbase class.


Top Results From Across the Web

Could not initialize class org.apache.hadoop.hbase.util ...
Kafka connect HBase : java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.ByteStringer #321.
Hbase on HDP 2.5: java.lang.NoClassDefFoundError
NoClassDefFoundError : Could not initialize class org.apache.hadoop.hbase.util.ByteStringer. Labels: Labels: Apache HBase.
Could not initialize class org.apache.hadoop.hbase.shaded ...
NoClassDefFoundError - appears when jvm trying to load class, and that class is not found. Or trying to initialize incompatible version.
java.lang.NoClassDefFoundError:Could not initialize class org ...
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.ByteStringer. Version conflict: protobuf-java needs to be 3.0.0 ...
Re: flink 1.11 submitting a SQL task to a yarn session reports java.lang ...
NoClassDefFoundError: Could not initialize class ... After submitting SQL (using the hbase connector) to a yarn session, it reports at runtime: org.apache.hadoop.hbase.
