Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

When I try logging in to DataHub it says invalid username; I am unable to bootstrap DataHub

See original GitHub issue

When I run python mce_cli.py produce -d bootstrap_mce.dat, the following Docker logs appear:


schema-registry         | [2019-11-07 21:32:21,330] INFO Wait to catch up until the offset of the last message at 1 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema-registry         | [2019-11-07 21:32:21,500] INFO 192.168.128.1 - - [07/Nov/2019:21:32:20 +0000] "POST /subjects/MetadataChangeEvent-value/versions HTTP/1.1" 200 8  942 (io.confluent.rest-utils.requests)
broker                  | [2019-11-07 21:32:22,445] INFO Creating topic MetadataChangeEvent with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
zookeeper               | [2019-11-07 21:32:22,448] INFO Got user-level KeeperException when processing sessionid:0x100037ed7540004 type:setData cxid:0x2f9 zxid:0xa2 txntype:-1 reqpath:n/a Error Path:/config/topics/MetadataChangeEvent Error:KeeperErrorCode = NoNode for /config/topics/MetadataChangeEvent (org.apache.zookeeper.server.PrepRequestProcessor)
broker                  | [2019-11-07 21:32:22,455] INFO [KafkaApi-1] Auto creation of topic MetadataChangeEvent with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
broker                  | [2019-11-07 21:32:22,461] INFO [Controller id=1] New topics: [Set(MetadataChangeEvent)], deleted topics: [Set()], new partition replica assignment [Map(MetadataChangeEvent-0 -> Vector(1))] (kafka.controller.KafkaController)
broker                  | [2019-11-07 21:32:22,461] INFO [Controller id=1] New partition creation callback for MetadataChangeEvent-0 (kafka.controller.KafkaController)
broker                  | [2019-11-07 21:32:22,461] TRACE [Controller id=1 epoch=1] Changed partition MetadataChangeEvent-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
broker                  | [2019-11-07 21:32:22,462] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition MetadataChangeEvent-0 from NonExistentReplica to NewReplica (state.change.logger)
broker                  | [2019-11-07 21:32:22,474] TRACE [Controller id=1 epoch=1] Changed partition MetadataChangeEvent-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
broker                  | [2019-11-07 21:32:22,475] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request PartitionState(controllerEpoch=1, leader=1, leaderEpoch=0, isr=1, zkVersion=0, replicas=1, isNew=true) to broker 1 for partition MetadataChangeEvent-0 (state.change.logger)
broker                  | [2019-11-07 21:32:22,476] TRACE [Controller id=1 epoch=1] Sending UpdateMetadata request PartitionState(controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) to brokers Set(1) for partition MetadataChangeEvent-0 (state.change.logger)
broker                  | [2019-11-07 21:32:22,476] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition MetadataChangeEvent-0 from NewReplica to OnlineReplica (state.change.logger)
broker                  | [2019-11-07 21:32:22,479] TRACE [Broker id=1] Received LeaderAndIsr request PartitionState(controllerEpoch=1, leader=1, leaderEpoch=0, isr=1, zkVersion=0, replicas=1, isNew=true) correlation id 7 from controller 1 epoch 1 for partition MetadataChangeEvent-0 (state.change.logger)
broker                  | [2019-11-07 21:32:22,481] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 7 from controller 1 epoch 1 starting the become-leader transition for partition MetadataChangeEvent-0 (state.change.logger)
broker                  | [2019-11-07 21:32:22,482] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(MetadataChangeEvent-0) (kafka.server.ReplicaFetcherManager)
broker                  | [2019-11-07 21:32:22,497] INFO [Log partition=MetadataChangeEvent-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
broker                  | [2019-11-07 21:32:22,498] INFO [Log partition=MetadataChangeEvent-0, dir=/var/lib/kafka/data] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
broker                  | [2019-11-07 21:32:22,500] INFO Created log for partition MetadataChangeEvent-0 in /var/lib/kafka/data with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
broker                  | [2019-11-07 21:32:22,502] INFO [Partition MetadataChangeEvent-0 broker=1] No checkpointed highwatermark is found for partition MetadataChangeEvent-0 (kafka.cluster.Partition)
broker                  | [2019-11-07 21:32:22,502] INFO Replica loaded for partition MetadataChangeEvent-0 with initial high watermark 0 (kafka.cluster.Replica)
broker                  | [2019-11-07 21:32:22,503] INFO [Partition MetadataChangeEvent-0 broker=1] MetadataChangeEvent-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
broker                  | [2019-11-07 21:32:22,506] TRACE [Broker id=1] Stopped fetchers as part of become-leader request from controller 1 epoch 1 with correlation id 7 for partition MetadataChangeEvent-0 (last update controller epoch 1) (state.change.logger)
broker                  | [2019-11-07 21:32:22,506] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 7 from controller 1 epoch 1 for the become-leader transition for partition MetadataChangeEvent-0 (state.change.logger)
broker                  | [2019-11-07 21:32:22,510] TRACE [Controller id=1 epoch=1] Received response {error_code=0,partitions=[{topic=MetadataChangeEvent,partition=0,error_code=0}]} for request LEADER_AND_ISR with correlation id 7 sent to broker broker:29092 (id: 1 rack: null) (state.change.logger)
broker                  | [2019-11-07 21:32:22,516] TRACE [Broker id=1] Cached leader info PartitionState(controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition MetadataChangeEvent-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 8 (state.change.logger)
broker                  | [2019-11-07 21:32:22,518] TRACE [Controller id=1 epoch=1] Received response {error_code=0} for request UPDATE_METADATA with correlation id 8 sent to broker broker:29092 (id: 1 rack: null) (state.change.logger)



Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 18 (9 by maintainers)

Top GitHub Comments

3 reactions
keremsahin1 commented, Nov 23, 2019

Hi @NishitaNarvekar ,

Sorry for the late response. It looks like the Docker containers responsible for initializing/configuring Kafka and Elasticsearch are failing because of a timeout. These containers are kafka-setup and elasticsearch-setup, respectively. datahub-gms also fails to initialize because of a timeout. Below are the relevant lines showing these failures:

elasticsearch-setup | 2019/11/12 21:41:50 Timeout after 1m0s waiting on dependencies to become available: [http://elasticsearch:9200]

datahub-gms | 2019/11/12 21:41:56 Timeout after 1m0s waiting on dependencies to become available: [tcp://mysql:3306 tcp://broker:29092 http://elasticsearch:9200]

kafka-setup | [main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
kafka-setup | java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
kafka-setup |   at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
kafka-setup |   at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
kafka-setup |   at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
kafka-setup |   at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
kafka-setup |   at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
kafka-setup |   at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
kafka-setup | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

elasticsearch-setup exited with code 7

kafka-setup exited with code 1

Recently, I pushed a couple of fixes which increase these timeouts to compensate for environments with different CPU and memory resources. Please pull the latest changes and try again to see whether that solves these issues.
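If pulling the latest images is not immediately possible, a similar effect can be achieved by raising the wait timeout locally. The "Timeout after 1m0s waiting on dependencies" messages above are produced by dockerize, whose -timeout flag controls how long it waits. A minimal docker-compose override sketch follows; the service name matches the logs above, but the exact original command of the setup container (shown here as /create-indices.sh) is an assumption and should be copied from your own compose file:

```yaml
# docker-compose.override.yml -- hypothetical sketch, not the official fix.
# Raises dockerize's dependency-wait timeout from the default 1m to 4m for
# the elasticsearch-setup container; adapt the trailing command to whatever
# your version of the compose file actually runs after the wait.
version: "3.5"
services:
  elasticsearch-setup:
    command: >
      dockerize -wait http://elasticsearch:9200 -timeout 240s
      /create-indices.sh
```

Applied with `docker-compose up -d`, an override file like this is merged on top of the base compose file, so the image and other settings stay unchanged.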

0 reactions
NishitaNarvekar commented, Dec 20, 2019

Hi @keremsahin1, I just tried it again today with all the new updates and I was able to log in! Thank you!

Read more comments on GitHub >

Top Results From Across the Web

[DataHub] Cannot login with VIP Token. · Issue #1496 - GitHub
However, if you're still not able to login with datahub username, you might have missed bootstrapping the Data Hub yet.
Unable to login with the master password created for data hub ...
Instance is up and active. But when i tried to connect using the master password set for the first time. It is showing...
DataHub Releases
You can now sort Dataset field names alphabetically - this is super handy for finding columns within wide datasets that may not have...
Why am I getting an "Username or password is invalid ...
This is the error that appears when a user tries to login to WorldShare Management Services (WMS) with an incorrect user name or...
Known Issues - Cogent DataHub
If you run into trouble, re-run the DataHub v10 installer and then uninstall it before installing the earlier version. Media control does not...
