
MirrorMaker client authentication issue with external Kafka cluster

See original GitHub issue

Hi,

I’m trying to mirror data between two Kafka clusters running in two different OpenShift clusters. I’m using the “scram-sha-512” client authentication type, but I get the error below in the MirrorMaker logs.

To explain what I did:

Source cluster:

  • Copied the cluster-ca-cert secret from the target cluster and created a secret in the namespace where the source cluster runs.
  • Copied the Kafka user’s secret (the one that contains the password) from the target cluster and created a secret with the same password in the namespace where the source cluster runs.
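The copy steps above can be sketched like this (a rough sketch: the secret and namespace names come from the thread, the heredoc stands in for the output of `oc get secret ... -o yaml`, and the grep-based field stripping is illustrative):

```shell
# Stand-in for: oc get secret my-cluster-1-cluster-ca-cert -n <target-ns> -o yaml
cat > dumped.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-1-cluster-ca-cert
  namespace: target-ns
  resourceVersion: "12345"
  uid: 1234abcd
  creationTimestamp: "2020-06-30T13:13:01Z"
data:
  ca.crt: LS0tLS1CRUdJTg==
EOF

# Strip metadata fields tied to the original namespace/object
# before re-creating the secret elsewhere.
grep -v -E '^  (namespace|resourceVersion|uid|creationTimestamp):' dumped.yaml > portable.yaml

# Then, in the source cluster's namespace:
#   oc apply -n <source-namespace> -f portable.yaml
```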

MirrorMaker 2 config yaml:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker2
metadata:
  name: my-mirrormaker-1
spec:
  version: 2.4.0
  replicas: 3
  resources:
    requests:
      cpu: "500m"
      memory: 2Gi
    limits:
      cpu: "1.5"
      memory: 2Gi
  jvmOptions:
    "-Xmx": "1g"
    "-Xms": "1g"
  connectCluster: "target-kafka-cluster"
  clusters:
  - alias: "source-kafka-cluster"
    authentication:
      passwordSecret:
        password: password
        secretName: mirrormaker-user-1
      type: scram-sha-512
      username: mirrormaker-user-1
    bootstrapServers: my-cluster-1-kafka-bootstrap-prod-ABC.com:443
    tls:
      trustedCertificates:
      - certificate: ca.crt
        secretName: my-cluster-1-cluster-ca-cert
    config:    
      consumer.exclude.internal.topics: "false"  
  - alias: "target-kafka-cluster"
    authentication:
      passwordSecret:
        password: password
        secretName: target-mirrormaker-password-secret
      type: scram-sha-512
      username: mirrormaker-user-1
    bootstrapServers: my-cluster-1-kafka-bootstrap-prod-XYZ.com:443
    tls:
      trustedCertificates:
      - certificate: ca.crt
        secretName: target-my-cluster-1-cluster-ca-cert
    config:
      producer.max.request.size: 15728640
      config.storage.replication.factor: 3
      offset.storage.replication.factor: 3
      status.storage.replication.factor: 3
  mirrors:
  - sourceCluster: "source-kafka-cluster"
    targetCluster: "target-kafka-cluster"
    sourceConnector:
      config:
        replication.factor: 3
        offset-syncs.topic.replication.factor: 3
        sync.topic.acls.enabled: "false"
    heartbeatConnector:
      config:
        heartbeats.topic.replication.factor: 3
    checkpointConnector:
      config:
        tasks.max: 1
        checkpoints.topic.replication.factor: 3
      tasksMax: 1
      #pause: false  
    topics.blacklist: ""
    groups.blacklist: ""
    topicsPattern: ".*"
    groupsPattern: ".*"
    exclude.internal.topics: "false"
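For reference, in the Strimzi `passwordSecret` schema, `secretName` must match a Kubernetes Secret’s `metadata.name`, while `password` must match a key under that Secret’s `data`. A Secret matching the first cluster entry above would need to look roughly like this sketch (the encoded value is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mirrormaker-user-1    # referenced by passwordSecret.secretName
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm  # key referenced by passwordSecret.password
```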

Kafka cluster config YAML (just a snippet of the config):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster-1
spec:
  kafka:
    version: 2.4.0
    replicas: 3
    listeners:
      external:
        type: route
        authentication:
          type: scram-sha-512
      plain: {}
      tls: 
        authentication:
          type: scram-sha-512
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5

Error from logs:

2020-06-30 09:40:52,297 WARN [AdminClient clientId=adminclient-1] Unexpected error from my-cluster-1-kafka-bootstrap-prod-XYZ.com/10.xx.xx.xx; closing connection (org.apache.kafka.common.network.Selector) [kafka-admin-client-thread | adminclient-1]
java.lang.IllegalArgumentException: Empty key
	at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
	at org.apache.kafka.common.security.scram.internals.ScramFormatter.hi(ScramFormatter.java:76)
	at org.apache.kafka.common.security.scram.internals.ScramSaslClient.handleServerFirstMessage(ScramSaslClient.java:192)
	at org.apache.kafka.common.security.scram.internals.ScramSaslClient.evaluateChallenge(ScramSaslClient.java:133)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.lambda$createSaslToken$1(SaslClientAuthenticator.java:474)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:474)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:381)
	at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:263)
	at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196)
	at java.lang.Thread.run(Thread.java:748)
2020-06-30 09:40:52,298 WARN [AdminClient clientId=adminclient-1] Connection to node -1 (my-cluster-1-kafka-bootstrap-prod-XYZ.com/10.xx.xx.xx:443) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient) [kafka-admin-client-thread | adminclient-1]
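The `java.lang.IllegalArgumentException: Empty key` is thrown by `ScramFormatter.hi()` when the SCRAM client is handed a zero-length password, which typically means the referenced Secret key resolved to an empty value. One way to spot this locally (a sketch: the heredoc stands in for a Secret dumped with `oc get secret ... -o yaml`) is to list the keys under `data` and check that the key the CR references actually exists:

```shell
# Stand-in for a dumped Secret; note the data key is "password.txt"
cat > secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: mirrormaker-user-1
data:
  password.txt: MWYyZDFlMmU2N2Rm
EOF

# List the keys under .data -- if "password" is not among them,
# a CR entry `password: password` resolves to an empty value.
sed -n '/^data:/,$ s/^  \([^:]*\):.*/\1/p' secret.yaml
```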

Kindly help me understand what I am doing wrong.

Thanks, Eazhilan

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
scholzj commented, Jun 30, 2020

Right. Ok. The docs say that … but they then also say that the KafkaConnect YAML should be:

    passwordSecret:
      secretName: _<my-secret>_
      password: _<my-password.txt>_

and not just password: password.

Anyway, glad you solved it!

0 reactions
eazhilan-nagarajan commented, Jun 30, 2020

You got the issue right, @scholzj. Adding some details, thinking it might help somebody some day 😄.

I tried to follow the document from Strimzi:

    echo -n '1f2d1e2e67df' > password.txt
    oc create secret generic test-secret --from-file=password.txt

Upon extracting it as YAML, I found the structure below:

apiVersion: v1
data:
  password.txt: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
  creationTimestamp: "2020-06-30T13:13:01Z"
  name: gen-secret
  namespace: aaa-bbb
type: Opaque

Note: under data, the key is the name of the file itself, “password.txt”, which is not what I expected. So I copied the above structure and replaced “password.txt” (the filename) with “password” (the expected key name).
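The same result can be reached without hand-editing the manifest (a sketch; names and the password value are taken from the thread, and the `oc` one-liner is shown but not run here — `--from-literal=password=...` makes the data key “password” directly instead of a filename):

```shell
# Option 1 (sketch, not run here):
#   oc create secret generic mirrormaker-user-1 --from-literal=password='1f2d1e2e67df'
#
# Option 2: build the manifest by hand so the data key is explicit.
B64=$(printf '%s' '1f2d1e2e67df' | base64)
cat > mm-user-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mirrormaker-user-1
type: Opaque
data:
  password: ${B64}
EOF

# The encoded value matches the one in the extracted secret above.
grep 'password:' mm-user-secret.yaml
```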

After this, MirrorMaker was able to authenticate and it worked. My mistake, it was an old document I followed 😁

Thanks again for the quick help.


