Kafka - SSL handshake failed error on port 9093 from producer shell script call within Kafka broker pod
Hi Team,
I am testing an authentication use case over the TLS port 9093 with all the required certificates. However, I am getting an SSL handshake failure. Below are the steps I followed; I need help identifying the technical cause of the issue.
```
1) kubectl get secrets -n kafka-operator1 my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

2) kubectl get secrets -n kafka-operator1 my-cluster-cluster-ca -o jsonpath='{.data.ca\.key}' | base64 -d > ca.key

3) kubectl get secrets -n kafka-operator1 my-cluster-kafka-brokers -o jsonpath='{.data.my-cluster-kafka-0\.crt}' | base64 -d > ca_k0.crt

4) kubectl get secrets -n kafka-operator1 my-cluster-kafka-brokers -o jsonpath='{.data.my-cluster-kafka-1\.crt}' | base64 -d > ca_k1.crt

5) kubectl get secrets -n kafka-operator1 my-cluster-kafka-brokers -o jsonpath='{.data.my-cluster-kafka-2\.crt}' | base64 -d > ca_k2.crt

6) keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias my-cluster-kafka-0 -import -file ca_k0.crt

7) keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias my-cluster-kafka-1 -import -file ca_k1.crt

8) keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias my-cluster-kafka-2 -import -file ca_k2.crt

9) keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias ca -import -file ca.crt
```
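Before copying the truststore into the pod, it may be worth double-checking that all four certificates really ended up in it; a quick sanity check, using the same password as above:

```
# List the truststore contents; expect the three broker aliases plus the "ca" alias
keytool -list -keystore client.truststore.p12 -storetype PKCS12 -storepass 123456
```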
All of the commands in the numbered list above ultimately do one thing: they create client.truststore.p12, which I place under /tmp/ and then pass to kafka-console-producer.sh as below.
```
[kafka@my-cluster-kafka-0 kafka]$ ./bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local:9093 --topic happy-topic \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.type=PKCS12 \
  --producer-property ssl.truststore.password=123456 \
  --producer-property ssl.truststore.location=/tmp/prod/client.truststore.p12

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[2020-05-15 16:23:36,698] ERROR [Producer clientId=console-producer] Connection to node -1 (my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local/10.12.4.238:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-05-15 16:23:36,996] ERROR [Producer clientId=console-producer] Connection to node -1 (my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local/10.12.4.238:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-05-15 16:23:37,626] ERROR [Producer clientId=console-producer] Connection to node -1 (my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local/10.12.4.238:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
```
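As a side note, the same client settings can be kept in a properties file and passed once via `--producer.config`, which is a bit less error-prone than repeating `--producer-property`; a sketch reusing the truststore path from above (the properties file name is just an example):

```
# Hypothetical properties file for the console producer
cat > /tmp/prod/client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/prod/client.truststore.p12
ssl.truststore.password=123456
EOF

./bin/kafka-console-producer.sh \
  --broker-list my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local:9093 \
  --topic happy-topic \
  --producer.config /tmp/prod/client-ssl.properties
```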
Top GitHub Comments
You have 3 listeners configured in your broker: `plain`, `tls`, and `external`.

The `plain` and `tls` listeners are supposed to be used from inside the Kubernetes cluster, and you can use the bootstrap URL `my-cluster-kafka-bootstrap.kafka-operator1.svc:9093` (for `tls`) or `my-cluster-kafka-bootstrap.kafka-operator1.svc:9092` (for `plain`, without TLS encryption). So when you connect from inside Kubernetes, you should use one of these.

The `external` listener is designed to be used from outside of the Kubernetes cluster. To connect to it, you have to use the load balancer address (in your case an IP, since you use `type: loadbalancer`; in other cases it might be a DNS name).

So you should pick the right address depending on where the client is running. You can use the external listener from inside as well, but it is normally better to use the `tls` interface instead. The SAN list in the certificates also corresponds to this.

I'm sure the logs on the brokers or clients will show the username somewhere, but to be honest, I'm not sure off the top of my head where to look. You might also need to increase the log level (https://strimzi.io/docs/latest/full.html#con-kafka-logging-deployment-configuration-kafka) - I do not think it is printed by default.
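A quick way to check what the `tls` listener is actually serving (and whether its certificate's SAN list covers the bootstrap name being used) is to probe port 9093 with `openssl s_client`; the broker log can also be grepped for handshake errors. A sketch, assuming the namespace and pod names used above:

```
# Print the SAN list of the certificate presented on the internal TLS listener
openssl s_client -connect my-cluster-kafka-bootstrap.kafka-operator1.svc:9093 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

# Scan a broker's log for SSL/authentication errors (container name "kafka" assumed)
kubectl logs -n kafka-operator1 my-cluster-kafka-0 -c kafka | grep -iE 'ssl|authentication' | tail -n 20
```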
TBH, I don’t know. If you connect
This part enabled the TLS client authentication:
So if you want to start with server auth only (which is basically regular TLS encryption), you have to remove it and do steps 1 and 9 from your original post. Your client is already configured correctly for that.
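Concretely, with client authentication removed from the `tls` listener, the truststore only needs the cluster CA certificate (roughly the equivalent of steps 1 and 9 from the original post); a minimal sketch:

```
# Extract the cluster CA certificate and import just that into the truststore
kubectl get secrets -n kafka-operator1 my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias ca -import -file ca.crt
```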
For the client auth, as I said in my first answer, you will need to create the user and the keystore as I described there.
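For reference, once TLS client authentication is kept enabled, the client additionally needs a keystore built from the KafkaUser secret; a sketch assuming a KafkaUser named `my-user` in the same namespace (the user name and password here are placeholders, not taken from the original post):

```
# Extract the certificate and key issued for the KafkaUser
kubectl get secret -n kafka-operator1 my-user -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
kubectl get secret -n kafka-operator1 my-user -o jsonpath='{.data.user\.key}' | base64 -d > user.key

# Bundle them into a PKCS12 keystore for the producer
openssl pkcs12 -export -in user.crt -inkey user.key -name my-user -passout pass:123456 -out client.keystore.p12

# The producer then needs these properties in addition to the truststore ones:
#   ssl.keystore.type=PKCS12
#   ssl.keystore.location=/tmp/prod/client.keystore.p12
#   ssl.keystore.password=123456
```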