Pods are not initializing for Che and Keycloak on Kubernetes
Describe the bug
Keycloak and Che are not initializing; they remain in the following state:
NAME                        READY   STATUS     RESTARTS   AGE
che-6f5989dcc8-cs9k2        0/1     Init:0/2   0          37m
keycloak-6fdbdf45f6-mlmml   0/1     Init:0/1   0          37m
postgres-6c4d6c764c-m9qrn   1/1     Running    0          37m
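The Init:0/2 and Init:0/1 statuses mean that none of the pods' init containers has completed yet. Which init container is stuck can be listed with something like the following (pod name taken from the output above):

# List each init container of the stuck pod and whether it reports ready
$ kubectl get pod che-6f5989dcc8-cs9k2 --namespace dev \
    -o jsonpath='{range .status.initContainerStatuses[*]}{.name}: ready={.ready}{"\n"}{end}'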
Meanwhile, if I deploy single-user Che on the very same Kubernetes environment, it works perfectly!
Che version
- other: 6.19.0, 6.19.2, 6.19.5, 7.0.0-rc-3.x
Steps to reproduce
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create serviceaccount tiller --namespace kube-system
git clone https://github.com/eclipse/che.git
cd che/deploy/kubernetes/helm/che
kubectl apply -f ./tiller-rbac.yaml
helm init --service-account tiller --wait
helm dependency update
helm upgrade --install che --namespace dev --set cheImage=eclipse/che-server:<version> --set global.multiuser=true,global.cheDomain=<domain> ./
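Once the helm upgrade returns, the rollout can be followed until the pods leave the Init state, for example:

# Watch pod status in the target namespace; stuck pods stay in Init:0/N
$ kubectl get pods --namespace dev --watch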
Expected behavior
Pods should be initialized and che environment should be deployed
Runtime
- kubernetes (include output of kubectl version)
- Openshift (include output of oc version)
- minikube (include output of minikube version and kubectl version)
- minishift (include output of minishift version and oc version)
- docker-desktop + K8S (include output of docker version and kubectl version)
- other: (please specify)

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-21T13:09:06Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-21T13:07:26Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}
Screenshots
Screenshots of one of the pods:
Events of the entire namespace
$ kubectl get events --namespace dev --sort-by='.metadata.creationTimestamp'
LAST SEEN   TYPE      REASON   OBJECT   MESSAGE
15m Normal Scheduled pod/che-6f5989dcc8-5wq2d Successfully assigned dev/che-6f5989dcc8-5wq2d to x
15m Normal SuccessfulCreate replicaset/postgres-6c4d6c764c Created pod: postgres-6c4d6c764c-jlzpz
15m Normal Scheduled pod/postgres-6c4d6c764c-jlzpz Successfully assigned dev/postgres-6c4d6c764c-jlzpz to x
15m Normal ScalingReplicaSet deployment/keycloak Scaled up replica set keycloak-6fdbdf45f6 to 1
15m Normal CREATE ingress/keycloak-ingress Ingress dev/keycloak-ingress
15m Normal SuccessfulCreate replicaset/che-6f5989dcc8 Created pod: che-6f5989dcc8-5wq2d
15m Normal CREATE ingress/che-ingress Ingress dev/che-ingress
15m Normal CREATE ingress/che-ingress Ingress dev/che-ingress
15m Normal CREATE ingress/keycloak-ingress Ingress dev/keycloak-ingress
15m Normal SuccessfulCreate replicaset/keycloak-6fdbdf45f6 Created pod: keycloak-6fdbdf45f6-q8vd6
15m Normal ScalingReplicaSet deployment/che Scaled up replica set che-6f5989dcc8 to 1
15m Normal Scheduled pod/keycloak-6fdbdf45f6-q8vd6 Successfully assigned dev/keycloak-6fdbdf45f6-q8vd6 to x
61s Warning DNSConfigForming pod/keycloak-6fdbdf45f6-q8vd6 Search Line limits were exceeded, some search paths have been omitted, the applied search line is
15m Normal ScalingReplicaSet deployment/postgres Scaled up replica set postgres-6c4d6c764c to 1
15m Normal Pulled pod/keycloak-6fdbdf45f6-q8vd6 Container image "alpine:3.5" already present on machine
49s Warning DNSConfigForming pod/che-6f5989dcc8-5wq2d Search Line limits were exceeded, some search paths have been omitted, the applied search line is
16s Warning DNSConfigForming pod/postgres-6c4d6c764c-jlzpz Search Line limits were exceeded, some search paths have been omitted, the applied search line is
15m Normal Created pod/che-6f5989dcc8-5wq2d Created container wait-for-postgres
15m Normal Pulling pod/postgres-6c4d6c764c-jlzpz Pulling image "eclipse/che-postgres:nightly"
15m Normal Pulled pod/che-6f5989dcc8-5wq2d Container image "alpine:3.5" already present on machine
15m Normal Created pod/keycloak-6fdbdf45f6-q8vd6 Created container wait-for-postgres
15m Normal Started pod/keycloak-6fdbdf45f6-q8vd6 Started container wait-for-postgres
15m Normal Started pod/che-6f5989dcc8-5wq2d Started container wait-for-postgres
15m Normal Pulled pod/postgres-6c4d6c764c-jlzpz Successfully pulled image "eclipse/che-postgres:nightly"
15m Normal Started pod/postgres-6c4d6c764c-jlzpz Started container postgres
15m Normal Created pod/postgres-6c4d6c764c-jlzpz Created container postgres
15m Warning Unhealthy pod/postgres-6c4d6c764c-jlzpz Readiness probe failed: psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
14m Normal UPDATE ingress/keycloak-ingress Ingress dev/keycloak-ingress
14m Normal UPDATE ingress/keycloak-ingress Ingress dev/keycloak-ingress
14m Normal UPDATE ingress/che-ingress Ingress dev/che-ingress
14m Normal UPDATE ingress/che-ingress Ingress dev/che-ingress
Output of kubectl describe for one of the pods:
Name: keycloak-6fdbdf45f6-p2rnc
Namespace: dev
Node: x/x.x.x.x
Start Time: Sat, 13 Jul 2019 01:54:40 +1000
Labels: io.kompose.service=keycloak
pod-template-hash=6fdbdf45f6
Annotations: <none>
Status: Pending
IP: x.y.z.e
Controlled By: ReplicaSet/keycloak-6fdbdf45f6
Init Containers:
wait-for-postgres:
Container ID: containerd://edb9932d7cf3a56a3e85580f159a5c99ccd35dba81f098dc3b1a0c9a0092f267
Image: alpine:3.5
Image ID: docker.io/library/alpine@sha256:66952b313e51c3bd1987d7c4ddf5dba9bc0fb6e524eed2448fa660246b3e76ec
Port: <none>
Host Port: <none>
Command:
sh
-c
apk --no-cache add curl jq ; adresses_length=0; until [ $adresses_length -gt 0 ]; do echo waiting for postgres to be ready...; sleep 2; endpoints=`curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default/api/v1/namespaces/$POD_NAMESPACE/endpoints/postgres`; adresses_length=`echo $endpoints | jq -r ".subsets[]?.addresses // [] | length"`; done;
State: Running
Started: Sat, 13 Jul 2019 01:54:42 +1000
Ready: False
Restart Count: 0
Environment:
POD_NAMESPACE: dev (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from che-keycloak-token-k2rl9 (ro)
Containers:
keycloak:
Container ID:
Image: eclipse/che-keycloak:nightly
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
Command:
/scripts/kc_realm_user.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
memory: 1536Mi
Requests:
memory: 1Gi
Liveness: tcp-socket :8080 delay=5s timeout=30s period=5s #success=1 #failure=11
Readiness: http-get http://:8080/auth/js/keycloak.js delay=10s timeout=1s period=3s #success=1 #failure=10
Environment:
POSTGRES_PORT_5432_TCP_ADDR: postgres
POSTGRES_PORT_5432_TCP_PORT: 5432
POSTGRES_DATABASE: keycloak
POSTGRES_USER: keycloak
POSTGRES_PASSWORD: keycloak
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: admin
CHE_HOST: che-dev.192.168.99.100.nip.io
ROUTING_SUFFIX: 192.168.99.100.nip.io
NAMESPACE: dev
PROTOCOL: http
Mounts:
/opt/jboss/keycloak/standalone/data from keycloak-data (rw)
/opt/jboss/keycloak/standalone/log from keycloak-log (rw)
/var/run/secrets/kubernetes.io/serviceaccount from che-keycloak-token-k2rl9 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
keycloak-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: keycloak-data
ReadOnly: false
keycloak-log:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: keycloak-log
ReadOnly: false
che-keycloak-token-k2rl9:
Type: Secret (a volume populated by a Secret)
SecretName: che-keycloak-token-k2rl9
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m3s default-scheduler Successfully assigned dev/keycloak-6fdbdf45f6-p2rnc to x
Normal Pulled 2m2s kubelet, x Container image "alpine:3.5" already present on machine
Normal Created 2m2s kubelet, x Created container wait-for-postgres
Normal Started 2m1s kubelet, x Started container wait-for-postgres
Warning DNSConfigForming 53s (x5 over 2m2s) kubelet, x Search Line limits were exceeded, some search paths have been omitted, the applied search line is: xyx
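For readability, the wait-for-postgres command quoted verbatim in the describe output above expands to the loop below (line breaks and comments added here; the logic, including the original adresses_length spelling, is unchanged). Note that it starts with apk --no-cache add curl jq: if that install fails, jq is never available, adresses_length stays empty, the test errors out, and the until loop spins forever, which would match the stuck Init state.

# Expanded form of the init container command shown above
apk --no-cache add curl jq
adresses_length=0
until [ $adresses_length -gt 0 ]; do
  echo waiting for postgres to be ready...
  sleep 2
  # Ask the Kubernetes API for the endpoints of the postgres service
  endpoints=`curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    https://kubernetes.default/api/v1/namespaces/$POD_NAMESPACE/endpoints/postgres`
  # Count ready addresses; stays empty/0 while postgres is unready or jq is missing
  adresses_length=`echo $endpoints | jq -r ".subsets[]?.addresses // [] | length"`
done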
Validated postgres readiness:
$ kubectl get --namespace dev pod/postgres-6c4d6c764c-m9qrn -o jsonpath='{.spec.containers[0].readinessProbe.exec.command}'
[bash -c psql -h 127.0.0.1 -U ${POSTGRESQL_USER} -q -d $POSTGRESQL_DATABASE -c "SELECT 1"]
And also by running the same command from the pod's exec:
sh-4.2$ psql -h 127.0.0.1 -U pgche -q -d dbche -c 'SELECT 1'
?column?
----------
1
(1 row)
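Since postgres itself reports healthy, the endpoints object that the init containers poll can also be inspected directly; if it already lists a ready address, the block is inside the init containers rather than in postgres:

# Show the ready addresses of the postgres service endpoints
$ kubectl get endpoints postgres --namespace dev -o jsonpath='{.subsets[*].addresses[*].ip}'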
Installation method
- other: Helm

Helm version:
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

Installation command:
helm upgrade --install che --namespace dev --set cheImage=eclipse/che-server:<version> --set global.multiuser=true --set global.cheDomain=<domain> ./
The logs of both the che and keycloak pods contain only a single line, a note that the container is still waiting to be initialized, for example:
container "keycloak" in pod "keycloak-6fdbdf45f6-q8vd6" is waiting to start: PodInitializing
Environment
- other: Ubuntu Server 18.04
Additional context
Top GitHub Comments
The issue on Keycloak is due to the latest changes in the Keycloak image and in the Helm templates: the init container tries to install curl and jq, the install fails with an error, and so the subsequent commands never run. I didn't hit this earlier because I was using cached Helm templates with the latest image.
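One way to confirm this (a sketch, since the exact apk error isn't quoted above): the wait-for-postgres init container keeps running while it is stuck, so the package install can be retried by hand inside it and the error observed directly:

# Retry the failing package install inside the running init container
$ kubectl exec -it keycloak-6fdbdf45f6-q8vd6 --namespace dev \
    -c wait-for-postgres -- apk --no-cache add curl jq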
Closing in favor of #13870, which has a clear description of the main issue reported here. Thank you @SDAdham and see you on the other side 😃