
Kibana (7.17.0 using Helm chart 7.16.3) readinessProbe always fails


Chart version: 7.16.3

Kubernetes version:

1.18.14

Kubernetes provider:

Custom

Helm Version:

3.8.0

helm get release output

Output of helm get all kibana:
NAME: kibana
LAST DEPLOYED: Mon Feb 14 12:25:58 2022
NAMESPACE: com313
STATUS: deployed
REVISION: 4
TEST SUITE: None
USER-SUPPLIED VALUES:
extraEnvs:
- name: KIBANA_ENCRYPTION_KEY
  valueFrom:
    secretKeyRef:
      key: kibana.encryption.key
      name: kibana
- name: XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY
  valueFrom:
    secretKeyRef:
      key: xpack.encryptedsavedobjects.encryptionkey
      name: kibana
- name: ELASTICSEARCH_PASSWORD
  valueFrom:
    secretKeyRef:
      key: kibana_system.password
      name: elastic
imageTag: 7.17.0
kibanaConfig:
  kibana.yml: |
    elasticsearch.username: kibana_system

    xpack.security.enabled: true

    xpack.security.audit.enabled: true
    xpack.security.audit.appender:
      kind: console
      layout:
        kind: json

    xpack.security.secureCookies: true
    xpack.security.sameSiteCookies: "None"

    xpack.actions.enabledActionTypes: [ ".server-log" ]


    newsfeed.enabled: false
    telemetry.enabled: false
    telemetry.optIn: false

    elasticsearch.requestTimeout: 120000
    elasticsearch.pingTimeout: 120000
    elasticsearch.shardTimeout: 120000

    elasticsearch.sniffOnConnectionFault: false
    elasticsearch.sniffOnStart: false
    elasticsearch.sniffInterval: false

    monitoring.ui.container.elasticsearch.enabled: true

    server.publicBaseUrl: https://kibana...

    xpack.discoverEnhanced.actions:
      exploreDataInContextMenu.enabled: true
      exploreDataInChart.enabled: true
replicas: 1

COMPUTED VALUES:
affinity: {}
automountToken: true
elasticsearchHosts: http://elasticsearch-master:9200
elasticsearchURL: ""
envFrom: []
extraContainers: ""
extraEnvs:
- name: KIBANA_ENCRYPTION_KEY
  valueFrom:
    secretKeyRef:
      key: kibana.encryption.key
      name: kibana
- name: XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY
  valueFrom:
    secretKeyRef:
      key: xpack.encryptedsavedobjects.encryptionkey
      name: kibana
- name: ELASTICSEARCH_PASSWORD
  valueFrom:
    secretKeyRef:
      key: kibana_system.password
      name: elastic
extraInitContainers: ""
extraVolumeMounts: []
extraVolumes: []
fullnameOverride: ""
healthCheckPath: /app/kibana
hostAliases: []
httpPort: 5601
image: docker.elastic.co/kibana/kibana
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.17.0
ingress:
  annotations: {}
  className: nginx
  enabled: false
  hosts:
  - host: kibana-example.local
    paths:
    - path: /
  pathtype: ImplementationSpecific
kibanaConfig:
  kibana.yml: |
    elasticsearch.username: kibana_system

    xpack.security.enabled: true

    xpack.security.audit.enabled: true
    xpack.security.audit.appender:
      kind: console
      layout:
        kind: json

    xpack.security.secureCookies: true
    xpack.security.sameSiteCookies: "None"

    xpack.actions.enabledActionTypes: [ ".server-log" ]


    newsfeed.enabled: false
    telemetry.enabled: false
    telemetry.optIn: false

    elasticsearch.requestTimeout: 120000
    elasticsearch.pingTimeout: 120000
    elasticsearch.shardTimeout: 120000

    elasticsearch.sniffOnConnectionFault: false
    elasticsearch.sniffOnStart: false
    elasticsearch.sniffInterval: false

    monitoring.ui.container.elasticsearch.enabled: true

    server.publicBaseUrl: https://kibana...

    xpack.discoverEnhanced.actions:
      exploreDataInContextMenu.enabled: true
      exploreDataInChart.enabled: true
labels: {}
lifecycle: {}
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext:
  fsGroup: 1000
priorityClassName: ""
protocol: http
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 1000m
    memory: 2Gi
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
serverHost: 0.0.0.0
service:
  annotations: {}
  httpPortName: http
  labels: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  nodePort: ""
  port: 5601
  type: ClusterIP
serviceAccount: ""
tolerations: []
updateStrategy:
  type: Recreate

HOOKS:
MANIFEST:
---
# Source: kibana/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-kibana-config
  labels:
    app: kibana
    release: "kibana"
    heritage: Helm
data:
  kibana.yml: |
    elasticsearch.username: kibana_system

    xpack.security.enabled: true

    xpack.security.audit.enabled: true
    xpack.security.audit.appender:
      kind: console
      layout:
        kind: json

    xpack.security.secureCookies: true
    xpack.security.sameSiteCookies: "None"

    xpack.actions.enabledActionTypes: [ ".server-log" ]


    newsfeed.enabled: false
    telemetry.enabled: false
    telemetry.optIn: false

    elasticsearch.requestTimeout: 120000
    elasticsearch.pingTimeout: 120000
    elasticsearch.shardTimeout: 120000

    elasticsearch.sniffOnConnectionFault: false
    elasticsearch.sniffOnStart: false
    elasticsearch.sniffInterval: false

    monitoring.ui.container.elasticsearch.enabled: true

    server.publicBaseUrl: https://kibana...

    xpack.discoverEnhanced.actions:
      exploreDataInContextMenu.enabled: true
      exploreDataInChart.enabled: true
---
# Source: kibana/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: "kibana"
    heritage: Helm
spec:
  type: ClusterIP
  ports:
    - port: 5601
      protocol: TCP
      name: http
      targetPort: 5601
  selector:
    app: kibana
    release: "kibana"
---
# Source: kibana/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: "kibana"
    heritage: Helm
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: kibana
      release: "kibana"
  template:
    metadata:
      labels:
        app: kibana
        release: "kibana"
      annotations:

        configchecksum: f5a6b004648b7810c930ae6ffaaceb1a14388e3d0c63702dc61cb80d990100f
    spec:
      automountServiceAccountToken: true
      securityContext:
        fsGroup: 1000
      volumes:
        - name: kibanaconfig
          configMap:
            name: kibana-kibana-config
      containers:
      - name: kibana
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/kibana/kibana:7.17.0"
        imagePullPolicy: "IfNotPresent"
        env:
          - name: ELASTICSEARCH_HOSTS
            value: "http://elasticsearch-master:9200"
          - name: SERVER_HOST
            value: "0.0.0.0"
          - name: KIBANA_ENCRYPTION_KEY
            valueFrom:
              secretKeyRef:
                key: kibana.encryption.key
                name: kibana
          - name: XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY
            valueFrom:
              secretKeyRef:
                key: xpack.encryptedsavedobjects.encryptionkey
                name: kibana
          - name: ELASTICSEARCH_PASSWORD
            valueFrom:
              secretKeyRef:
                key: kibana_system.password
                name: elastic
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
          exec:
            command:
              - sh
              - -c
              - |
                #!/usr/bin/env bash -e

                # Disable nss cache to avoid filling dentry cache when calling curl
                # This is required with Kibana Docker using nss < 3.52
                export NSS_SDB_USE_CACHE=no

                http () {
                    local path="${1}"
                    set -- -XGET -s --fail -L

                    if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
                      set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
                    fi

                    STATUS=$(curl --output /dev/null --write-out "%{http_code}" -k "$@" "http://localhost:5601${path}")
                    if [[ "${STATUS}" -eq 200 ]]; then
                      exit 0
                    fi

                    echo "Error: Got HTTP code ${STATUS} but expected a 200"
                    exit 1
                }

                http "/app/kibana"
        ports:
        - containerPort: 5601
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 1000m
            memory: 2Gi
        volumeMounts:
          - name: kibanaconfig
            mountPath: /usr/share/kibana/config/kibana.yml
            subPath: kibana.yml

Describe the bug:

Kibana never becomes ready even though the application has started correctly within the container.

Steps to reproduce:

  1. Deploy Kibana 7.17.0 to Kubernetes using the 7.16.3 helm chart
  2. Notice that the Kibana Deployment/Pod never becomes “Ready”

Expected behavior:

Kibana Pod/Deployment becomes “Ready” once the application has started within the container.

Provide logs and/or server output (if relevant):

kubectl describe pod -l app=kibana gives:

Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Warning  Unhealthy  4m28s (x495 over 86m)  kubelet  Readiness probe failed: Error: Got HTTP code 200 but expected a 200

Importantly, the output also contains the error sh: 16: [[: not found, which I suspect corresponds to this line of the kibana deployment.yaml: the readinessProbe test (unnecessarily) uses a bash-only construct while running in a POSIX sh shell, even though the probe script's shebang suggests it is meant to run under bash.
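As an illustration (not taken from the chart itself), the failure mode can be reproduced in any strict POSIX shell such as dash, where [[ is not a builtin:

```shell
# Sketch of why the probe fails: the kubelet runs the script via `sh -c`,
# and in a plain POSIX sh (e.g. dash) the bash-only [[ ]] test does not exist.
STATUS=200

# Bash-only form; under plain sh this aborts with "[[: not found":
#   if [[ "${STATUS}" -eq 200 ]]; then ...

# Portable POSIX form, which behaves identically under sh, dash, and bash:
if [ "${STATUS}" -eq 200 ]; then
  READY=yes
else
  READY=no
fi
echo "probe result: ${READY}"
```

Because the probe's exit code is all the kubelet looks at, the "[[: not found" error both breaks the check and explains the confusing "Got HTTP code 200 but expected a 200" message: the script reaches the error path even when the status was 200.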

Any additional context:

The failing if statement within the readinessProbe could simply use POSIX-compatible syntax instead of anything bash-specific, e.g.

                    if [ "${STATUS}" -eq 200 ]; then
                      exit 0
                    fi

i.e. a single [ and ] for the test instead of [[ ... ]]
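For completeness, here is a minimal sketch of the chart's http() helper with that one change applied. It mirrors the probe script quoted above, but this is an illustration rather than the chart's actual fix; exit is replaced by return so the function can be exercised outside a probe, and the localhost:5601 URL is never contacted merely by defining the function:

```shell
# POSIX-compatible version of the probe helper. Only [ ] is used, so it
# runs correctly under plain /bin/sh as well as bash.
http () {
    # `local` is widely supported but not strictly POSIX; a plain
    # assignment suffices in this sketch.
    path="${1}"
    set -- -XGET -s --fail -L

    # Add basic-auth credentials only when both variables are set.
    if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
      set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
    fi

    STATUS=$(curl --output /dev/null --write-out "%{http_code}" -k "$@" "http://localhost:5601${path}")
    if [ "${STATUS}" -eq 200 ]; then
      return 0
    fi

    echo "Error: Got HTTP code ${STATUS} but expected a 200"
    return 1
}
```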

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction · anubisg1 commented, Aug 29, 2022

Is this really fixed?

I am using the new charts for 7.17.1 but it still fails the same way.

I'm simply running "helm upgrade --install kibana elastic/kibana -f values.yaml".

Could it be that the newly released charts are not yet available?

admin@azure:~/elastic-stack/kibana$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "elastic" chart repository
Update Complete. ⎈Happy Helming!⎈


helm upgrade --install kibana elastic/kibana -f values.yaml
Release "kibana" does not exist. Installing it now.
NAME: kibana
LAST DEPLOYED: Tue Mar  8 16:09:19 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None


admin@azure:~/elastic-stack/kibana$ kubectl describe pod kibana-kibana-868d9dbb49-xh6fb
Name:         kibana-kibana-868d9dbb49-xh6fb
Namespace:    default
Priority:     0
Node:         aks-nodepool1-27323697-vmss00000d/172.19.128.8
Start Time:   Tue, 08 Mar 2022 16:09:20 +0000
Labels:       app=kibana
              pod-template-hash=868d9dbb49
              release=kibana
Annotations:  configchecksum: 0ec36dffef3f41598b2f5f5128b90a2931f8e0ce12a0ba842af3f60ef3573dd
Status:       Running
IP:           10.244.4.4
IPs:
  IP:           10.244.4.4
Controlled By:  ReplicaSet/kibana-kibana-868d9dbb49
Containers:
  kibana:
    Container ID:   containerd://e717ea40431c4e86325ccf87bdf0b9066b095a5a3131c1f2032cd7cdfdb08c6d
    Image:          docker.elastic.co/kibana/kibana:7.17.1
    Image ID:       docker.elastic.co/kibana/kibana@sha256:e158b9f7d31f78ca3f36627ec9a0b38658093f6bbabd6c5416935361c7a8a710
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 08 Mar 2022 16:09:21 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Readiness:  exec [sh -c #!/usr/bin/env bash -e

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Kibana Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
    local path="${1}"
    set -- -XGET -s --fail -L

    if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
      set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
    fi

    STATUS=$(curl --output /dev/null --write-out "%{http_code}" -k "$@" "http://localhost:5601${path}")
    if [[ "${STATUS}" -eq 200 ]]; then
      exit 0
    fi

    echo "Error: Got HTTP code ${STATUS} but expected a 200"
    exit 1
}

http "/api/status"
] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      ELASTICSEARCH_HOSTS:     https://elasticsearch-master:9200
      SERVER_HOST:             0.0.0.0
      NODE_OPTIONS:            --max-old-space-size=1800
      ELASTICSEARCH_USERNAME:  <set to the key 'username' in secret 'elastic-credentials'>  Optional: false
      ELASTICSEARCH_PASSWORD:  <set to the key 'password' in secret 'elastic-credentials'>  Optional: false
      KIBANA_ENCRYPTION_KEY:   <set to the key 'encryptionkey' in secret 'kibana'>          Optional: false
    Mounts:
      /usr/share/kibana/config/certs from elastic-certificate-pem (rw)
      /usr/share/kibana/config/kibana.yml from kibanaconfig (rw,path="kibana.yml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqt44 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  elastic-certificate-pem:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-certificate-pem
    Optional:    false
  kibanaconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kibana-kibana-config
    Optional:  false
  kube-api-access-jqt44:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  45s   default-scheduler  Successfully assigned default/kibana-kibana-868d9dbb49-xh6fb to aks-nodepool1-27323697-vmss00000d
  Normal   Pulled     45s   kubelet            Container image "docker.elastic.co/kibana/kibana:7.17.1" already present on machine
  Normal   Created    45s   kubelet            Created container kibana
  Normal   Started    45s   kubelet            Started container kibana
  Warning  Unhealthy  26s   kubelet            Readiness probe failed: Error: Got HTTP code 503 but expected a 200
sh: 16: [[: not found
  Warning  Unhealthy  6s (x2 over 16s)  kubelet  Readiness probe failed: Error: Got HTTP code 200 but expected a 200
sh: 16: [[: not found
1 reaction · ebuildy commented, Feb 17, 2022

Fixed by https://github.com/elastic/helm-charts/commit/c5dbfd4dd88a6eb095aca3ee5fc493ff536d54c0 — will be available in Helm chart version 7.17.0.
