
[elasticsearch] Pod keeps restarting because of Readiness exit 1

See original GitHub issue

Chart version:

elasticsearch-7.4.1

Kubernetes version:

1.15.5-do.0

Kubernetes provider:

Digital Ocean K8s

Helm Version:

Not sure; whichever version the Terraform Helm provider is using.

helm get release output

REVISION: 1
RELEASED: Wed Nov  6 17:17:35 2019
CHART: elasticsearch-7.4.1
USER-SUPPLIED VALUES:
esJavaOpts: -Xmx1g -Xms1g
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
     - mountPath: /usr/share/elasticsearch/data
       name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
       runAsUser: 0
    volumeMounts:
     - mountPath: /usr/share/elasticsearch/data
       name: elasticsearch-master
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch
readinessProbe:
  initialDelaySeconds: 200
resources:
  limits:
    cpu: 1000m
    memory: 2G
  requests:
    cpu: 100m
    memory: 2G
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20G
  storageClassName: do-block-storage

COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
esConfig: {}
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: ""
extraEnvs: []
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
     - mountPath: /usr/share/elasticsearch/data
       name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
       runAsUser: 0
    volumeMounts:
     - mountPath: /usr/share/elasticsearch/data
       name: elasticsearch-master
extraVolumeMounts: ""
extraVolumes: ""
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.4.1
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: ""
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
priorityClassName: ""
protocol: http
rbac:
  create: false
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 200
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: 1000m
    memory: 2G
  requests:
    cpu: 100m
    memory: 2G
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  nodePort: ""
  transportPortName: transport
  type: ClusterIP
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20G
  storageClassName: do-block-storage

HOOKS:
---
# elasticsearch-tncit-test
apiVersion: v1
kind: Pod
metadata:
  name: "elasticsearch-tncit-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
  - name: "elasticsearch-myxea-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
    command:
      - "sh"
      - "-c"
      - |
        #!/usr/bin/env bash -e
        curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never
MANIFEST:

---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}
    
spec:
  type: ClusterIP
  selector:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  ports:
  - name: http
    protocol: TCP
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20G
      storageClassName: do-block-storage
      
  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        heritage: "Tiller"
        release: "elasticsearch"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:
        
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
        
      nodeSelector:
        doks.digitalocean.com/node-pool: elasticsearch
        
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      initContainers:
      - name: configure-sysctl
        securityContext:
          runAsUser: 0
          privileged: true
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          {}
          

      - name: create
        image: busybox:1.28
        command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
        securityContext:
          runAsUser: 0
        volumeMounts:
         - mountPath: /usr/share/elasticsearch/data
           name: elasticsearch-master
      - name: file-permissions
        image: busybox:1.28
        command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
        securityContext:
           runAsUser: 0
        volumeMounts:
         - mountPath: /usr/share/elasticsearch/data
           name: elasticsearch-master
      
      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
          
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 200
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
          
          exec:
            command:
              - sh
              - -c
              - |
                #!/usr/bin/env bash -e
                # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
                # Once it has started only check that the node itself is responding
                START_FILE=/tmp/.es_start_file

                http () {
                    local path="${1}"
                    if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                      BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                    else
                      BASIC_AUTH=''
                    fi
                    curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
                }

                if [ -f "${START_FILE}" ]; then
                    echo 'Elasticsearch is already running, lets check the node is healthy'
                    http "/"
                else
                    echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
                    if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                        touch ${START_FILE}
                        exit 0
                    else
                        echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                        exit 1
                    fi
                fi
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 2G
          requests:
            cpu: 100m
            memory: 2G
          
        env:
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: cluster.initial_master_nodes
            value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,"
          - name: discovery.seed_hosts
            value: "elasticsearch-master-headless"
          - name: cluster.name
            value: "elasticsearch"
          - name: network.host
            value: "0.0.0.0"
          - name: ES_JAVA_OPTS
            value: "-Xmx1g -Xms1g"
          - name: node.data
            value: "true"
          - name: node.ingest
            value: "true"
          - name: node.master
            value: "true"
        volumeMounts:
          - name: "elasticsearch-master"
            mountPath: /usr/share/elasticsearch/data

Describe the bug:

When the pod starts, I get an error that the readiness check failed, and then the pod exits. This seems to be because your template contains exit 1 when the readiness check fails, but why? This forces the pod to restart, so readiness never happens. The only way I could fix this was to increase the readiness initialDelaySeconds to 200 seconds.

Steps to reproduce:

  1. Deploy your chart with any config from examples
  2. Check the logs of the pod, which will show the readiness check failing over and over again.

Expected behavior:

AFAIK there’s no need for exit 1 in the readiness check; the pod should just wait, and the readiness probe should retry after some period of time.

Provide logs and/or server output (if relevant):

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
{"type": "server", "timestamp": "2019-11-06T14:53:04,902Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/disk/by-id/scsi-0DO_Volume_pvc-4d379d22-643b-49d4-becb-ab966508919e)]], net usable_space [16.6gb], net total_space [17.5gb], types [ext4]" }
{"type": "server", "timestamp": "2019-11-06T14:53:04,906Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "heap size [1015.6mb], compressed ordinary object pointers [true]" }
{"type": "server", "timestamp": "2019-11-06T14:53:04,910Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "node name [elasticsearch-master-0], node ID [Gi4HuSwOSxmwVip1J0jnoQ], cluster name [elasticsearch]" }
{"type": "server", "timestamp": "2019-11-06T14:53:04,912Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "version[7.4.1], pid[1], build[default/docker/fc0eeb6e2c25915d63d871d344e3d0b45ea0ea1e/2019-10-22T17:16:35.176724Z], OS[Linux/4.19.0-0.bpo.6-amd64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13/13+33]" }
{"type": "server", "timestamp": "2019-11-06T14:53:04,913Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
{"type": "server", "timestamp": "2019-11-06T14:53:04,913Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-8666035286165417967, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Des.cgroups.hierarchy.override=/, -Xmx1g, -Xms1g, -Dio.netty.allocator.type=unpooled, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,298Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [aggs-matrix-stats]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,298Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [analysis-common]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,299Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [data-frame]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,299Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [flattened]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,299Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [frozen-indices]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,299Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [ingest-common]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,300Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [ingest-geoip]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,300Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [ingest-user-agent]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,300Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [lang-expression]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,300Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [lang-mustache]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,300Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [lang-painless]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,300Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [mapper-extras]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,301Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [parent-join]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,301Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [percolator]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,301Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [rank-eval]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,301Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [reindex]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,301Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [repository-url]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,302Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [search-business-rules]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,302Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [spatial]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,302Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [transport-netty4]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,302Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [vectors]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,302Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-analytics]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,303Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ccr]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,303Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-core]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,303Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-deprecation]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,303Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-graph]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,304Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ilm]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,304Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-logstash]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,304Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-ml]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,305Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-monitoring]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,305Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-rollup]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,305Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-security]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,305Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-sql]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,305Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-voting-only-node]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,306Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2019-11-06T14:53:10,306Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "no plugins loaded" }
{"type": "server", "timestamp": "2019-11-06T14:53:21,719Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
{"type": "server", "timestamp": "2019-11-06T14:53:23,236Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "[controller/87] [Main.cc@110] controller (64 bit): Version 7.4.1 (Build 973380bdacc5e8) Copyright (c) 2019 Elasticsearch BV" }
{"type": "server", "timestamp": "2019-11-06T14:53:24,639Z", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
{"type": "server", "timestamp": "2019-11-06T14:53:25,800Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "using discovery type [zen] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2019-11-06T14:53:28,212Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "initialized" }
{"type": "server", "timestamp": "2019-11-06T14:53:28,213Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "starting ..." }
{"type": "server", "timestamp": "2019-11-06T14:53:28,542Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "publish_address {10.244.3.46:9300}, bound_addresses {[::]:9300}" }
{"type": "server", "timestamp": "2019-11-06T14:53:28,611Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
{"type": "server", "timestamp": "2019-11-06T14:53:28,947Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "setting initial configuration to VotingConfiguration{{bootstrap-placeholder}-elasticsearch-master-1,Gi4HuSwOSxmwVip1J0jnoQ,FntcS1PtQqmA07fuKdqf8Q}" }
{"type": "server", "timestamp": "2019-11-06T14:53:32,641Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "master node changed {previous [], current [{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}]}, added {{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true},{elasticsearch-master-2}{FntcS1PtQqmA07fuKdqf8Q}{_3ildTSjSseZnoJHRiKW4w}{10.244.5.59}{10.244.5.59:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true},}, term: 1, version: 13, reason: ApplyCommitRequest{term=1, version=13, sourceNode={elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}}" }
{"type": "server", "timestamp": "2019-11-06T14:53:32,720Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "refresh keys" }
{"type": "server", "timestamp": "2019-11-06T14:53:33,004Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "refreshed keys" }
{"type": "server", "timestamp": "2019-11-06T14:53:33,100Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "publish_address {10.244.3.46:9200}, bound_addresses {[::]:9200}", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:33,101Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "started", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:33,329Z", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "waiting for elected master node [{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:33,563Z", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "waiting for elected master node [{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:33,899Z", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "waiting for elected master node [{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:34,109Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "license [baaa7636-5068-489a-9caa-ded29f12af51] mode [basic] - valid", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:34,111Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "Active license is now [BASIC]; Security is disabled", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:34,121Z", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "waiting for elected master node [{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }
{"type": "server", "timestamp": "2019-11-06T14:53:34,798Z", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "waiting for elected master node [{elasticsearch-master-1}{onfBHnjRSeWSvE07eqoIxg}{MjCRAJr7Rbu-1rkKqXPJXQ}{10.244.4.85}{10.244.4.85:9300}{dilm}{ml.machine_memory=1999998976, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)", "cluster.uuid": "5qGtycsKQ_mCV1VNl4Fyng", "node.id": "Gi4HuSwOSxmwVip1J0jnoQ"  }

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

1 reaction · fatmcgav commented, Nov 12, 2019

@holms Thank you for the node info. Nothing jumps out as being a potential issue.

Are you able to attach the output of kubectl describe and kubectl logs for one of the elasticsearch-master pods? I’m curious to see how long it actually took to get to a healthy state…

For reference, I’ve just tried to deploy a new ES cluster on a GKE cluster with a node pool consisting of 3x 1vCPU/4GB nodes, and the cluster came up at the first time of asking with no pod restarts…

The timeline for one of my test pods looks like:

  • 13:38:27 - Pod Started
  • 13:38:34 - First log event from Elasticsearch
  • 13:39:12 - Cluster ready

During this time, the readinessProbe did fail twice, so it’s possible that the default config might be a bit “aggressive” for lower resourced deployments.

1 reaction · fatmcgav commented, Nov 11, 2019

@holms Thank you for raising this issue.

From what I can see in the logs above, the Elasticsearch service is taking more than 30 seconds to start up, which is why the pod is failing the readinessProbe check.

Would you be able to provide the output from kubectl describe nodes so we can get an idea of the make-up of the nodes being used in this cluster?

WRT the behaviour of issuing an exit 1 at the end of the readinessProbe command; this is intentional, as this is how Kubernetes knows whether a pod is ready to serve traffic or not. Further details on the expected behaviour can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes

Combined with the rest of the readinessProbe configuration, which defines the initial delay before the first check (initialDelaySeconds), how often to check (periodSeconds) and how many consecutive failures are tolerated (failureThreshold), this means that, with the default configuration, if the readinessProbe command hasn’t succeeded by the 3rd attempt, roughly 30 seconds into the pod’s lifetime, the pod will be terminated by Kubernetes and re-created.
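To make the "~30 seconds" concrete, here is a small arithmetic sketch of the worst-case readiness window. The numbers are assumed defaults for the chart at the time (check values.yaml for your chart version):

```shell
# Assumed chart defaults for the readinessProbe (illustrative only).
initialDelaySeconds=10
periodSeconds=10
failureThreshold=3
# The last tolerated probe fires at initialDelaySeconds plus
# (failureThreshold - 1) * periodSeconds after container start.
last_attempt=$((initialDelaySeconds + (failureThreshold - 1) * periodSeconds))
echo "last readiness attempt at ~${last_attempt}s"
# prints: last readiness attempt at ~30s
```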

In order to prevent this pod termination, there are a couple of options:

  1. Resolve the underlying issue that is causing Elasticsearch to take more than 30 seconds to become ready.
  2. Tweak the readinessProbe configuration to allow more time for Elasticsearch to become healthy: increase initialDelaySeconds for a longer initial delay, increase failureThreshold to allow more attempts, or increase periodSeconds to check less frequently.
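As a concrete illustration of option 2, a values.yaml override along these lines should work; the keys match the chart’s readinessProbe block, but the numbers below are examples only and should be tuned for your environment:

```yaml
# Illustrative readinessProbe override for the elasticsearch chart.
readinessProbe:
  initialDelaySeconds: 60   # wait a minute before the first check
  periodSeconds: 20         # then check every 20 seconds
  failureThreshold: 10      # tolerate up to 10 consecutive failures
  successThreshold: 3
  timeoutSeconds: 5
```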

It’s worth adding that the default values shipped with this chart should be considered just that: defaults, which may need to be tweaked to better fit the environment the chart is being deployed into.

Let me know if any of the above isn’t clear, or if we can assist you further.

Top Results From Across the Web

Kibana keeps restarting with no error - Elastic Discuss
After this message, the pod is being restarted. These restarts happen every ~3-5 minutes. Kibana has resource limits to 1GB and 1cpu and...

elasticsearch-data pods are in not ready state because of ...
elasticsearch-data pods are in not ready state because of readiness probe failed.

How can I diagnose why a k8s pod keeps restarting?
The first step (kubectl describe pod) you’ve already done. As a next step I suggest checking container logs: kubectl logs <pod_name>...

Guide: How to run ELK on Kubernetes with Helm - Coralogix
Note: If you see a status of ContainerCreating on the Pod, then that is likely because Docker is pulling the image still and...

Openshift 4.4 elasticsearch-cdm container status is ...
The kibana and the fluid pod appeared normally on the ... NAME READY STATUS RESTARTS AGE cluster-logging-operator-6f958b4644-rl66q 1/1 ...
