[elasticsearch] kubernetes-kind example: init container doesn't have permission to chown a volume

**Chart version:** elasticsearch-7.4.1

**Kubernetes version:** 1.15.5-do.0

**Kubernetes provider:** DigitalOcean Kubernetes (DOKS)

**Helm version:** Not sure; whichever version the Terraform Helm provider uses.

**`helm get release` output**

Note: this output already includes the fix. I've been running the extra init containers as root (`runAsUser: 0`), and that made it work.
```yaml
REVISION: 1
RELEASED: Wed Nov 6 17:17:35 2019
CHART: elasticsearch-7.4.1
USER-SUPPLIED VALUES:
esJavaOpts: -Xmx1g -Xms1g
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch
readinessProbe:
  initialDelaySeconds: 200
resources:
  limits:
    cpu: 1000m
    memory: 2G
  requests:
    cpu: 100m
    memory: 2G
volumeClaimTemplate:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20G
  storageClassName: do-block-storage
```
```yaml
COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
esConfig: {}
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: ""
extraEnvs: []
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
extraVolumeMounts: ""
extraVolumes: ""
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.4.1
ingress:
  annotations: {}
  enabled: false
  hosts:
    - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: ""
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
priorityClassName: ""
protocol: http
rbac:
  create: false
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 200
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: 1000m
    memory: 2G
  requests:
    cpu: 100m
    memory: 2G
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  nodePort: ""
  transportPortName: transport
  type: ClusterIP
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20G
  storageClassName: do-block-storage
```
```yaml
HOOKS:
---
# elasticsearch-tncit-test
apiVersion: v1
kind: Pod
metadata:
  name: "elasticsearch-tncit-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: "elasticsearch-myxea-test"
      image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
      command:
        - "sh"
        - "-c"
        - |
          #!/usr/bin/env bash -e
          curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never
```
```yaml
MANIFEST:
---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  ports:
    - name: http
      protocol: TCP
      port: 9200
    - name: transport
      protocol: TCP
      port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
    - name: http
      port: 9200
    - name: transport
      port: 9300
---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-master
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20G
        storageClassName: do-block-storage
  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        heritage: "Tiller"
        release: "elasticsearch"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      nodeSelector:
        doks.digitalocean.com/node-pool: elasticsearch
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - "elasticsearch-master"
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      initContainers:
        - name: configure-sysctl
          securityContext:
            runAsUser: 0
            privileged: true
          image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          resources:
            {}
        - name: create
          image: busybox:1.28
          command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
          securityContext:
            runAsUser: 0
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: elasticsearch-master
        - name: file-permissions
          image: busybox:1.28
          command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
          securityContext:
            runAsUser: 0
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: elasticsearch-master
      containers:
        - name: "elasticsearch"
          securityContext:
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            runAsUser: 1000
          image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
          imagePullPolicy: "IfNotPresent"
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 200
            periodSeconds: 10
            successThreshold: 3
            timeoutSeconds: 5
            exec:
              command:
                - sh
                - -c
                - |
                  #!/usr/bin/env bash -e
                  # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
                  # Once it has started only check that the node itself is responding
                  START_FILE=/tmp/.es_start_file
                  http () {
                    local path="${1}"
                    if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                      BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                    else
                      BASIC_AUTH=''
                    fi
                    curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
                  }
                  if [ -f "${START_FILE}" ]; then
                    echo 'Elasticsearch is already running, lets check the node is healthy'
                    http "/"
                  else
                    echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                    if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                      touch ${START_FILE}
                      exit 0
                    else
                      echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                      exit 1
                    fi
                  fi
          ports:
            - name: http
              containerPort: 9200
            - name: transport
              containerPort: 9300
          resources:
            limits:
              cpu: 1000m
              memory: 2G
            requests:
              cpu: 100m
              memory: 2G
          env:
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: cluster.initial_master_nodes
              value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,"
            - name: discovery.seed_hosts
              value: "elasticsearch-master-headless"
            - name: cluster.name
              value: "elasticsearch"
            - name: network.host
              value: "0.0.0.0"
            - name: ES_JAVA_OPTS
              value: "-Xmx1g -Xms1g"
            - name: node.data
              value: "true"
            - name: node.ingest
              value: "true"
            - name: node.master
              value: "true"
          volumeMounts:
            - name: "elasticsearch-master"
              mountPath: /usr/share/elasticsearch/data
```
**Describe the bug:**

I'm using DigitalOcean block-storage volumes here, and the `chown` command in the `file-permissions` init container fails with an "Operation not permitted" error. Once I launched the init container as root, it started working with no problem whatsoever.
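The failure comes down to Linux permission semantics rather than anything chart-specific: changing a file's owner requires the `CAP_CHOWN` capability, which root normally holds and which the chart's default security context (`runAsUser: 1000`, all capabilities dropped) does not. A minimal sketch of that rule (the helper name is mine, not part of the chart or of Kubernetes):

```python
# Sketch of why `chown -R 1000:1000 ...` fails inside the init container.
# Changing a file's owner needs CAP_CHOWN; in a plain container that
# capability is effectively tied to running as root (UID 0).
def chown_would_succeed(effective_uid: int) -> bool:
    """Rough model: only root (UID 0) may chown to an arbitrary owner."""
    return effective_uid == 0

print(chown_would_succeed(1000))  # False: the chart default, fails with EPERM
print(chown_would_succeed(0))     # True: the runAsUser: 0 workaround
```

This is why `runAsUser: 0` on the init container (rather than the main Elasticsearch container, which stays non-root) is enough to make the `chown` pass.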
**Steps to reproduce:**

1. Deploy the chart using the example values: https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/kubernetes-kind/values.yaml
2. `kubectl describe` shows that the init container failed.

**Expected behavior:**

The init container exits with code 0, with no problems.
**Provide logs and/or server output (if relevant):**

From `kubectl describe` we can see that the `file-permissions` init container failed:

```
file-permissions:
  Container ID:   docker://202ad2e7783984b1243f900a1add839e10d8b0687a4717015b548bb632158a67
  Image:          busybox:1.28
  Image ID:       docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
  Port:           <none>
  Host Port:      <none>
  Command:
    chown
    -R
    1000:1000
    /usr/share/elasticsearch/
  State:          Waiting
    Reason:       CrashLoopBackOff
  Last State:     Terminated
    Reason:       Error
    Exit Code:    1
    Started:      Wed, 06 Nov 2019 04:27:43 +0200
    Finished:     Wed, 06 Nov 2019 04:27:43 +0200
  Ready:          False
  Restart Count:  6
  Environment:    <none>
  Mounts:
    /usr/share/elasticsearch/data from elasticsearch-master (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-nwlnz (ro)
```

```
holms@debian ~/D/c/s/b/t/s/p/charts> kubectl logs elasticsearch-master-0 -c file-permissions --namespace elasticsearch
chmod: /usr/share/elasticsearch/: Operation not permitted
chmod: /usr/share/elasticsearch/: Operation not permitted
```
So starting this init container as root (`runAsUser: 0`) solved the issue:

```yaml
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 20G

extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
```
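To confirm the workaround took effect, the init container's exit code and the resulting ownership can be checked against the running cluster. The commands below are a sketch, assuming the pod name `elasticsearch-master-0` and namespace `elasticsearch` from this report:

```shell
# Exit code of the file-permissions init container (0 once the chown succeeds)
kubectl get pod elasticsearch-master-0 -n elasticsearch \
  -o jsonpath='{.status.initContainerStatuses[?(@.name=="file-permissions")].state.terminated.exitCode}'

# Numeric owner of the data directory, expected 1000:1000 after the fix
kubectl exec elasticsearch-master-0 -n elasticsearch -- \
  ls -ldn /usr/share/elasticsearch/data
```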
**Issue Analytics**

- State:
- Created: 4 years ago
- Reactions: 1
- Comments: 12 (5 by maintainers)
**Top GitHub Comments**

Sorry for the delay @jmlrt. That's what I've ended up using 😃 `runAsUser` is required. I can actually change the issue title if you want, because it's really all about `runAsUser` and chown'ing here. These values work for DOKS 😃

Just hit this issue today using a Rancher-deployed k8s. The workaround was to use `runAsUser: 0` on the init containers too.

**Chart version:** elasticsearch-7.4.1

**Kubernetes version:** server: 1.15.5, client: 1.16.1

**Kubernetes provider:** Rancher Kubernetes Engine, on bare-metal servers

**Helm version:** I don't think Rancher uses Helm directly to deploy services.
My small brick in the wall: additionally, regarding the volume to mount, I had to use `{{ template "uname" . }}` as the volume name. This might be specific to Rancher's Helm app deployment, but I'm not sure, because `{{ template "uname" . }}` appears in the `statefulset.yaml` file. I'm new to Kubernetes, so sorry if this is dumb 😃. Here is the extraInitContainers in the values.yaml I used:

PS: I hesitated to create another issue, but I don't feel it's necessary; tell me otherwise.
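A sketch of what such an override might look like, reusing the `file-permissions` init container from the workaround above with the templated volume name (this exact snippet is an illustration, not the commenter's original):

```yaml
extraInitContainers: |
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        # Templated so it always matches the chart's volumeClaimTemplate
        # name (e.g. "elasticsearch-master") instead of hardcoding it.
        name: '{{ template "uname" . }}'
```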