[Elasticsearch] User not created when enabling security
See original GitHub issue.

Chart version: v7.3.2
Kubernetes version: 1.17.4
Kubernetes provider: Custom
Helm Version: 3.1.2
`helm get release` output:

```yaml
NAME: sec-elasticsearch-group-master
LAST DEPLOYED: Mon Apr 20 10:13:41 2020
NAMESPACE: environment-1-platform
STATUS: deployed
REVISION: 4
USER-SUPPLIED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: sec-easier-elasticsearch
esConfig:
  elasticsearch.yml: |
    path.repo: "/backup/easier-data/" # to allow data backup
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elk-cert.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elk-cert.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elk-cert.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elk-cert.p12
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: ""
extraEnvs:
- name: ELASTIC_PASSWORD
  valueFrom:
    secretKeyRef:
      key: password
      name: elastic-credentials
- name: ELASTIC_USERNAME
  valueFrom:
    secretKeyRef:
      key: username
      name: elastic-credentials
extraInitContainers: ""
extraVolumeMounts: |
  - name: backup-volume
    mountPath: /backup/easier-data/
extraVolumes: |
  - name: backup-volume
    persistentVolumeClaim:
      claimName: pvc-elasticsearch-backup # existing PVC to be shared by all nodes to make the backup
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.3.2
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: ""
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 1
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
priorityClassName: ""
protocol: https
rbac:
  create: false
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 2
resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 100m
    memory: 2Gi
roles:
  data: "false"
  ingest: "false"
  master: "true"
schedulerName: ""
secretMounts:
- name: elastic-certificates
  path: /usr/share/elasticsearch/config/certs
  secretName: elk-certificates-p12
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  nodePort: null
  transportPortName: transport
  type: ClusterIP
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 300Gi
```
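For reference, the `elastic-credentials` secret referenced by the `secretKeyRef` entries above could be created along these lines (a sketch; the password is a placeholder). Note that the official Elasticsearch Docker image only uses `ELASTIC_PASSWORD` to seed the bootstrap password of the built-in `elastic` superuser; it does not create an arbitrary user such as `easier`:

```shell
# Sketch: create the secret referenced by the extraEnvs secretKeyRef entries.
# Key names (username/password) match the values above; the password is a
# placeholder. ELASTIC_PASSWORD only seeds the bootstrap password of the
# built-in "elastic" user, so a custom username is not created automatically.
kubectl create secret generic elastic-credentials \
  --namespace environment-1-platform \
  --from-literal=username=elastic \
  --from-literal=password='CHANGE-ME'
```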
Describe the bug:
Cluster creation starts but never completes. Looking at the logs, there seems to be a problem retrieving the user ("easier" in my case):
"level": "ERROR", "component": "o.e.x.s.a.e.NativeUsersStore", "cluster.name": "sec-easier-elasticsearch", "node.name": "sec-easier-elasticsearch-master-1", "cluster.uuid": "8xY15pq0SGiSdiF366INZA", "node.id": "AaszF7y3TdWX7Lv-1ESKzw", "message": "security index is unavailable. short circuiting retrieval of user [easier]" }
This happens even though my values.yaml contains the lines for the user/password and the secrets exist in my cluster:
```yaml
extraEnvs:
- name: ELASTIC_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: password
- name: ELASTIC_USERNAME
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: username
```
It only works if I connect to the cluster manually, node by node, and create the user/password by hand:

```shell
bin/x-pack/users useradd admin -r superuser -p CHANGE-ME
```
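As a side note, in recent versions the bundled tool lives at `bin/elasticsearch-users`, and it writes to the node-local file realm, which is why it has to be run on every node separately (a sketch; the pod names are assumptions based on this deployment):

```shell
# Sketch: elasticsearch-users writes to the node-local file realm, so the
# user has to be created inside every pod of the StatefulSet.
for pod in sec-easier-elasticsearch-master-0 sec-easier-elasticsearch-master-1; do
  kubectl exec "$pod" -- \
    bin/elasticsearch-users useradd admin -r superuser -p CHANGE-ME
done
```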
Steps to reproduce:
1. `helm install sec-elasticsearch-group-master -v values-master-secure.yaml ./`
2. Two nodes are created.
3. Look for the error in the logs of one of the nodes.
Expected behavior:
The cluster comes up with the user and password taken from values.yaml and the referenced secrets.
Issue Analytics
- Created: 3 years ago
- Comments: 10
Top GitHub Comments
The chart should probably be patched to handle this. In the short term, it would be good to at least add a note to NOTES.txt letting people know they need to do this; longer term, the chart (statefulset.yaml?) should be patched to do it automatically.
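Until the chart handles it, one possible post-install step is to create the custom user through the security API once the cluster is reachable, authenticating with the bootstrapped `elastic` user (a sketch; the pod name, password variable, and role are assumptions):

```shell
# Sketch: create the custom user via the _security API instead of the
# node-local file realm; this only needs to run once per cluster.
kubectl exec sec-easier-elasticsearch-master-0 -- \
  curl -sk -u "elastic:${ELASTIC_PASSWORD}" \
  -X POST "https://localhost:9200/_security/user/easier" \
  -H 'Content-Type: application/json' \
  -d '{"password": "CHANGE-ME", "roles": ["superuser"]}'
```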
Have the same problem…