Missing option to disable the daemonset in metricbeat
Chart version: 7.8.0
Kubernetes version: 1.15.9
Kubernetes provider: on-premise
Helm version: 2.12.2
Output of `helm get release`:
```yaml
REVISION: 1
RELEASED: Tue Jun 30 20:48:16 2020
CHART: elasticsearch-7.3.2
USER-SUPPLIED VALUES:
clusterHealthCheckParams: wait_for_status=yellow&timeout=10s
minimumMasterNodes: 1
nodeSelector:
  kubernetes.io/hostname: demoworker2test
podSecurityPolicy:
  name: 50-rootfilesystem
rbac:
  create: true
  serviceAccountName: elasticsearch-master
readinessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 20
replicas: 1
resources:
  limits:
    memory: 2.5Gi
  requests:
    memory: 1Gi
sysctlInitContainer:
  enabled: false

COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=yellow&timeout=10s
clusterName: elasticsearch
esConfig: {}
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: ""
extraEnvs: []
extraInitContainers: ""
extraVolumeMounts: ""
extraVolumes: ""
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.3.2
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: ""
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 1
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector:
  kubernetes.io/hostname: demoworker2test
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: 50-rootfilesystem
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
priorityClassName: ""
protocol: http
rbac:
  create: true
  serviceAccountName: elasticsearch-master
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 20
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 2.5Gi
  requests:
    cpu: 100m
    memory: 1Gi
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  nodePort: null
  transportPortName: transport
  type: ClusterIP
sidecarResources: {}
sysctlInitContainer:
  enabled: false
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi

HOOKS:
---
# elasticsearch-evlyd-test
apiVersion: v1
kind: Pod
metadata:
  name: "elasticsearch-evlyd-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
  - name: "elasticsearch-ldhnq-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.2"
    command:
    - "sh"
    - "-c"
    - |
      #!/usr/bin/env bash -e
      curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=yellow&timeout=10s'
  restartPolicy: Never
MANIFEST:
---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "elasticsearch-master"
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.2"
    app: "elasticsearch-master"
---
# Source: elasticsearch/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: "elasticsearch-master"
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.2"
    app: "elasticsearch-master"
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - "50-rootfilesystem"
  verbs:
  - use
---
# Source: elasticsearch/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "elasticsearch-master"
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.2"
    app: "elasticsearch-master"
subjects:
- kind: ServiceAccount
  name: "elasticsearch-master"
  namespace: "elastic"
roleRef:
  kind: Role
  name: "elasticsearch-master"
  apiGroup: rbac.authorization.k8s.io
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  ports:
  - name: http
    protocol: TCP
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 30Gi
  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        heritage: "Tiller"
        release: "elasticsearch"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: "elasticsearch-master"
      nodeSelector:
        kubernetes.io/hostname: demoworker2test
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      initContainers:
      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.2"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 20
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=yellow&timeout=10s' )
              # Once it has started only check that the node itself is responding
              START_FILE=/tmp/.es_start_file
              http () {
                local path="${1}"
                if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                  BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                else
                  BASIC_AUTH=''
                fi
                curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
              }
              if [ -f "${START_FILE}" ]; then
                echo 'Elasticsearch is already running, lets check the node is healthy'
                http "/"
              else
                echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=yellow&timeout=10s" )'
                if http "/_cluster/health?wait_for_status=yellow&timeout=10s" ; then
                  touch ${START_FILE}
                  exit 0
                else
                  echo 'Cluster is not yet ready (request params: "wait_for_status=yellow&timeout=10s" )'
                  exit 1
                fi
              fi
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 2.5Gi
          requests:
            cpu: 100m
            memory: 1Gi
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,"
        - name: discovery.seed_hosts
          value: "elasticsearch-master-headless"
        - name: cluster.name
          value: "elasticsearch"
        - name: network.host
          value: "0.0.0.0"
        - name: ES_JAVA_OPTS
          value: "-Xmx1g -Xms1g"
        - name: node.data
          value: "true"
        - name: node.ingest
          value: "true"
        - name: node.master
          value: "true"
        volumeMounts:
        - name: "elasticsearch-master"
          mountPath: /usr/share/elasticsearch/data
```
Describe the bug: Earlier we used the metricbeat chart from the stable repository, https://github.com/helm/charts/tree/master/stable/metricbeat. That chart let us enable or disable the daemonset and the deployment independently. Since it is now deprecated, we are trying to migrate to this elastic chart, but there appears to be no way to disable the daemonset here. To quote https://www.elastic.co/guide/en/beats/metricbeat/7.x/metricbeat-module-kubernetes.html:
> Some of the previous components are running on each of the Kubernetes nodes (like kubelet or proxy) while others provide a single cluster-wide endpoint. This is important to determine the optimal configuration and running strategy for the different metricsets included in the module.
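To make that distinction concrete, below is a rough sketch of how the kubernetes module is usually split by topology. The metricset names come from the Metricbeat documentation; the kubelet host/port and the kube-state-metrics service name are assumptions that depend on the cluster:

```yaml
# Sketch only -- the hosts below are assumptions, not values from this chart.
metricbeat.modules:
  # Per-node metricsets read from the local kubelet, so they need an
  # instance on every node: this is what the daemonset covers.
  - module: kubernetes
    metricsets: ["node", "pod", "container", "volume"]
    hosts: ["https://${NODE_NAME}:10250"]
    period: 10s
  # Cluster-wide metricsets scrape a single kube-state-metrics endpoint,
  # so one replica (the deployment) is enough; running them from every
  # node would only duplicate the data.
  - module: kubernetes
    metricsets: ["state_pod", "state_deployment"]
    hosts: ["kube-state-metrics:8080"]
    period: 10s
  # Events also come from a single cluster-wide source (the apiserver),
  # so they belong in the deployment as well.
  - module: kubernetes
    metricsets: ["event"]
```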
We use metricbeat only for k8s events, which seems difficult to achieve in the current state of this chart; the old chart handled this with simple toggles (sketched below). Is there a reason both the daemonset and the deployment are now mandatory?
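For comparison, the deprecated stable chart exposed per-workload toggles roughly like this. The key names are recalled from that chart's values.yaml and should be treated as an assumption rather than verified syntax:

```yaml
# stable/metricbeat-style values -- assumed key names, for illustration only.
daemonset:
  enabled: false   # skip the per-node agents entirely
deployment:
  enabled: true    # keep a single replica for cluster-wide data such as events
```

An equivalent pair of flags in this chart would cover the events-only use case described above.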
Top GitHub Comments

> I've just merged #715 which includes a "fix" for the `namespace` issue.

> @fatmcgav sure, I'll give it a try 😃