Kibana 7.7.0 FATAL Error
Chart version: 7.7.0
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-12T20:52:22Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes provider: Bare metal
Helm Version:
Client: &version.Version{SemVer:"v2.16.7", GitCommit:"5f2584fd3d35552c4af26036f0c464191287986b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.7", GitCommit:"5f2584fd3d35552c4af26036f0c464191287986b", GitTreeState:"clean"}
Output of helm get release:
REVISION: 1
RELEASED: Fri Jun 5 13:29:03 2020
CHART: elasticsearch-7.7.0
USER-SUPPLIED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
envFrom: []
esConfig: {}
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: ""
extraContainers: []
extraEnvs: []
extraInitContainers: []
extraVolumeMounts: []
extraVolumes: []
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.7.0
ingress:
  annotations: {}
  enabled: false
  hosts:
    - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: elasticsearch-master
masterTerminationFix: false
maxUnavailable: 1
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: false
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
priorityClassName: ""
protocol: http
rbac:
  create: false
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 1000m
    memory: 2Gi
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  labels: {}
  labelsHeadless: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  nodePort: ""
  transportPortName: transport
  type: ClusterIP
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local
COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
envFrom: []
esConfig: {}
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: ""
extraContainers: []
extraEnvs: []
extraInitContainers: []
extraVolumeMounts: []
extraVolumes: []
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.7.0
ingress:
  annotations: {}
  enabled: false
  hosts:
    - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: elasticsearch-master
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: false
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
priorityClassName: ""
protocol: http
rbac:
  create: false
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 1000m
    memory: 2Gi
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  labels: {}
  labelsHeadless: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  nodePort: ""
  transportPortName: transport
  type: ClusterIP
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local
HOOKS:
---
# elastic-hxpvk-test
apiVersion: v1
kind: Pod
metadata:
  name: "elastic-hxpvk-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  securityContext:
    fsGroup: 1000
    runAsUser: 1000
  containers:
  - name: "elastic-zzvww-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.7.0"
    command:
      - "sh"
      - "-c"
      - |
        #!/usr/bin/env bash -e
        curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never
MANIFEST:
---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elastic"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    heritage: "Tiller"
    release: "elastic"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  ports:
  - name: http
    protocol: TCP
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Tiller"
    release: "elastic"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elastic"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        heritage: "Tiller"
        release: "elastic"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      initContainers:
      - name: configure-sysctl
        securityContext:
          runAsUser: 0
          privileged: true
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.7.0"
        imagePullPolicy: "IfNotPresent"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          {}
      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.7.0"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          exec:
            command:
              - sh
              - -c
              - |
                #!/usr/bin/env bash -e
                # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
                # Once it has started only check that the node itself is responding
                START_FILE=/tmp/.es_start_file
                if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                  BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                else
                  BASIC_AUTH=''
                fi
                if [ -f "${START_FILE}" ]; then
                  echo 'Elasticsearch is already running, lets check the node is healthy'
                  HTTP_CODE=$(curl -XGET -s -k ${BASIC_AUTH} -o /dev/null -w '%{http_code}' http://127.0.0.1:9200/)
                  RC=$?
                  if [[ ${RC} -ne 0 ]]; then
                    echo "curl -XGET -s -k \${BASIC_AUTH} -o /dev/null -w '%{http_code}' http://127.0.0.1:9200/ failed with RC ${RC}"
                    exit ${RC}
                  fi
                  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
                  if [[ ${HTTP_CODE} == "200" ]]; then
                    exit 0
                  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
                    exit 0
                  else
                    echo "curl -XGET -s -k \${BASIC_AUTH} -o /dev/null -w '%{http_code}' http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
                    exit 1
                  fi
                else
                  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                  if curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200/_cluster/health?wait_for_status=green&timeout=1s ; then
                    touch ${START_FILE}
                    exit 0
                  else
                    echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                    exit 1
                  fi
                fi
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 1000m
            memory: 2Gi
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,"
        - name: discovery.seed_hosts
          value: "elasticsearch-master-headless"
        - name: cluster.name
          value: "elasticsearch"
        - name: network.host
          value: "0.0.0.0"
        - name: ES_JAVA_OPTS
          value: "-Xmx1g -Xms1g"
        - name: node.data
          value: "true"
        - name: node.ingest
          value: "true"
        - name: node.master
          value: "true"
        volumeMounts:
Steps to reproduce:
helm install --name kibana elastic/kibana -f kibana/values.yaml
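The kibana/values.yaml referenced above is not included in this report. For illustration only, a minimal values file for the elastic/kibana chart pointing at the Elasticsearch service from the release above might look like the sketch below (elasticsearchHosts, imageTag and resources are standard chart values; the numbers are placeholders, not the ones actually used):

# kibana/values.yaml -- hypothetical sketch, not the file actually used
elasticsearchHosts: "http://elasticsearch-master:9200"   # service created by the Elasticsearch release above
imageTag: "7.7.0"
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 1000m
    memory: 2Gi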
Provide logs and/or server output (if relevant):
vozzy@node1 elastic-helm-charts λ kubectl logs --since=3h kibana-kibana-6f8557c998-mzl6j
{"type":"log","@timestamp":"2020-06-05T14:00:01Z","tags":["warning","plugins-discovery"],"pid":6,"message":"Expect plugin \"id\" in camelCase, but found: apm_oss"}
{"type":"log","@timestamp":"2020-06-05T14:00:01Z","tags":["warning","plugins-discovery"],"pid":6,"message":"Expect plugin \"id\" in camelCase, but found: file_upload"}
{"type":"log","@timestamp":"2020-06-05T14:00:01Z","tags":["warning","plugins-discovery"],"pid":6,"message":"Expect plugin \"id\" in camelCase, but found: triggers_actions_ui"}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["info","plugins-system"],"pid":6,"message":"Setting up [76] plugins: [taskManager,siem,licensing,eventLog,encryptedSavedObjects,code,visTypeVega,usageCollection,metrics,ossTelemetry,lens,telemetryCollectionManager,telemetry,telemetryCollectionXpack,timelion,features,kibanaLegacy,devTools,apm_oss,translations,rollup,observability,uiActions,statusPage,share,newsfeed,savedObjects,kibanaUtils,kibanaReact,inspector,maps,embeddable,drilldowns,advancedUiActions,esUiShared,discover,bfetch,expressions,visualizations,data,home,cloud,console,consoleExtensions,searchprofiler,painlessLab,canvas,management,upgradeAssistant,security,snapshotRestore,transform,licenseManagement,indexManagement,remoteClusters,reporting,advancedSettings,spaces,actions,case,alerting,apm,alertingBuiltins,uptime,ml,telemetryManagementSection,file_upload,dataEnhanced,navigation,graph,dashboard,charts,watcher,triggers_actions_ui,infra,monitoring]"}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["warning","plugins","encryptedSavedObjects","config"],"pid":6,"message":"Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["warning","plugins","security","config"],"pid":6,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["warning","plugins","security","config"],"pid":6,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["warning","plugins","actions","actions"],"pid":6,"message":"APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml."}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["warning","plugins","alerting","plugins","alerting"],"pid":6,"message":"APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml."}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["info","plugins","monitoring","monitoring"],"pid":6,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2020-06-05T14:00:49Z","tags":["warning","plugins","monitoring","monitoring"],"pid":6,"message":"X-Pack Monitoring Cluster Alerts will not be available: undefined"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["fatal","root"],"pid":6,"message":"Error: Setup lifecycle of \"monitoring\" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.\n at Timeout.setTimeout (/usr/share/kibana/src/core/utils/promise.js:31:90)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins-system"],"pid":6,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","infra"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","watcher"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","graph"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","dataEnhanced"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","file_upload"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","ml"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","uptime"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","alertingBuiltins"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","apm"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","alerting"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","case"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","actions"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","spaces"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","remoteClusters"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","indexManagement"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","licenseManagement"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","transform"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","snapshotRestore"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","security"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","upgradeAssistant"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","canvas"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","painlessLab"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","searchprofiler"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","consoleExtensions"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","console"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","cloud"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","home"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","data"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","visualizations"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","expressions"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","bfetch"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","share"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","rollup"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","translations"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","apm_oss"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","kibanaLegacy"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","features"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","timelion"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","telemetryCollectionXpack"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","telemetry"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","telemetryCollectionManager"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","lens"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","ossTelemetry"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","metrics"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","usageCollection"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","visTypeVega"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","code"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","encryptedSavedObjects"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","eventLog"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","licensing"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","siem"],"pid":6,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2020-06-05T14:01:19Z","tags":["info","plugins","taskManager"],"pid":6,"message":"Stopping plugin"}
FATAL Error: Setup lifecycle of "monitoring" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.
vozzy@node1 elastic-helm-charts λ kubectl get pod elasticsearch-master-0
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 37m
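Since the Elasticsearch pod itself reports healthy, one way to narrow this down is to confirm that the elasticsearch-master service is reachable from another pod in the namespace, which is roughly what Kibana's monitoring plugin is waiting on during setup. A throwaway pod is enough for this; curlimages/curl here is just an example image:

# hit the cluster health endpoint through the service DNS name Kibana uses
kubectl run es-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s 'http://elasticsearch-master:9200/_cluster/health?pretty'

If this hangs or fails, the problem is more likely cluster networking or DNS than Kibana itself.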
I did not find a way to set the plugin setup timeout via the Kibana config file.
Any ideas? This repo worked perfectly in Minikube for me, so I decided to deploy my own k8s cluster as a next step.
Thanks
Top GitHub Comments
And can also confirm that this is still a thing using v7.8.0
Disabling the monitoring plugin could help. Add
monitoring.enabled: false
to kibana.yml.
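Since Kibana is deployed here with the elastic/kibana Helm chart, the same setting can be supplied through the chart's kibanaConfig value, which mounts the given kibana.yml content into the container; the key name is taken from the comment above. A sketch of the relevant part of the Kibana values file:

# kibana/values.yaml -- pass extra kibana.yml settings via the chart
kibanaConfig:
  kibana.yml: |
    monitoring.enabled: false

After changing the values file, run helm upgrade on the Kibana release so the ConfigMap and pod are recreated with the new setting.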