Crash causing broken node: FAIL: Service snap.microk8s.daemon-apiserver is not running
Hi,
we have a 3-node deployment of MicroK8s and have now experienced multiple times that one of our nodes dies and consequently does not start up anymore, reporting FAIL: Service snap.microk8s.daemon-apiserver is not running.
The only fix so far has been to leave (remove the node) and rejoin it.
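For reference, the leave-and-rejoin workaround uses the standard MicroK8s cluster commands; the node name and connection string below are placeholders, not values from this cluster:

```shell
# On the broken node: leave the cluster and reset its local cluster state
microk8s leave

# On a healthy node: remove the departed node's stale entry
microk8s remove-node k8s-prod-m

# On a healthy node: print a join command with a fresh token
microk8s add-node

# On the broken node: rejoin using the connection string printed by add-node
microk8s join <healthy-node-ip>:25000/<token>
```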
Before the crash, the following appeared in our syslog:
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-apiserver[10675]: kube-apiserver: src/db.c:40: db__open_follower: Assertion `db->follower == NULL' failed.
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786811 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodSecurityPolicy ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786810 14899 reflector.go:424] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: very short watch: k8s.io/client-go/metadata/metadatainformer/informer.go:90: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786820 14899 reflector.go:424] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: very short watch: k8s.io/client-go/metadata/metadatainformer/informer.go:90: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786853 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786873 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786893 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.ControllerRevision ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786933 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.VolumeAttachment ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.786971 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.Role ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787035 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.LimitRange ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787070 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787081 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.EndpointSlice ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787106 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.CertificateSigningRequest ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787113 14899 reflector.go:424] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: very short watch: k8s.io/client-go/metadata/metadatainformer/informer.go:90: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787177 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.Ingress ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-controller-manager[14899]: W1002 13:16:53.787246 14899 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.Lease ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m systemd[1]: snap.microk8s.daemon-apiserver.service: Main process exited, code=killed, status=6/ABRT
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787417 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787449 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787417 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787478 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787482 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787497 19939 reflector.go:424] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787504 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicaSet ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m systemd[1]: snap.microk8s.daemon-apiserver.service: Failed with result 'signal'.
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787418 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787418 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787417 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787437 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:53 k8s-prod-m microk8s.daemon-scheduler[19939]: W1002 13:16:53.787482 19939 reflector.go:424] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
Oct 2 13:16:54 k8s-prod-m systemd[1]: snap.microk8s.daemon-apiserver.service: Service RestartSec=100ms expired, scheduling restart.
Oct 2 13:16:54 k8s-prod-m systemd[1]: snap.microk8s.daemon-apiserver.service: Scheduled restart job, restart counter is at 10.
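When a node ends up in this state, the usual first diagnostic steps (standard MicroK8s and systemd tooling, not specific to this issue) are:

```shell
# Show the overall status of MicroK8s and its services on this node
microk8s status

# Collect a diagnostics tarball (service states, logs, configuration)
microk8s inspect

# Follow the apiserver unit's journal to catch the assertion/abort shown above
journalctl -u snap.microk8s.daemon-apiserver -f
```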
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 3
- Comments: 9 (1 by maintainers)
Top GitHub Comments
With the latest fixes introduced through #1598 it's much more stable. I'll close this issue as the other one remains open.
Hi @Aaron-Ritter
I think you are affected by the issue reported in https://github.com/ubuntu/microk8s/issues/1598
We have a few fixes we are evaluating right now in the latest/edge channel, which you could follow to pick them up early. We also have a node recovery page [1] you may want to follow instead of a leave & rejoin.
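The exact command was not quoted in the comment; assuming snap's standard channel syntax, following latest/edge would look like:

```shell
# Switch the microk8s snap to the latest/edge channel to receive the candidate fixes
sudo snap refresh microk8s --channel=latest/edge
```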
Apologies for the inconvenience. As soon as we release a patch, your nodes should get the fix automatically.
[1] https://discuss.kubernetes.io/t/recovery-of-ha-microk8s-clusters/12931