FAIL: Service snap.microk8s.daemon-apiserver is not running
See original GitHub issue
@ktsakalozos @freeekanayaka Hi, I am new to MicroK8s and have been facing the same issue described above since this morning. I have attached my inspection tarball.
microk8s inspect
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
FAIL: Service snap.microk8s.daemon-apiserver is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-apiserver
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Building the report tarball
Report tarball is at /var/snap/microk8s/1710/inspection-report-20201006_105830.tar.gz
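Since the inspect output itself points at journalctl, a reasonable first step is to read the apiserver logs directly. A minimal sketch, assuming a systemd-based host (the unit names are taken from the inspect output above):

# Show the last entries from the failing apiserver unit
sudo journalctl -u snap.microk8s.daemon-apiserver -n 100 --no-pager

# Check the state of all MicroK8s units in one pass
sudo systemctl status 'snap.microk8s.daemon-*' --no-pager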
from master:
ubuntu@ip-xxxx:~/s2Shape_Stacks/engine-2.0$ microk8s kubectl get nodes
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
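The "connection refused" on 127.0.0.1:16443 means nothing is listening on the API server port on the master. A quick check, sketched on the assumption that ss is available and that 16443 is the default MicroK8s API server port:

# See whether any process is bound to the API server port
sudo ss -tlnp | grep 16443

# If nothing is listening, a restart of the MicroK8s services is worth trying
microk8s stop && microk8s start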
from node:
kubectl get node
NAME STATUS ROLES AGE VERSION
ip-x-x-x-x NotReady <none> 11d v1.19.2-34+1b3fa60b402c1c
ip-x-x-x-x Ready <none> 9d v1.19.2-34+1b3fa60b402c1c
ip-x-x-x-x Ready <none> 5d6h v1.19.2-34+1b3fa60b402c1c
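For the NotReady node, kubectl itself can often say why the kubelet stopped reporting. A short sketch using standard kubectl commands (the node name is a placeholder for one of the masked names above):

# Show conditions and recent events for the NotReady node
microk8s kubectl describe node ip-x-x-x-x

# List the pods scheduled on that node
microk8s kubectl get pods -A -o wide --field-selector spec.nodeName=ip-x-x-x-x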
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi @rasoolasik, you cannot refresh to the latest/edge channel because the API server in that node is crashing. You need to snap remove microk8s on the failing node. Then call microk8s remove-node <failing-node> --force from a working node so as to remove it from the k8s cluster. Then snap install microk8s --classic --channel=latest/edge and have the new node join the cluster again. The pods on the failing node should have been rescheduled on the Ready nodes. You can confirm this with microk8s.kubectl get all -A -o wide.
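Put together, the recovery described above looks roughly like the following sketch; the node name is a placeholder, and the join command shown is only the shape of what microk8s add-node prints:

# On the failing node: remove the broken installation
sudo snap remove microk8s

# On a working node: evict the dead member from the cluster
microk8s remove-node <failing-node> --force

# On the failing node: reinstall from the edge channel
sudo snap install microk8s --classic --channel=latest/edge

# On a working node: print a join command for the rejoining node
microk8s add-node
# -> run the printed "microk8s join <ip>:25000/<token>" on the new node

# Confirm the pods were rescheduled onto Ready nodes
microk8s kubectl get all -A -o wide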
@rasoolasik apologies for not replying. Going to 1.18/stable is a good choice while we address the 1.19 issues.
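For reference, moving a node to the 1.18 track would be a single snap refresh (assuming the node is healthy enough to refresh in place; otherwise the remove/reinstall route above applies):

# Switch the MicroK8s snap to the 1.18 stable channel
sudo snap refresh microk8s --classic --channel=1.18/stable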