microk8s status shows not running but all services seem to be running properly
Hey folks,
I’m attaching the tar file from `microk8s inspect` after trying a few things. I’ve reinstalled (with and without `--purge`) and downgraded to 1.20/stable. Everything worked fine yesterday, but after running a few updates the cluster no longer seems to come up.
Also, `microk8s status --wait-ready` just hangs, and `microk8s status` gives me the following output:
microk8s is not running. Use microk8s inspect for a deeper inspection.
It would be awesome if you could help me out with this.
The inspect output does flag something wrong with cgroups, but I’m not sure how to configure that under my distro, or whether it’s an issue at all, since I’ve never altered those settings.
```
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-control-plane-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting juju
Inspect Juju
Inspecting kubeflow
Inspect Kubeflow
WARNING: The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
Report tarball is at /var/snap/microk8s/2264/inspection-report-20210624_202344.tar.gz
```
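Since the warning points at the memory cgroup, a quick way to confirm whether the controller is actually enabled on the host is to read `/proc/cgroups`. This is a sketch assuming a cgroup-v1 host; paths and boot flags vary by distro:

```shell
# List the memory controller row from /proc/cgroups.
# On cgroup v1 hosts the last column is the "enabled" flag (0 = disabled).
grep memory /proc/cgroups

# On many ARM boards (e.g. Raspberry Pi OS) the controller is enabled by
# appending the following to the kernel command line and rebooting:
#   cgroup_enable=memory cgroup_memory=1
# (/boot/cmdline.txt on a Pi; the path is an assumption for other distros --
# see the MicroK8s install-alternatives docs linked in the warning above.)
```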
Issue Analytics
- State:
- Created: 2 years ago
- Comments: 5
Top GitHub Comments
@GuiMarthe I did end up doing that with the following (in case I need to find this again in the future!):
The frustrating part is I then had to do this on all the nodes and then re-create them all. But it is up and working again now.
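The commands themselves did not survive in this copy of the comment. As a hedged sketch only (not the commenter’s actual commands), enabling the memory cgroup on a Raspberry-Pi-style node and rejoining it to the cluster might look like this; the `/boot/cmdline.txt` path, the kernel flags, and the node name are all assumptions:

```shell
# Hypothetical sketch -- assumes Raspberry Pi OS, where the kernel command
# line lives in /boot/cmdline.txt as a single line. Run on each node:
sudo sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
sudo reboot

# After the reboot, remove the node from the control plane and rejoin it.
# On the control-plane node (node name is a placeholder):
microk8s remove-node <node-name>
microk8s add-node   # prints a join command/token
# Then on the worker, run the printed command:
# microk8s join <control-plane-ip>:25000/<token>
```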
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.