Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

kubelet service not compatible with cgroups v2

See original GitHub issue

I have a snap-based microk8s installation on Fedora. After upgrading to Fedora 31 Beta, kubelet service failed to start:

Sep 17 23:34:14 k8s.sakura-it.pl microk8s.daemon-kubelet[5736]: I0917 23:34:14.837307    5736 server.go:425] Version: v1.15.3
Sep 17 23:34:14 k8s.sakura-it.pl microk8s.daemon-kubelet[5736]: I0917 23:34:14.837804    5736 plugins.go:103] No cloud provider specified.
Sep 17 23:34:14 k8s.sakura-it.pl systemd[1]: run-rac03045094fb44c29a3af5719f0b1a94.scope: Succeeded.
Sep 17 23:34:14 k8s.sakura-it.pl microk8s.daemon-kubelet[5736]: F0917 23:34:14.857370    5736 server.go:273] failed to run Kubelet: mountpoint for cpu not found

This seems to be related to a change introduced in Fedora that made cgroups v2 the default: https://fedoraproject.org/wiki/Changes/CGroupsV2

Setting the systemd.unified_cgroup_hierarchy=0 kernel parameter (essentially, going back to cgroups v1) works around the problem, and kubelet now starts again.
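
For reference, a minimal sketch of that workaround on Fedora, assuming a GRUB2-based boot setup with grubby available (a reboot is required afterwards):

     # Check which cgroup hierarchy is mounted: "cgroup2fs" means cgroups v2
     stat -fc %T /sys/fs/cgroup/

     # Append the kernel parameter to all boot entries, then reboot into cgroups v1
     sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
     sudo reboot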

I’d include the output of microk8s.inspect, were it not for the fact that the inspect script fails due to #631.

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments:9 (7 by maintainers)

Top GitHub Comments

3 reactions
Richard87 commented, Oct 29, 2020

Editing /var/snap/microk8s/current/args/containerd-template.toml and updating runtime_type to io.containerd.runc.v2 works!

     [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runc.v2"
        runtime_engine = ""
        runtime_root = ""
0 reactions
jglick commented, Dec 3, 2021

https://github.com/ubuntu/microk8s/issues/651#issuecomment-718799924 works for me as well to solve

  Warning  FailedCreatePodSandBox  5m29s (x26 over 11m)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: cgroups: cgroup mountpoint does not exist: unknown

after starting Microk8s for the first time in a while after some OS/kernel upgrades. Reference: https://github.com/ubuntu/microk8s/blob/c22142ae3c2bd3702b891e68d53704eb153c4efd/microk8s-resources/wrappers/run-containerd-with-args#L43-L45
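
If the sandbox error persists after the change, the containerd and kubelet logs are the first place to look. A sketch; the unit names assume the standard snap.<snap>.<app> systemd naming that snap services use (consistent with the microk8s.daemon-kubelet identifier in the log above):

     # Tail the containerd and kubelet daemon logs
     journalctl -u snap.microk8s.daemon-containerd -f
     journalctl -u snap.microk8s.daemon-kubelet -f

     # Re-check the pod events once the services are back up
     microk8s kubectl describe pod <pod-name>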

Read more comments on GitHub

Top Results From Across the Web

kubelet service not compatible with cgroups v2 #651 - GitHub
I have a snap-based microk8s installation on Fedora. After upgrading to Fedora 31 Beta, kubelet service failed to start: Sep 17 23:34:14 ...

Fix kubelet on FCOS 34 (I2f965142) - Gerrit Code Review
Fedora CoreOS 34 has switched from cgroups v1 to cgroups v2 by default, which changes the sysfs hierarchy...

Five Things to Prepare for Cgroup v2 with Kubernetes
Deploy cAdvisor compatible with cgroup v2. ... Decide whether to adopt cgroup v2 or not; Three things to prepare for infrastructure.

About cgroup v2 - Kubernetes
On Linux, control groups constrain resources that are allocated to processes. The kubelet and the underlying container runtime need to ...

Kubelet service not starting in mmumshad's hardway
May 22 09:15:17 worker-1 kubelet[36777]: I0522 09:15:17.345149 36777 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not ...
