Microk8s constantly spawning new processes when idle
I have a VPS server running MicroK8s (previously 1.19/stable, now upgraded to 1.20/stable while troubleshooting). The server's load average is constantly high (5-10), with kube-apiserver appearing as the top CPU consumer.
Digging a bit more with execsnoop-bpfcc, I found lots of new processes constantly being spawned by microk8s, up to 50 new processes per second. The server is basically idle. That can't be right, can it?
Quick and dirty per-second stats, produced by running `grep microk8s | cut -d' ' -f1 | uniq -c` on the output of execsnoop-bpfcc:
14 20:57:02
3 20:57:03
48 20:57:04
7 20:57:05
9 20:57:06
3 20:57:07
13 20:57:09
1 20:57:10
1 20:57:11
13 20:57:12
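For anyone who wants to reproduce the counting above, here is a minimal sketch of the full pipeline. The file name `trace.log` and the sample lines are stand-ins for a live capture (`execsnoop-bpfcc` itself needs root and the bcc tools installed, and its timestamp flag may differ between bcc versions):

```shell
#!/bin/sh
# Stand-in sample of execsnoop-bpfcc output; on a live system this
# would come from something like:  sudo execsnoop-bpfcc > trace.log
cat > trace.log <<'EOF'
20:57:02 runc 151201 27204 0 /snap/microk8s/1910/bin/runc ...
20:57:02 runc 151202 27204 0 /snap/microk8s/1910/bin/runc ...
20:57:04 runc 151295 27204 0 /snap/microk8s/1910/bin/runc ...
EOF

# Count microk8s-related execs per second: keep matching lines,
# take the timestamp column, and collapse consecutive duplicates.
# (uniq -c only merges adjacent lines, which works here because
# the trace is already in timestamp order.)
grep microk8s trace.log | cut -d' ' -f1 | uniq -c
```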
Judging by their PPIDs, some processes filtered out by this grep appear to be spawned by microk8s as well.
One process I see a lot is the following:
20:57:07 runc 151295 27204 0 /snap/microk8s/1910/bin/runc --root /run/containerd/runc/k8s.io --log /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/$id --log-format json exec --process /var/snap/microk8s/common/run/runc-process430775463 --detach --pid-file /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/$id $id
This process is spawned at least once per second, always with a new, unique id (a 64-character hex string).
The containerd logs contain the same 4-6 lines repeated over and over, again with changing ids:
Feb 12 21:33:14 $host microk8s.daemon-containerd[16547]: time="2021-02-12T21:33:14.423305512+01:00" level=info msg="Exec process \"$id1\" exits with exit code 0 and error <nil>"
Feb 12 21:33:14 $host microk8s.daemon-containerd[16547]: time="2021-02-12T21:33:14.423353573+01:00" level=info msg="Exec process \"$id2\" exits with exit code 0 and error <nil>"
Feb 12 21:33:14 $host microk8s.daemon-containerd[16547]: time="2021-02-12T21:33:14.423624404+01:00" level=info msg="Finish piping \"stderr\" of container exec \"$id2\""
Feb 12 21:33:14 $host microk8s.daemon-containerd[16547]: time="2021-02-12T21:33:14.423487501+01:00" level=info msg="Finish piping \"stdout\" of container exec \"$id2\""
Feb 12 21:33:14 $host microk8s.daemon-containerd[16547]: time="2021-02-12T21:33:14.433792186+01:00" level=info msg="ExecSync for \"$id3\" returns with exit code 0"
Feb 12 21:33:14 $host microk8s.daemon-containerd[16547]: time="2021-02-12T21:33:14.435643166+01:00" level=info msg="ExecSync for \"$id3\" returns with exit code 0"
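Repeating `ExecSync` lines like these are typically what kubelet exec probes look like from containerd's side: a liveness or readiness probe that runs a command inside a container every `periodSeconds` produces a fresh `runc ... exec` with a new id on each tick (that interpretation is my inference, not stated in the issue). As a quick check, the journal can be aggregated the same way as the execsnoop output. The sketch below uses a hypothetical `containerd.log` sample in place of the live journal; on a real MicroK8s install the input would usually come from `journalctl -u snap.microk8s.daemon-containerd`:

```shell
#!/bin/sh
# Stand-in sample of containerd journal lines; on a live system:
#   journalctl -u snap.microk8s.daemon-containerd --since "1 min ago"
cat > containerd.log <<'EOF'
Feb 12 21:33:14 host microk8s.daemon-containerd[16547]: time="..." level=info msg="ExecSync for \"aaa\" returns with exit code 0"
Feb 12 21:33:14 host microk8s.daemon-containerd[16547]: time="..." level=info msg="ExecSync for \"bbb\" returns with exit code 0"
Feb 12 21:33:15 host microk8s.daemon-containerd[16547]: time="..." level=info msg="ExecSync for \"ccc\" returns with exit code 0"
EOF

# ExecSync calls per second: filter, take the HH:MM:SS column
# (field 3 of the journal's "Feb 12 21:33:14" prefix), then count.
grep ExecSync containerd.log | awk '{print $3}' | sort | uniq -c
```

If the per-second rate tracks the number of running pods that define exec probes, the load is probe-driven rather than a runaway process; switching such probes to `httpGet` or raising `periodSeconds` would then be the usual way to reduce it.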
Inspection report attached: inspection-report-20210212_211908.tar.gz
Issue Analytics
- Created 3 years ago
- Comments: 13
Top GitHub Comments
i have a loadavg of 1 on a 4-vCPU system with a completely idle microk8s instance, and i find this not really acceptable for something called "micro…".
execsnoop-bpfcc at least is telling me that this is not something which was built with efficiency in mind.
is this a bug, or is this the same old "so what? we have enough ram/cpu today!" developer story?
@balchua the thing is: I want to use (micro)k8s to run my software, not to have it constantly max out one or more CPUs. It seems a bit excessive, but maybe my expectation is wrong and this is normal behavior? Perhaps my setup is botched? I can't really tell.
The docs state:
Hogging CPU does not feel lightweight to me.