Kubelet and Kubelite Conflicting During MicroK8s Install in LXC
Hello.
I'm looking for help with getting MicroK8s working inside LXC containers. `kubelet` seems to be restarting repeatedly in conflict with `kubelite`.
Setup
Host OS: Ubuntu 20.04.2
`lxc --version`: 4.14, installed via snap
Versions inside LXC container:
lxc image: ubuntu 20.04 LTS amd64 (release) (20210510), fingerprint: 52c9bf12cbd3
root@purek8:~# snap list
Name Version Rev Tracking Publisher Notes
core18 20210507 2066 latest/stable canonical✓ base
docker 19.03.13 796 latest/stable canonical✓ -
lxd 4.0.6 20326 4.0/stable/… canonical✓ -
microk8s v1.21.1 2230 1.21/edge canonical✓ classic
snapd 2.50 11841 latest/stable canonical✓ snapd
root@purek8:~# snap --version
snap 2.50
snapd 2.50
series 16
ubuntu 20.04
kernel 5.8.0-53-generic
Note that the same issue happened with MicroK8s versions 1.20/edge, 1.20/stable, and 1.21/stable.
Steps Taken
I followed the instructions here https://microk8s.io/docs/lxd.
My storage filesystem is btrfs, and the setup instructions provide lxc profiles only for ext4 and zfs; I was unable to find a profile for btrfs, so I just went with the ext4 profile. This might be relevant, because I was able to install MicroK8s on the Ubuntu host OS and it worked fine.
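For reference, the steps I followed from that page boil down to roughly the following (a sketch; the profile URL and channel are taken from the docs page as of this writing and may change, and my container is named purek8):

```shell
# Rough sketch of the LXD setup from https://microk8s.io/docs/lxd.
# Create a profile and load the ext4 variant published with MicroK8s:
lxc profile create microk8s
wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile -O microk8s.profile
lxc profile edit microk8s < microk8s.profile

# Launch the container with both the default and microk8s profiles applied:
lxc launch -p default -p microk8s ubuntu:20.04 purek8

# Install MicroK8s inside the container:
lxc exec purek8 -- snap install microk8s --classic --channel=1.21/edge
```

These commands require a live LXD daemon, so they are shown here only to document what was attempted.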
My default config is:
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/purek8
Also, my container is called `purek8`, not `microk8s` as in the tutorial.
Issue
MicroK8s components are failing (notably, `kubelet` is terminating repeatedly), and thus many MicroK8s features fail to run.
root@purek8:~# microk8s.status
microk8s is not running. Use microk8s inspect for a deeper inspection.
`inspect` does not immediately reveal any obvious issues.
root@purek8:~# microk8s.inspect
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-kubelite is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting juju
Inspect Juju
Inspecting kubeflow
Inspect Kubeflow
Building the report tarball
Report tarball is at /var/snap/microk8s/2230/inspection-report-20210531_142600.tar.gz
Full file is inspection-report-20210531_142600.tar.gz
However, describing this node (the container I am running in), we see `kubelet` starting over and over again.
root@purek8:~# microk8s.kubectl describe node purek8
Name: purek8
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=purek8
kubernetes.io/os=linux
microk8s.io/cluster=true
Annotations: volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 30 May 2021 20:37:10 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: purek8
AcquireTime: <unset>
RenewTime: Mon, 31 May 2021 14:24:50 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 30 May 2021 20:37:10 +0000 Sun, 30 May 2021 20:37:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 30 May 2021 20:37:10 +0000 Sun, 30 May 2021 20:37:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 30 May 2021 20:37:10 +0000 Sun, 30 May 2021 20:37:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sun, 30 May 2021 20:37:10 +0000 Sun, 30 May 2021 20:37:10 +0000 KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized, missing node capacity for resources: ephemeral-storage]
... #skipping a bunch of detail
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 18h kubelet Starting kubelet.
Normal Starting 18h kubelet Starting kubelet.
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal NodeHasSufficientMemory 18h kubelet Node purek8 status is now: NodeHasSufficientMemory
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal Starting 18h kubelet Starting kubelet.
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal Starting 18h kubelet Starting kubelet.
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
Normal Starting 18h kubelet Starting kubelet.
Warning InvalidDiskCapacity 18h kubelet invalid capacity 0 on image filesystem
... More kubelet restarts at frequency of 1 restart for every 5 seconds.
Looking at `kubelet`'s logs, we see kubelet restarting.
root@purek8:~# journalctl -u snap.microk8s.daemon-kubelet.service
snap.microk8s.daemon-kubelet.service snap.microk8s.daemon-kubelite.service
root@purek8:~# journalctl -u snap.microk8s.daemon-kubelet.service
-- Logs begin at Sun 2021-05-30 17:08:18 UTC, end at Sun 2021-05-30 20:13:39 UTC. --
May 30 17:09:19 purek8 systemd[1]: Started Service for snap application microk8s.daemon-kubelet.
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + export PATH=/snap/microk8s/2210/usr/sbin:/snap/microk8s/2210/usr/bin:/snap/microk8s/2210/sbin:/snap/microk8s/2210/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/>
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + PATH=/snap/microk8s/2210/usr/sbin:/snap/microk8s/2210/usr/bin:/snap/microk8s/2210/sbin:/snap/microk8s/2210/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/gam>
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2349]: ++ /snap/microk8s/2210/bin/uname -m
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + ARCH=x86_64
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + export LD_LIBRARY_PATH=:/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-gnu:/snap/microk8s/2210/usr/lib/x86_64-linux-gnu
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + LD_LIBRARY_PATH=:/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-gnu:/snap/microk8s/2210/usr/lib/x86_64-linux-gnu
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + export LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-g>
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-gnu:/sna>
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + '[' -e /var/snap/microk8s/2210/var/lock/lite.lock ']'
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + echo 'Will not run along with kubelite'
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: Will not run along with kubelite
May 30 17:09:19 purek8 microk8s.daemon-kubelet[2214]: + exit 0
May 30 17:09:19 purek8 systemd[1]: snap.microk8s.daemon-kubelet.service: Succeeded.
May 30 19:07:19 purek8 systemd[1]: Started Service for snap application microk8s.daemon-kubelet.
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + export PATH=/snap/microk8s/2210/usr/sbin:/snap/microk8s/2210/usr/bin:/snap/microk8s/2210/sbin:/snap/microk8s/2210/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin>
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + PATH=/snap/microk8s/2210/usr/sbin:/snap/microk8s/2210/usr/bin:/snap/microk8s/2210/sbin:/snap/microk8s/2210/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/g>
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156398]: ++ /snap/microk8s/2210/bin/uname -m
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + ARCH=x86_64
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + export LD_LIBRARY_PATH=:/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-gnu:/snap/microk8s/2210/usr/lib/x86_64-linux-gnu
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + LD_LIBRARY_PATH=:/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-gnu:/snap/microk8s/2210/usr/lib/x86_64-linux-gnu
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + export LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux>
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/2210/lib:/snap/microk8s/2210/usr/lib:/snap/microk8s/2210/lib/x86_64-linux-gnu:/s>
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + '[' -e /var/snap/microk8s/2210/var/lock/lite.lock ']'
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + echo 'Will not run along with kubelite'
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: Will not run along with kubelite
May 30 19:07:19 purek8 microk8s.daemon-kubelet[156271]: + exit 0
... More identical entries.
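The pattern in those journal entries can be reduced to a small sketch: the kubelet wrapper checks for kubelite's lock file and exits cleanly if it exists, so only one of the two services ever runs. The path below mirrors the journal output; revision 2210 is specific to this install.

```shell
#!/bin/sh
# Minimal sketch of the gate visible in the journal above: the standalone
# kubelet wrapper exits 0 whenever kubelite's lock file exists. The default
# path mirrors this install (snap revision 2210); pass another path to test.
kubelet_gate() {
    lock="${1:-/var/snap/microk8s/2210/var/lock/lite.lock}"
    if [ -e "$lock" ]; then
        echo 'Will not run along with kubelite'
    else
        echo 'lite.lock absent: standalone kubelet would start'
    fi
}

kubelet_gate "$@"
```

This explains why `snap.microk8s.daemon-kubelet` exits with "Succeeded": from systemd's point of view the service ran and returned 0, and kubelite is expected to host kubelet instead.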
The reason `kubelet` is not sticking around seems to be "Will not run along with kubelite." At this point, I am a bit lost as to what is required to mitigate this issue. Any help would be appreciated.
This issue with kubelite sounds very similar to #2014, but it looks like that issue was mitigated.
Top GitHub Comments
I use btrfs too, and MicroK8s in LXD was failing. I managed to get a step further by taking the zfs profile and changing `/dev/zfs` to `/dev/btrfs-control`. When I added the btrfs-control dev, MicroK8s did start, whereas before it was in a restart loop. The nvme device was added because I saw many errors like:
Nov 10 09:59:02 kube-ctrl-01 microk8s.daemon-kubelite[226]: W1110 09:59:02.782093 226 fs.go:595] Unable to get btrfs mountpoint IDs: /dev/nvme0n1p4 is not a block device
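In LXD profile terms, the device additions described above would look something like this (a sketch; the device names are mine, and `/dev/nvme0n1p4` is just the partition from the error message, which will differ per machine):

```yaml
devices:
  btrfs-control:
    path: /dev/btrfs-control
    type: unix-char
  nvme:
    # Block device backing the btrfs pool on this machine (see error above).
    path: /dev/nvme0n1p4
    type: unix-block
```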
My install now reports status:
Here is something you could try, although I am not sure if it will work. When doing a `lxd init` you are asked what the storage pool should be backed by. Maybe choose dir there?
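Non-interactively, that suggestion would be something like the following (a sketch; it needs a fresh LXD, since `lxd init` will not replace an existing storage pool):

```shell
# Sketch: initialize LXD with a dir-backed default storage pool instead of
# btrfs, sidestepping the filesystem-specific profile problem entirely.
lxd init --auto --storage-backend=dir
```

The dir backend is slower than btrfs or zfs but has the fewest host-filesystem interactions, which is why it may help here.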