
microk8s doesn't seem to work on raspberry pi 3 or 4 devices

See original GitHub issue

Hey there,

I’ve spent the day trying to get microk8s running on a variety of Raspberry Pis following the tutorial here, with a variety of different and interesting failures. I apologise if some of these are duplicates; I’m in no way familiar with k8s and am flailing wildly.

Running Raspbian aarch64 from here. Tested with the microk8s stable and edge channels, both showing the same symptoms / collection of issues. uname -a shows:

Linux pi-k8s-01 5.10.17-v8+ #1414 SMP PREEMPT Fri Apr 30 13:23:25 BST 2021 aarch64 GNU/Linux

inspection-report-20210721_030553.tar.gz

Firstly, installing microk8s appears to work, and after a reboot microk8s status shows things are okay. Commands vary between completing instantly and taking tens of seconds, which may be related to #2280, though moving journald to volatile storage has not made a notable difference (the slowness seems perhaps related to the container restarting later).

pi@pi-k8s-01:~ $ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
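(For reference, the journald change mentioned above is sketched below; Storage=volatile keeps the journal in RAM so log writes stop hitting the SD card. The drop-in file approach is one conventional way to apply it, not necessarily the exact steps I used.)

```shell
# Keep the systemd journal in RAM rather than on the SD card
# (journal contents are then lost on reboot).
sudo mkdir -p /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/volatile.conf > /dev/null <<'EOF'
[Journal]
Storage=volatile
EOF
sudo systemctl restart systemd-journald
```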

Checking what’s running under k8s, it looks like a lot of pods are not ready, maybe related to #2367:

pi@pi-k8s-01:~ $ microk8s kubectl get all --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-f7868dd95-bf2st   0/1     Pending   0          101m
kube-system   pod/calico-node-nzjzw                         1/1     Running   39         101m

NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP    101m
kube-system   service/metrics-server              ClusterIP   10.152.183.23    <none>        443/TCP    96m
kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.185   <none>        443/TCP    35m
kube-system   service/dashboard-metrics-scraper   ClusterIP   10.152.183.179   <none>        8000/TCP   35m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         0       1            0           kubernetes.io/os=linux   101m

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers     0/1     1            0           101m
kube-system   deployment.apps/metrics-server              0/1     0            0           96m
kube-system   deployment.apps/kubernetes-dashboard        0/1     0            0           35m
kube-system   deployment.apps/dashboard-metrics-scraper   0/1     0            0           35m

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-f7868dd95   1         1         0       101m

Running commands has a 50/50 chance of failure, presumably related to #1916 (and maybe #2280), though this node is not joined to a cluster, nor does anything appear to be getting OOM-killed. Failures usually return one of a few errors:

The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
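Given the 50/50 failure rate, a crude retry wrapper makes the flakiness at least workable (pure shell; the example command in the comment is just an illustration, and this obviously only masks the underlying problem):

```shell
# Retry a command up to 5 times with a short pause between attempts.
# Only safe for read-only commands like the kubectl queries above.
retry() {
  local n=0 max=5
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep 1
  done
}

# e.g.: retry microk8s kubectl get pods -A
```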

Attempting to install the dashboard with microk8s enable dashboard sometimes seems to work, and in other cases appears to kill snap.microk8s.daemon-kubelite, seemingly requiring a restart to recover. Even when it does claim to succeed, the containers never seem to run (as you can see above), nor does microk8s status report that the dashboard is enabled.
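To catch kubelite dying in the act, watching its service while triggering the enable seems like the obvious next step (service names as used by the microk8s snap on this install):

```shell
# Terminal 1: follow kubelite's logs live.
sudo journalctl -f -u snap.microk8s.daemon-kubelite

# Terminal 2: trigger the failure, then check whether the service survived.
microk8s enable dashboard
sudo systemctl status snap.microk8s.daemon-kubelite --no-pager
```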

Attempting to forward a port to the dashboard service (via microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443) first results in:

error: watch closed before UntilWithoutRetry timeout

which is probably to be expected if the container isn’t up. Then if retried:

The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?

which appears to also crash something that takes a while to recover.
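Since port-forward can’t work until the pod is up, blocking on readiness first might at least separate the two failures (kubectl wait is standard; the k8s-app=kubernetes-dashboard label is my assumption about how the addon labels its pods):

```shell
# Block until the dashboard pod reports Ready (or time out after 2 min),
# then attempt the port-forward.
microk8s kubectl wait -n kube-system \
  --for=condition=ready pod \
  -l k8s-app=kubernetes-dashboard --timeout=120s
microk8s kubectl port-forward -n kube-system \
  service/kubernetes-dashboard 10443:443
```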

A few errors do end up in the logs, but I haven’t had much luck resolving them. A sampling of unique entries via sudo journalctl -u snap.microk8s.daemon-* --all | grep error | tail -500:

Jul 21 03:23:33 pi-k8s-01 microk8s.daemon-containerd[29649]: time="2021-07-21T03:23:33.486142810+01:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.17-v8+\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 21 03:23:33 pi-k8s-01 microk8s.daemon-containerd[29649]: time="2021-07-21T03:23:33.486893901+01:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 21 03:23:33 pi-k8s-01 microk8s.daemon-containerd[29649]: time="2021-07-21T03:23:33.487056063+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 21 03:23:33 pi-k8s-01 microk8s.daemon-containerd[29649]: time="2021-07-21T03:23:33.488229827+01:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 21 03:24:25 pi-k8s-01 microk8s.daemon-containerd[29921]: time="2021-07-21T03:24:25.694509662+01:00" level=error msg="failed to delete" cmd="/snap/microk8s/2343/bin/containerd-shim-runc-v1 -namespace k8s.io -address /var/snap/microk8s/common/run/containerd.sock -publish-binary /snap/microk8s/2343/bin/containerd -id e9fed7097d0b2493e8483a08e43e42d19145bd881e064b181ec11d8a7831e7d3 -bundle /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9fed7097d0b2493e8483a08e43e42d19145bd881e064b181ec11d8a7831e7d3 delete" error="exit status 1"
...
Jul 21 03:24:15 pi-k8s-01 microk8s.daemon-kubelite[29977]: E0721 03:24:15.068578   29977 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jul 21 03:24:25 pi-k8s-01 microk8s.daemon-containerd[29921]: time="2021-07-21T03:24:25.695113389+01:00" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v1: remove /run/containerd/s/bf95c1279ce968440d9923564b9437c60297e8f859ed59ea05f45fe7462c2cc7: no such file or directory\n: exit status 1" id=e9fed7097d0b2493e8483a08e43e42d19145bd881e064b181ec11d8a7831e7d3 namespace=k8s.io
Jul 21 03:24:28 pi-k8s-01 microk8s.daemon-containerd[29921]: time="2021-07-21T03:24:28.685861940+01:00" level=error msg="failed to reload cni configuration after receiving fs change event(\"/var/snap/microk8s/2343/args/cni-network/10-calico.conflist\": REMOVE)" error="cni config load failed: no network config found in /var/snap/microk8s/2343/args/cni-network: cni plugin not initialized: failed to load cni config"
Jul 21 03:24:33 pi-k8s-01 microk8s.daemon-containerd[29921]: time="2021-07-21T03:24:33.723080957+01:00" level=error msg="collecting metrics for 053fcd20e91ca8cc84f8b3dbfe64cb59d09b77d13efbd16efe35c277156d3399" error="cgroups: cgroup deleted: unknown"
...
Jul 21 03:32:18 pi-k8s-01 microk8s.daemon-kubelite[7740]: E0721 03:32:18.774730    7740 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"https://pi-k8s-01:10250/containerLogs/kube-system/calico-node-nzjzw/upgrade-ipam", Err:(*net.OpError)(0x40013f3db0)}: Get "https://pi-k8s-01:10250/containerLogs/kube-system/calico-node-nzjzw/upgrade-ipam": dial tcp 127.0.0.1:10250: connect: connection refused

Most of these errors are repeated constantly. The “exit status 1” seems to suggest containerd is restarting, but there doesn’t seem to be any obvious indication of a fresh start in the logs.

I hope some part of this is useful. I’d hoped for this to be a quick morning setup, but I seem to have bitten off more than I can chew ^_^
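One avenue I haven’t ruled out yet (an assumption on my part): the MicroK8s Raspberry Pi docs mention that the kernel memory cgroup must be enabled via the boot cmdline, and it is off by default on some Pi images. Something like:

```shell
# Append cgroup options to the single line in /boot/cmdline.txt
# (Raspberry Pi OS; the path may be /boot/firmware/cmdline.txt on Ubuntu),
# then reboot so the kernel picks them up.
sudo sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
sudo reboot
```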

Issue Analytics

  • State: open
  • Created 2 years ago
  • Reactions: 1
  • Comments: 10 (2 by maintainers)

Top GitHub Comments

2 reactions
ktsakalozos commented, Sep 23, 2021

@copiltembel @teq0 if you do not want to have HA (multi-master), you can run microk8s disable ha-cluster on your nodes before joining them. In this setup only the first node will run the control plane.
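For example (a sketch of the sequence being described; the IP and token below are placeholders, and the exact join command is the one add-node prints):

```shell
# On every node, before joining, drop the HA datastore:
microk8s disable ha-cluster

# On the first (control plane) node, generate a join command:
microk8s add-node

# On each additional node, run the join command printed above, e.g.:
# microk8s join 192.168.1.10:25000/<token>
```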

0 reactions
bohatermateusz commented, Sep 21, 2022

So the next day I had to reinstall again, because it started showing “connection refused” again - so the problem still exists.

