
rpi4 cluster - microk8s is not running - cni plugin not initialized

See original GitHub issue

Hello, I'm new to MicroK8s. I followed this tutorial to install MicroK8s on a Raspberry Pi 4B cluster, but sudo microk8s status says it's not running. I'm attaching the inspection report.

inspection-report-20210222_044504.tar.gz

I googled a bit and found some other commands to check this issue. Please help, thank you!

Below is the output of sudo snap logs -f microk8s:

$ sudo snap logs -f microk8s
2021-02-22T05:21:15Z microk8s.daemon-controller-manager[2147]: E0222 05:21:15.182873    2147 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
2021-02-22T05:21:15Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:15.443111734Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
2021-02-22T05:21:15Z microk8s.daemon-kubelet[2171]: E0222 05:21:15.443688    2171 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
2021-02-22T05:21:19Z microk8s.daemon-apiserver[2125]: I0222 05:21:19.148979    2125 client.go:360] parsed scheme: "passthrough"
2021-02-22T05:21:19Z microk8s.daemon-apiserver[2125]: I0222 05:21:19.149151    2125 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///var/snap/microk8s/2049/var/kubernetes/backend//kine.sock  <nil> 0 <nil>}] <nil> <nil>}
2021-02-22T05:21:19Z microk8s.daemon-apiserver[2125]: I0222 05:21:19.149217    2125 clientconn.go:948] ClientConn switching balancer to "pick_first"
2021-02-22T05:21:20Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:20.445999586Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
2021-02-22T05:21:20Z microk8s.daemon-kubelet[2171]: E0222 05:21:20.446782    2171 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
2021-02-22T05:21:25Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:25.449087615Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
2021-02-22T05:21:25Z microk8s.daemon-kubelet[2171]: E0222 05:21:25.449824    2171 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
2021-02-22T05:21:30Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:30.452273819Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
2021-02-22T05:21:30Z microk8s.daemon-kubelet[2171]: E0222 05:21:30.453059    2171 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
2021-02-22T05:21:30Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:30.667597793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5ts8c,Uid:99fd0271-f271-4256-9d25-86cc1796796a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to pull and unpack image \"k8s.gcr.io/pause:3.1\": failed to resolve reference \"k8s.gcr.io/pause:3.1\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.1\": dial tcp 74.125.204.82:443: i/o timeout"
2021-02-22T05:21:30Z microk8s.daemon-kubelet[2171]: E0222 05:21:30.668315    2171 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to pull and unpack image "k8s.gcr.io/pause:3.1": failed to resolve reference "k8s.gcr.io/pause:3.1": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.1": dial tcp 74.125.204.82:443: i/o timeout
2021-02-22T05:21:30Z microk8s.daemon-kubelet[2171]: E0222 05:21:30.668524    2171 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "calico-node-5ts8c_kube-system(99fd0271-f271-4256-9d25-86cc1796796a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to pull and unpack image "k8s.gcr.io/pause:3.1": failed to resolve reference "k8s.gcr.io/pause:3.1": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.1": dial tcp 74.125.204.82:443: i/o timeout
2021-02-22T05:21:30Z microk8s.daemon-kubelet[2171]: E0222 05:21:30.668602    2171 kuberuntime_manager.go:755] createPodSandbox for pod "calico-node-5ts8c_kube-system(99fd0271-f271-4256-9d25-86cc1796796a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to pull and unpack image "k8s.gcr.io/pause:3.1": failed to resolve reference "k8s.gcr.io/pause:3.1": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.1": dial tcp 74.125.204.82:443: i/o timeout
2021-02-22T05:21:30Z microk8s.daemon-kubelet[2171]: E0222 05:21:30.668832    2171 pod_workers.go:191] Error syncing pod 99fd0271-f271-4256-9d25-86cc1796796a ("calico-node-5ts8c_kube-system(99fd0271-f271-4256-9d25-86cc1796796a)"), skipping: failed to "CreatePodSandbox" for "calico-node-5ts8c_kube-system(99fd0271-f271-4256-9d25-86cc1796796a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-node-5ts8c_kube-system(99fd0271-f271-4256-9d25-86cc1796796a)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to pull and unpack image \"k8s.gcr.io/pause:3.1\": failed to resolve reference \"k8s.gcr.io/pause:3.1\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.1\": dial tcp 74.125.204.82:443: i/o timeout"
2021-02-22T05:21:34Z microk8s.daemon-controller-manager[2147]: I0222 05:21:34.871038    2147 request.go:655] Throttling request took 1.044368952s, request: GET:https://127.0.0.1:16443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
2021-02-22T05:21:35Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:35.454994854Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
2021-02-22T05:21:35Z microk8s.daemon-kubelet[2171]: E0222 05:21:35.455840    2171 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
2021-02-22T05:21:35Z microk8s.daemon-controller-manager[2147]: W0222 05:21:35.776885    2147 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
2021-02-22T05:21:40Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:40.457318603Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
2021-02-22T05:21:40Z microk8s.daemon-kubelet[2171]: E0222 05:21:40.459031    2171 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
2021-02-22T05:21:40Z microk8s.daemon-apiserver[2125]: W0222 05:21:40.809375    2125 handler_proxy.go:102] no RequestInfo found in the context
2021-02-22T05:21:40Z microk8s.daemon-apiserver[2125]: E0222 05:21:40.809677    2125 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
2021-02-22T05:21:40Z microk8s.daemon-apiserver[2125]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
2021-02-22T05:21:40Z microk8s.daemon-apiserver[2125]: I0222 05:21:40.809764    2125 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
2021-02-22T05:21:45Z microk8s.daemon-containerd[2135]: time="2021-02-22T05:21:45.461304403Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /var/snap/microk8s/2049/args/cni-network: cni plugin not initialized: failed to load cni config"
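The log shows two distinct failures: containerd finds no CNI config under /var/snap/microk8s/2049/args/cni-network, and the pause-image pull from k8s.gcr.io times out. A minimal sketch to check both on a node (the check_cni_dir helper is illustrative, not part of MicroK8s; the paths are taken from the log above):

```shell
#!/bin/sh
# Illustrative helper: report whether a directory holds any CNI config files.
check_cni_dir() {
  dir="$1"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "OK: $dir contains CNI config"
  else
    echo "EMPTY: $dir has no CNI config (matches the containerd error)"
  fi
}

# On the affected node (revision path taken from the log; 'current' normally
# symlinks to the active snap revision):
# check_cni_dir /var/snap/microk8s/2049/args/cni-network
#
# And check whether the node can reach the registry the kubelet times out on:
# curl -sIm 10 https://k8s.gcr.io/v2/ >/dev/null && echo reachable || echo unreachable
```

If the directory is empty and the registry is unreachable, the two symptoms may share one cause: Calico can't write its CNI config until its pod starts, and the pod can't start until the pause image can be pulled.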

And here is the output of sudo microk8s kubectl describe nodes:

$ sudo microk8s kubectl describe nodes
Name:               rpi01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=rpi01
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 22 Feb 2021 03:01:30 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  rpi01
  AcquireTime:     <unset>
  RenewTime:       Mon, 22 Feb 2021 04:55:28 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 22 Feb 2021 04:53:56 +0000   Mon, 22 Feb 2021 03:01:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 22 Feb 2021 04:53:56 +0000   Mon, 22 Feb 2021 03:01:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 22 Feb 2021 04:53:56 +0000   Mon, 22 Feb 2021 03:01:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 22 Feb 2021 04:53:56 +0000   Mon, 22 Feb 2021 03:01:30 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.2.51
  Hostname:    rpi01
Capacity:
  cpu:                4
  ephemeral-storage:  122644532Ki
  memory:             7998760Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  121595956Ki
  memory:             7896360Ki
  pods:               110
System Info:
  Machine ID:                 f30fd0559fc943e5adc5b7b4b4895d05
  System UUID:                f30fd0559fc943e5adc5b7b4b4895d05
  Boot ID:                    ee9bdf61-7331-4500-8380-470bdad90769
  Kernel Version:             5.4.0-1028-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.3.7
  Kubelet Version:            v1.20.2-34+c6851e88267786
  Kube-Proxy Version:         v1.20.2-34+c6851e88267786
Non-terminated Pods:          (1 in total)
  Namespace                   Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-5ts8c    250m (6%)     0 (0%)      0 (0%)           0 (0%)         96m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                250m (6%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 52m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      52m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  52m                kubelet     Node rpi01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    52m                kubelet     Node rpi01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     52m                kubelet     Node rpi01 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  52m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 48m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      48m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  48m                kubelet     Node rpi01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    48m                kubelet     Node rpi01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     48m                kubelet     Node rpi01 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  48m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 18m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      18m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet     Node rpi01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet     Node rpi01 status is now: NodeHasSufficientPID
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet     Node rpi01 status is now: NodeHasSufficientMemory
  Warning  InvalidDiskCapacity      12m                kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 12m                kubelet     Starting kubelet.
  Normal   NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  12m (x7 over 12m)  kubelet     Node rpi01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet     Node rpi01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet     Node rpi01 status is now: NodeHasSufficientPID
  Normal   Starting                 11m                kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 11m                kubelet     Node rpi01 has been rebooted, boot id: ee9bdf61-7331-4500-8380-470bdad90769


Name:               rpi02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=rpi02
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 22 Feb 2021 03:20:12 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  rpi02
  AcquireTime:     <unset>
  RenewTime:       Mon, 22 Feb 2021 04:55:23 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 22 Feb 2021 04:54:01 +0000   Mon, 22 Feb 2021 03:20:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 22 Feb 2021 04:54:01 +0000   Mon, 22 Feb 2021 03:20:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 22 Feb 2021 04:54:01 +0000   Mon, 22 Feb 2021 03:20:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 22 Feb 2021 04:54:01 +0000   Mon, 22 Feb 2021 03:20:10 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.2.52
  Hostname:    rpi02
Capacity:
  cpu:                4
  ephemeral-storage:  122644532Ki
  memory:             7998760Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  121595956Ki
  memory:             7896360Ki
  pods:               110
System Info:
  Machine ID:                 06bbf095744c43a29fd71af49dd5b1b6
  System UUID:                06bbf095744c43a29fd71af49dd5b1b6
  Boot ID:                    214f4d0f-a289-4711-8647-6d02e85d3e23
  Kernel Version:             5.4.0-1028-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.3.7
  Kubelet Version:            v1.20.2-34+c6851e88267786
  Kube-Proxy Version:         v1.20.2-34+c6851e88267786
Non-terminated Pods:          (1 in total)
  Namespace                   Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-f4x27    250m (6%)     0 (0%)      0 (0%)           0 (0%)         95m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                250m (6%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Warning  InvalidDiskCapacity      51m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientPID     51m                kubelet     Node rpi02 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  51m                kubelet     Node rpi02 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    51m                kubelet     Node rpi02 status is now: NodeHasNoDiskPressure
  Normal   Starting                 51m                kubelet     Starting kubelet.
  Normal   NodeAllocatableEnforced  51m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 18m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      18m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientPID     18m (x4 over 18m)  kubelet     Node rpi02 status is now: NodeHasSufficientPID
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   NodeHasNoDiskPressure    18m (x4 over 18m)  kubelet     Node rpi02 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  18m (x4 over 18m)  kubelet     Node rpi02 status is now: NodeHasSufficientMemory
  Normal   Starting                 11m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      11m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node rpi02 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node rpi02 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node rpi02 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
  Warning  Rebooted                 11m                kubelet     Node rpi02 has been rebooted, boot id: 214f4d0f-a289-4711-8647-6d02e85d3e23
  Normal   Starting                 11m                kube-proxy  Starting kube-proxy.


Name:               rpi03
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=rpi03
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 22 Feb 2021 03:37:40 +0000
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  rpi03
  AcquireTime:     <unset>
  RenewTime:       Mon, 22 Feb 2021 04:55:24 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 22 Feb 2021 04:54:02 +0000   Mon, 22 Feb 2021 03:37:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 22 Feb 2021 04:54:02 +0000   Mon, 22 Feb 2021 03:37:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 22 Feb 2021 04:54:02 +0000   Mon, 22 Feb 2021 03:37:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 22 Feb 2021 04:54:02 +0000   Mon, 22 Feb 2021 03:37:39 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.2.53
  Hostname:    rpi03
Capacity:
  cpu:                4
  ephemeral-storage:  122644532Ki
  memory:             7998760Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  121595956Ki
  memory:             7896360Ki
  pods:               110
System Info:
  Machine ID:                 819542b7fec448b1b7875a1e09d01232
  System UUID:                819542b7fec448b1b7875a1e09d01232
  Boot ID:                    476e802a-0cc3-4a21-ad6f-ddfb5d4a944b
  Kernel Version:             5.4.0-1028-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.3.7
  Kubelet Version:            v1.20.2-34+c6851e88267786
  Kube-Proxy Version:         v1.20.2-34+c6851e88267786
Non-terminated Pods:          (1 in total)
  Namespace                   Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-vx8hr    250m (6%)     0 (0%)      0 (0%)           0 (0%)         77m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                250m (6%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 77m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      77m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  77m (x2 over 77m)  kubelet     Node rpi03 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    77m (x2 over 77m)  kubelet     Node rpi03 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     77m (x2 over 77m)  kubelet     Node rpi03 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  77m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 77m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 51m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      51m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  51m                kubelet     Node rpi03 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    51m                kubelet     Node rpi03 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     51m                kubelet     Node rpi03 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  51m                kubelet     Updated Node Allocatable limit across pods
  Warning  InvalidDiskCapacity      18m                kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 18m                kubelet     Starting kubelet.
  Normal   NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   NodeHasSufficientPID     18m (x5 over 18m)  kubelet     Node rpi03 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  18m (x5 over 18m)  kubelet     Node rpi03 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    18m (x5 over 18m)  kubelet     Node rpi03 status is now: NodeHasNoDiskPressure
  Normal   Starting                 11m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      11m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node rpi03 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node rpi03 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node rpi03 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 11m                kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 11m                kubelet     Node rpi03 has been rebooted, boot id: 476e802a-0cc3-4a21-ad6f-ddfb5d4a944b


Name:               rpi04
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=rpi04
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 22 Feb 2021 03:49:17 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  rpi04
  AcquireTime:     <unset>
  RenewTime:       Mon, 22 Feb 2021 04:55:26 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 22 Feb 2021 04:54:03 +0000   Mon, 22 Feb 2021 03:49:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 22 Feb 2021 04:54:03 +0000   Mon, 22 Feb 2021 03:49:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 22 Feb 2021 04:54:03 +0000   Mon, 22 Feb 2021 03:49:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 22 Feb 2021 04:54:03 +0000   Mon, 22 Feb 2021 03:49:12 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.2.54
  Hostname:    rpi04
Capacity:
  cpu:                4
  ephemeral-storage:  122644532Ki
  memory:             7998772Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  121595956Ki
  memory:             7896372Ki
  pods:               110
System Info:
  Machine ID:                 c82835252dd94bd2a587684e69434cbf
  System UUID:                c82835252dd94bd2a587684e69434cbf
  Boot ID:                    38edaff7-1dcc-430d-a4d7-d0ecd557f87d
  Kernel Version:             5.4.0-1028-raspi
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.3.7
  Kubelet Version:            v1.20.2-34+c6851e88267786
  Kube-Proxy Version:         v1.20.2-34+c6851e88267786
Non-terminated Pods:          (1 in total)
  Namespace                   Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-hn4wp    250m (6%)     0 (0%)      0 (0%)           0 (0%)         66m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                250m (6%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 66m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      66m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  66m (x2 over 66m)  kubelet     Node rpi04 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    66m (x2 over 66m)  kubelet     Node rpi04 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     66m (x2 over 66m)  kubelet     Node rpi04 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  66m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 66m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 51m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      51m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  51m                kubelet     Node rpi04 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    51m                kubelet     Node rpi04 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     51m                kubelet     Node rpi04 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  51m                kubelet     Updated Node Allocatable limit across pods
  Warning  InvalidDiskCapacity      18m                kubelet     invalid capacity 0 on image filesystem
  Normal   Starting                 18m                kubelet     Starting kubelet.
  Normal   NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet     Node rpi04 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  18m (x7 over 18m)  kubelet     Node rpi04 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet     Node rpi04 status is now: NodeHasNoDiskPressure
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 11m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      11m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node rpi04 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node rpi04 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node rpi04 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 11m                kube-proxy  Starting kube-proxy.
  Warning  Rebooted                 11m                kubelet     Node rpi04 has been rebooted, boot id: 38edaff7-1dcc-430d-a4d7-d0ecd557f87d
$ sudo microk8s.kubectl get node
NAME    STATUS     ROLES    AGE   VERSION
rpi02   NotReady   <none>   29m   v1.20.2-34+c6851e88267786
rpi01   NotReady   <none>   48m   v1.20.2-34+c6851e88267786
rpi03   NotReady   <none>   12m   v1.20.2-34+c6851e88267786
rpi04   NotReady   <none>   43s   v1.20.2-34+c6851e88267786
$ sudo microk8s kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   50m
$ sudo microk8s.kubectl get all --all-namespaces
NAMESPACE     NAME                                             READY   STATUS     RESTARTS   AGE
kube-system   pod/calico-node-5ts8c                            0/1     Init:0/3   0          54m
kube-system   pod/calico-node-f4x27                            0/1     Init:0/3   0          53m
kube-system   pod/calico-node-vx8hr                            0/1     Init:0/3   0          36m
kube-system   pod/calico-node-hn4wp                            0/1     Init:0/3   0          24m
kube-system   pod/calico-kube-controllers-847c8c99d-wn9w4      0/1     Pending    0          72m
kube-system   pod/coredns-86f78bb79c-7dbtr                     0/1     Pending    0          13m
kube-system   pod/metrics-server-7b7db5984b-4mn48              0/1     Pending    0          7m31s
kube-system   pod/kubernetes-dashboard-7ffd448895-fn9k2        0/1     Pending    0          5m51s
kube-system   pod/dashboard-metrics-scraper-6c4568dc68-p6mjg   0/1     Pending    0          5m50s

NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP                  72m
kube-system   service/kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   13m
kube-system   service/metrics-server              ClusterIP   10.152.183.25    <none>        443/TCP                  7m31s
kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.138   <none>        443/TCP                  5m51s
kube-system   service/dashboard-metrics-scraper   ClusterIP   10.152.183.232   <none>        8000/TCP                 5m51s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   4         4         0       4            0           kubernetes.io/os=linux   72m

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers     0/1     1            0           72m
kube-system   deployment.apps/metrics-server              0/1     1            0           7m31s
kube-system   deployment.apps/kubernetes-dashboard        0/1     1            0           5m51s
kube-system   deployment.apps/dashboard-metrics-scraper   0/1     1            0           5m51s
kube-system   deployment.apps/coredns                     0/1     1            0           13m

NAMESPACE     NAME                                                   DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-847c8c99d      1         1         0       72m
kube-system   replicaset.apps/coredns-86f78bb79c                     1         1         0       13m
kube-system   replicaset.apps/metrics-server-7b7db5984b              1         1         0       7m31s
kube-system   replicaset.apps/kubernetes-dashboard-7ffd448895        1         1         0       5m51s
kube-system   replicaset.apps/dashboard-metrics-scraper-6c4568dc68   1         1         0       5m50s
root@rpi01:/var/snap/microk8s/2049/args/cni-network# cat cni.yaml
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "vxlan"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename_file_optional": true,
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/kdd-crds.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BlockAffinity
    plural: blockaffinities
    singular: blockaffinity

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamblocks.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMBlock
    plural: ipamblocks
    singular: ipamblock

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMConfig
    plural: ipamconfigs
    singular: ipamconfig

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamhandles.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMHandle
    plural: ipamhandles
    singular: ipamhandle

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset

---
---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.13.2
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.13.2
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
            - name: CNI_NET_DIR
              value: "/var/snap/microk8s/current/args/cni-network"
            #- name: SKIP_TLS_VERIFY
            #  value: "true"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.13.2
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.13.2
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.2.52"
            # Enable IPIP
            #- name: CALICO_IPV4POOL_IPIP
            - name: CALICO_IPV4POOL_VXLAN
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.1.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "error"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "error"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              # - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              # - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/snap/microk8s/current/var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/snap/microk8s/current/var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /var/snap/microk8s/current/opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /var/snap/microk8s/current/args/cni-network
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/snap/microk8s/current/var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/snap/microk8s/current/var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml

# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.13.2
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml
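The `cni_network_config` entry at the top of this manifest is the template that the `install-cni` init container renders into `10-calico.conflist` under `/var/snap/microk8s/current/args/cni-network`; containerd's "no network config found" error means no rendered conflist exists there yet. A minimal sketch of that substitution and of the fields the CNI loader minimally requires (the MTU, node name, and kubeconfig path values here are illustrative assumptions, not the real rendered values):

```python
import json

# A trimmed-down version of the cni_network_config template from the
# ConfigMap above. install-cni substitutes the __PLACEHOLDERS__ before
# writing the file; the raw template is NOT valid JSON until it does.
template = '''{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "calico", "mtu": __CNI_MTU__,
     "nodename": "__KUBERNETES_NODE_NAME__",
     "kubernetes": {"kubeconfig": "__KUBECONFIG_FILEPATH__"}}
  ]
}'''

# Substitute placeholders the way install-cni does (values are assumptions).
rendered = (template
            .replace("__CNI_MTU__", "1440")
            .replace("__KUBERNETES_NODE_NAME__", "rpi01")
            .replace("__KUBECONFIG_FILEPATH__", "/etc/cni/net.d/calico-kubeconfig"))

conf = json.loads(rendered)
# A conflist needs at least a name, a cniVersion, and a non-empty plugins list.
assert conf["name"] and conf["cniVersion"] and conf["plugins"]
print("conflist OK:", conf["name"], conf["cniVersion"])
```

If a manually rendered config parses like this but the real `cni-network` directory still has no `10-calico.conflist`, the `install-cni` container never ran — which matches the `calico-node` pods stuck in `Init:0/3` above.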

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5

Top GitHub Comments

1 reaction
whg517 commented, Sep 8, 2021

Hi, @cjbd can you share your solution to this problem?

I hit this error too while setting up a cluster by following the official Kubernetes documentation: kubeadm cannot initialize properly, and the same message appears in the kubelet log.

0 reactions
cjbd commented, Feb 24, 2021

@balchua, thanks! I fixed my network, and now the status says microk8s is running.
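For anyone hitting the same symptom: the fix here was inter-node networking, not Calico itself. A minimal, hypothetical reachability check from each Pi (the node IP mirrors the `can-reach` target in `cni.yaml` above; ports 16443, 19001, and 10250 are the MicroK8s API server, dqlite clustering, and kubelet defaults):

```python
import socket

# Hypothetical node list: 192.168.2.52 is the can-reach target from cni.yaml.
NODES = ["192.168.2.52"]
# MicroK8s defaults: 16443 = API server, 19001 = dqlite clustering, 10250 = kubelet.
PORTS = [16443, 19001, 10250]

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in NODES:
    for port in PORTS:
        state = "open" if reachable(host, port) else "closed/unreachable"
        print(f"{host}:{port} {state}")
```

If dqlite (19001) is unreachable between nodes, `microk8s status` reports "not running" on joiners, and the CNI config is never written — the exact chain of symptoms in this issue.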

