Basic new install ... pods in CrashLoopBackOff
This seems related to Calico, specifically calico-kube-controllers, and as a result nothing works: ingress, DNS, dashboard. The whole install is in an unstable state.
I did a purge and installed with the latest stable, i.e. `sudo snap install microk8s --classic --channel=latest/stable`.
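For reference, the purge and reinstall were roughly:

```bash
# Purge and reinstall; --purge removes the snap's data as well.
sudo snap remove microk8s --purge
sudo snap install microk8s --classic --channel=latest/stable
microk8s status --wait-ready
```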
Same result.
There are numerous errors in the logs related to Calico, as you will see in the report; for example:

```
13817 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-58698c8568-xkbq4_kube-system(805ff11b-d8af-4626-8d24-70edbe3e88f3)\"" pod="kube-system/calico-kube-controllers-58698c8568-xkbq4" podUID=805ff11b-d8af-4626-8d24-70edbe3e88f3
```
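To dig further into the crashing pod itself, something like the following pulls its status and the logs from the previous (failed) container run; the pod name here is the one from my install, yours will differ:

```bash
# Check pod status in kube-system and inspect the failing pod.
microk8s kubectl get pods -n kube-system
microk8s kubectl describe pod -n kube-system calico-kube-controllers-58698c8568-xkbq4
# Logs from the previous container instance, before the last restart.
microk8s kubectl logs -n kube-system calico-kube-controllers-58698c8568-xkbq4 --previous
```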
I disabled my firewall and VPN and still see the same errors.
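For anyone checking the firewall angle before disabling it outright, the usual approach is to allow traffic on the Calico interfaces; a rough sketch, assuming ufw and that `vxlan.calico` is the interface Calico created on this host:

```bash
# Sketch only: interface names depend on your setup (check `ip link`).
sudo ufw allow in on vxlan.calico && sudo ufw allow out on vxlan.calico
sudo ufw default allow routed
```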
I read a post that suggested disabling HA with `microk8s disable ha-cluster`, which didn't fix the issue.
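For completeness, that amounts to the following (it can be turned back on later with `microk8s enable ha-cluster`):

```bash
microk8s disable ha-cluster
microk8s status --wait-ready
```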
I increased the timeouts on the Calico liveness and readiness probes; same errors.
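Roughly what I mean by that, as a sketch using a strategic merge patch against the calico-kube-controllers Deployment (the container name `calico-kube-controllers` is assumed from the stock Calico manifest; `microk8s kubectl -n kube-system edit deployment calico-kube-controllers` works just as well):

```bash
# Bump probe timeouts; the Deployment rolls the pod with the new settings.
microk8s kubectl -n kube-system patch deployment calico-kube-controllers \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"calico-kube-controllers","livenessProbe":{"timeoutSeconds":30,"periodSeconds":30},"readinessProbe":{"timeoutSeconds":30,"periodSeconds":30}}]}}}}'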
Please run `microk8s inspect` and attach the generated tarball to this issue:
[inspection-report-20220420_175832.tar.gz](https://github.com/canonical/microk8s/files/8526463/inspection-report-20220420_175832.tar.gz)
We appreciate your feedback. Thank you for using microk8s.
Top GitHub Comments
One more thing: purging microk8s still leaves the Calico virtual interfaces behind. After several restarts while attempting to fix the problem, I noticed it just kept creating new ones rather than cleaning up or reusing the old ones, so the host became quite polluted.
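A sketch of the kind of cleanup I mean; the interface name patterns (`cali*`, `vxlan.calico`) are assumptions, so check the `ip link` output before deleting anything:

```bash
# List leftover Calico interfaces.
ip -o link show | awk -F': ' '{print $2}' | grep -E '^(cali|vxlan\.calico)'
# Remove a stale one by name, e.g.:
sudo ip link delete vxlan.calico
```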
@ktsakalozos Yep, that did it, thanks! 🥳