Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Basic new install ... pods in CrashLoopBackOff

See original GitHub issue

This seems related to Calico, specifically calico-kube-controllers, and as a result nothing works: ingress, DNS, dashboard. The whole install is in an unstable state. I did a purge and installed with the latest stable, i.e. `sudo snap install microk8s --classic --channel=latest/stable`, with the same result.
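For reference, the purge-and-reinstall sequence was roughly the following (a sketch; it assumes the snap was removed with `--purge` so its data is wiped as well):

```bash
# Remove MicroK8s together with its data, then reinstall from the stable channel.
sudo snap remove microk8s --purge
sudo snap install microk8s --classic --channel=latest/stable

# Wait for the node to report ready, then check the kube-system pods again.
microk8s status --wait-ready
microk8s kubectl get pods -n kube-system
```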

There are numerous errors in the logs related to Calico, as you will see in the report, but here is one example:

13817 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-58698c8568-xkbq4_kube-system(805ff11b-d8af-4626-8d24-70edbe3e88f3)\"" pod="kube-system/calico-kube-controllers-58698c8568-xkbq4" podUID=805ff11b-d8af-4626-8d24-70edbe3e88f3
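For anyone triaging the same error, the usual next step is to read the pod's events and the logs of the previous (crashed) container instance; a sketch using the pod name from the message above (yours will differ):

```bash
# Events and restart reason for the crashing controller pod
microk8s kubectl -n kube-system describe pod calico-kube-controllers-58698c8568-xkbq4

# Logs from the container instance that last crashed
microk8s kubectl -n kube-system logs calico-kube-controllers-58698c8568-xkbq4 --previous
```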

I disabled my firewall and VPN and still see the same errors. I read a post that suggested disabling HA (`microk8s disable ha-cluster`), which didn't fix the issue. I also increased the timeouts on the Calico liveness and readiness probes; same errors.
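The two workarounds mentioned above look roughly like this (a sketch only; the container name and the 30-second value are assumptions rather than values from the issue, and raising probe timeouts only papers over whatever is making the probes fail):

```bash
# Workaround 1: disable HA clustering, as suggested in the post mentioned above.
microk8s disable ha-cluster

# Workaround 2: raise the liveness/readiness probe timeouts on the controller
# Deployment (container name assumed to be "calico-kube-controllers").
microk8s kubectl -n kube-system patch deployment calico-kube-controllers --patch '
spec:
  template:
    spec:
      containers:
      - name: calico-kube-controllers
        livenessProbe:
          timeoutSeconds: 30
        readinessProbe:
          timeoutSeconds: 30
'
```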

Please run `microk8s inspect` and attach the generated tarball to this issue: [inspection-report-20220420_175832.tar.gz](https://github.com/canonical/microk8s/files/8526463/inspection-report-20220420_175832.tar.gz)
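The tarball above is produced by the built-in inspection script, which bundles service logs and cluster state into a single archive:

```bash
# Builds the report and prints the path of the generated
# inspection-report-<timestamp>.tar.gz at the end of the run.
sudo microk8s inspect
```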

We appreciate your feedback. Thank you for using microk8s.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
SamD commented, Apr 21, 2022

One more thing: purging MicroK8s still leaves the Calico virtual interfaces behind. After several restarts while attempting to fix the problem, I noticed it just kept creating new ones rather than cleaning up or reusing the old ones, so it became quite polluted.
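The stale interfaces can be listed and deleted by hand; a rough sketch (the interface names below are typical of what Calico creates, so check the `ip link` output on your own machine first):

```bash
# List anything Calico-looking that survived the purge.
ip link show | grep -E 'cali|vxlan\.calico|tunl0'

# Delete a stale interface by name, e.g. the VXLAN device Calico creates.
# Repeat for any leftover cali* veth interfaces reported above.
sudo ip link delete vxlan.calico
```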

0 reactions
SamD commented, Apr 28, 2022

@ktsakalozos Yep, that did it, thanks! 🥳

Read more comments on GitHub >

Top Results From Across the Web

  • Kubernetes CrashLoopBackOff: What it is, and how to fix it?
    Learn to visualize, alert, and troubleshoot a Kubernetes CrashLoopBackOff: a pod starting, crashing, starting again, and crashing again.

  • Understanding Kubernetes CrashLoopBackoff Events
    CrashLoopBackOff is a status message that indicates one of your pods is in a constant state of flux: one or more containers are failing...

  • Kubernetes CrashLoopBackOff Error: What It Is and How to Fix It
    CrashLoopBackOff is a common Kubernetes error, which indicates that a pod failed to start, Kubernetes tried to restart it, and it continued to...

  • Pod In CrashLoopBackOff State – Runbooks (GitHub Pages)
    A CrashLoopBackOff error occurs when a pod startup fails repeatedly in Kubernetes. When running a kubectl get pods command, ...

  • Kubernetes - How to Debug CrashLoopBackOff in a Container
    Here is the output from kubectl describe pod for a CrashLoopBackOff: ... https://raw.githubusercontent.com/releaseapp-io/container-debug/main/install.sh
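The results above all converge on the same short triage loop; a minimal sketch with placeholder names:

```bash
# 1. Spot pods stuck in a restart loop (STATUS column shows CrashLoopBackOff).
kubectl get pods -A

# 2. Read the Events section at the bottom of describe for the failure reason.
kubectl describe pod <pod-name> -n <namespace>

# 3. Check the logs of the container instance that crashed last.
kubectl logs <pod-name> -n <namespace> --previous
```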
