Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Fresh microk8s v1.24 installation has a failing calico-kube-controllers pod

See original GitHub issue

Summary

On a fresh microk8s-v1.24.4 installation on ubuntu-server-22.04, the calico-kube-controllers deployment’s pod fails from the start, leaving it in the “CrashLoopBackOff” state. Here is the list of pods in the system:

NAMESPACE     NAME                                       READY   STATUS             RESTARTS        AGE
kube-system   calico-node-knlnn                          1/1     Running            2 (35m ago)     57m
ingress       nginx-ingress-microk8s-controller-clx2q    1/1     Running            0               27m
kube-system   calico-kube-controllers-64d4cb6ccb-26krj   0/1     CrashLoopBackOff   7 (4m38s ago)   15m
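
For reference, the listing above is standard pod-query output; a minimal sketch of the command used, assuming a default single-node microk8s install (pod names and hashes will differ per cluster):

# list all pods across namespaces, including node placement and restart counts
microk8s kubectl get pods --all-namespaces -o wide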

Here is the log-output from the calico-kube-controllers pod:

standard_init_linux.go:228: exec user process caused: exec format error
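
An “exec format error” at container start usually indicates that the container binary does not match the host CPU architecture (or that the binary is corrupt). A hedged way to check this, reusing the pod name from the listing above purely for illustration (it will differ on other clusters):

# compare the host architecture with the image the failing pod is running
uname -m
microk8s kubectl -n kube-system get pod calico-kube-controllers-64d4cb6ccb-26krj \
  -o jsonpath='{.spec.containers[0].image}{"\n"}'
# pull the crash log of the previous container attempt
microk8s kubectl -n kube-system logs calico-kube-controllers-64d4cb6ccb-26krj --previous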

What Should Happen Instead?

The calico-kube-controllers pod, along with all other kube-system pods, should be running properly.

Reproduction Steps

  1. Install Ubuntu Server 22.04 on VirtualBox Version 6.1.38 r153438 (Qt5.12.8)
  2. While installing Ubuntu Server, enable microk8s installation via cloud-init, selecting the stable/1.24 channel to install Kubernetes v1.24.4 (see the sketch after this list)
  3. Once installed, log in to the Ubuntu server and run microk8s kubectl commands to check the Kubernetes installation
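
The cloud-init step in 2 boils down to installing the snap from the requested channel; a minimal sketch of the equivalent manual commands (the exact cloud-init/autoinstall syntax is the Ubuntu installer’s and is not reproduced here):

# install microk8s from the 1.24 stable channel and wait for the node to come up
sudo snap install microk8s --classic --channel=1.24/stable
sudo microk8s status --wait-ready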

Introspection Report

inspection-report-20220914_112938.tar.gz

Can you suggest a fix?

Are you interested in contributing a fix?

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
dnewsholme commented, Sep 14, 2022

I ran into this after upgrading to 22.04.1 LTS. My microk8s snap also upgraded to 1.25, and when I tried to roll back to 1.24 I hit this issue because snap doesn’t clean up the Calico virtual network devices on removal.
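
If stale Calico devices are the suspect, a quick hedged check before reinstalling (the interface names are the usual Calico defaults and may differ on a given host):

# look for Calico-created interfaces that survived the snap removal
ip -o link show | grep -E 'cali|vxlan.calico|tunl0'
# delete a leftover device before reinstalling, e.g. the Calico VXLAN interface
sudo ip link delete vxlan.calico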

0 reactions
tapanhalani commented, Sep 19, 2022

I have successfully installed microk8s multiple times on multiple fresh VirtualBox machines, and I cannot reproduce this issue again.

Closing this issue now. Thank you @neoaggelos for attending to this.

Read more comments on GitHub >

Top Results From Across the Web

Troubleshooting - MicroK8s
If a pod is not behaving as expected, the first port of call should be the logs. First determine the resource identifier for...
Read more >
calico-kube-controllers can't get API Server: context deadline ...
I have the problem after having installed microk8s on my Ubuntu 21.10 server: sudo snap install microk8s --channel=1.23 --classic Checked ...
Read more >
microk8s pods are restarting frequently on my raspberry pi ...
I switched back from K3S to MicroK8S and got the restarts on a fresh installed MicroK8S cluster with only one node.
Read more >
MicroK8s v1.24 released! - Ecosystem - Discuss Kubernetes
MicroK8s is a Kubernetes cluster delivered as a single snap package - it can be installed on any Linux distribution which supports snaps, ......
Read more >
Appliance install using Kubernetes microk8s - Google Groups
Error: INSTALLATION FAILED: Internal error occurred: failed calling ... Are you able to run a describe on the ingress pod to see what...
Read more >
