
Microk8s - Ingress problem (port 80 is already in use)

See original GitHub issue

We are running microk8s in an airgapped environment, so getting hold of logfiles and inspect tarballs is somewhat problematic, but this is the issue we are experiencing.

We had a working 3-node cluster (microk8s v1.20.6 on Ubuntu 20.04) running rook-ceph storage which got corrupted for some reason… but that is beside the point. After some consultation we decided we would just zap the disks associated with rook/ceph and rebuild the cluster.

Did a microk8s leave then microk8s reset on all the nodes - and then removed the old microk8s snap using snap remove microk8s

Checked /var/snap/ to see that the microk8s snap was gone - and it was

So far so good - as I said the system is airgapped so we had to import the snap/assert files manually and then installed the latest microk8s (v1.23.4) using snap ack microk8s_3021.assert and snap install microk8s_3021.snap --classic

The required base images have also been imported
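Sideloading an image into microk8s' containerd store generally looks something like this (the filenames here are illustrative):

```shell
# On a machine with internet access: save the image to a tarball
docker save k8s.gcr.io/ingress-nginx/controller:v1.1.0 -o ingress-controller-v1.1.0.tar

# On the airgapped node: import the tarball into microk8s' containerd store
microk8s ctr image import ingress-controller-v1.1.0.tar
```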

microk8s ctr image ls |awk '{print $1}'|grep -v sha256
REF
docker.io/calico/cni:v3.19.1
docker.io/calico/kube-controllers:v3.17.3
docker.io/calico/node:v3.19.1
docker.io/calico/pod2daemon-flexvol:v3.19.1
docker.io/coredns/coredns:1.8.0
k8s.gcr.io/ingress-nginx/controller:v1.1.0
k8s.gcr.io/pause:3.1

Rebuilding the cluster with the 3 nodes seems to work just fine, and enabling rbac/dns with microk8s enable rbac dns also works without a hitch. The problems start when I try to add the ingress using microk8s enable ingress

The pods get fired up but end up in CrashLoopBackOff

microk8s kubectl -n ingress logs nginx-ingress-microk8s-controller-t8n8p
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.1.0
  Build:         cacbee86b6ccc45bde8ffc184521bed3022e7dee
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9

-------------------------------------------------------------------------------

F0316 11:09:27.290937       8 main.go:67] port 80 is already in use. Please check the flag --http-port
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
        k8s.io/klog/v2@v2.10.0/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x28a5ba0, 0x3, {0x0, 0x0}, 0xc0000ad1f0, 0x1, {0x1f83e62, 0x28a66e0}, 0xc0000ae940, 0x0)
        k8s.io/klog/v2@v2.10.0/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printDepth(0x1, 0x1, {0x0, 0x0}, {0x0, 0x0}, 0x444ed1, {0xc0000ae940, 0x1, 0x1})
        k8s.io/klog/v2@v2.10.0/klog.go:735 +0x1ba
k8s.io/klog/v2.(*loggingT).print(...)
        k8s.io/klog/v2@v2.10.0/klog.go:717
k8s.io/klog/v2.Fatal(...)
        k8s.io/klog/v2@v2.10.0/klog.go:1494
main.main()
        k8s.io/ingress-nginx/cmd/nginx/main.go:67 +0x1d3

goroutine 34 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
        k8s.io/klog/v2@v2.10.0/klog.go:1169 +0x6a
created by k8s.io/klog/v2.init.0
        k8s.io/klog/v2@v2.10.0/klog.go:420 +0xfb

There is no “serverside” usage of port 80 that I can see, and no services created in microk8s that should be a problem either.

netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:42979         0.0.0.0:*               LISTEN      15883/containerd    
tcp        0      0 127.0.0.1:40869         0.0.0.0:*               LISTEN      1085/containerd     
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      18774/kubelite      
tcp        0      0 0.0.0.0:25000           0.0.0.0:*               LISTEN      14796/python3       
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      1159/python         
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      18774/kubelite      
tcp        0      0 127.0.0.1:9099          0.0.0.0:*               LISTEN      17155/calico-node   
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      18774/kubelite      
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      1030/systemd-resolv 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1270/sshd: /usr/sbi 
tcp        0      0 XXX.XXX.XXX.XXX:19001   0.0.0.0:*               LISTEN      14958/k8s-dqlite    
tcp        0      0 127.0.0.1:1338          0.0.0.0:*               LISTEN      15883/containerd    
tcp6       0      0 :::10250                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::10255                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::10257                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::10259                :::*                    LISTEN      18774/kubelite      
tcp6       0      0 :::22                   :::*                    LISTEN      1270/sshd: /usr/sbi 
tcp6       0      0 :::16443                :::*                    LISTEN      18774/kubelite      
udp        0      0 127.0.0.53:53           0.0.0.0:*                           1030/systemd-resolv 
udp        0      0 0.0.0.0:4789            0.0.0.0:*                           -  
microk8s kubectl -n ingress describe pod nginx-ingress-microk8s-controller-mnphj
Name:         nginx-ingress-microk8s-controller-mnphj
Namespace:    ingress
Priority:     0
Node:         my-servername-n02/192.168.100.11
Start Time:   Wed, 16 Mar 2022 12:09:04 +0100
Labels:       controller-revision-hash=85d7cb8664
              name=nginx-ingress-microk8s
              pod-template-generation=1
Annotations:  cni.projectcalico.org/podIP: 10.1.26.1/32
              cni.projectcalico.org/podIPs: 10.1.26.1/32
Status:       Running
IP:           10.1.26.1
IPs:
  IP:           10.1.26.1
Controlled By:  DaemonSet/nginx-ingress-microk8s-controller
Containers:
  nginx-ingress-microk8s:
    Container ID:  containerd://cb8b9df98cb95dfe50703be7b54134cc2ea5693726292e5bc5187ca1085df9c2
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.0
    Image ID:      sha256:ae1a7201ec9545194b2889da30face5f2a7a45e2ba8c7479ac68c9a45a73a7eb
    Ports:         80/TCP, 443/TCP, 10254/TCP
    Host Ports:    80/TCP, 443/TCP, 10254/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
      --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
      --ingress-class=public
       
      --publish-status-address=127.0.0.1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 16 Mar 2022 12:35:25 +0100
      Finished:     Wed, 16 Mar 2022 12:35:25 +0100
    Ready:          False
    Restart Count:  10
    Liveness:       http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-microk8s-controller-mnphj (v1:metadata.name)
      POD_NAMESPACE:  ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6z6w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-t6z6w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Warning  BackOff  4m2s (x132 over 29m)  kubelet  Back-off restarting failed container

Would appreciate it greatly if someone has some insight or ways to figure out what is going on here. I have tried curl 0.0.0.0:80 -vvv but that fails, so I really don’t understand why it insists that port 80 is in use…
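One thing worth noting: netstat on the host only shows listeners in the host network namespace, while the controller’s startup check tries to bind port 80 inside the pod’s own network namespace. Inspecting that namespace directly might narrow things down; a sketch (the `<PID>` placeholder must be filled in from the task listing, and names will differ per node):

```shell
# Find the controller container's host-side PID via containerd
microk8s ctr task ls

# List listeners inside that container's network namespace
# (replace <PID> with the PID shown by the task listing)
sudo nsenter -t <PID> -n ss -ltnp

# And double-check the host side for anything already bound to port 80
sudo ss -ltnp 'sport = :80'
```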

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
balchua commented, Mar 21, 2022

@mwilberg I couldn’t find anything fishy. If you run a custom app using port 80, does it get the same error?
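A minimal manifest for such a test could look like the following; the pod name is made up, and the image is a stand-in for whatever happens to be available in the airgapped image store and listens on port 80:

```yaml
# Illustrative test pod: binds hostPort 80 the same way the ingress
# controller's DaemonSet does. Name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: port80-test
spec:
  containers:
  - name: web
    image: nginx:latest   # substitute an image available in the airgapped store
    ports:
    - containerPort: 80
      hostPort: 80
```

Apply with microk8s kubectl apply -f port80-test.yaml; if this pod hits a similar bind error, the problem is not specific to the ingress addon.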

0 reactions
mwilberg commented, Mar 22, 2022

@balchua Will give it a try later today - didn’t really try much in terms of anything custom yet, wanted to get the basic functionality up and running first.


Top Results From Across the Web

Troubleshooting - MicroK8s
If a pod is not behaving as expected, the first port of call should be the logs. First determine the resource identifier for...

Microk8s - Ingress not working : r/kubernetes - Reddit
Ingress use the IP of every worker node and expose it on every node with the port 80/444.

kubernetes - Microk8s reaches internet but not internal network
In particular, I have a service (a container) that connects to the windows AD server of the local network to authenticate users of...

Appliance install using Kubernetes microk8s - Google Groups
I suspect some kind of right problem to use port 80. ... everything on a new virtual machine and now I cannot even...

Set up Ingress on Minikube with the NGINX Ingress Controller
kubectl expose deployment web --type=NodePort --port=8080. The output should be: service/web exposed. Verify the Service is created and is ...
