
DNS is crashlooping

See original GitHub issue
$ microk8s.kubectl get all --all-namespaces 
NAMESPACE     NAME                                                  READY     STATUS             RESTARTS   AGE
kube-system   pod/heapster-v1.5.2-84f5c8795f-m466m                  4/4       Running            0          23m
kube-system   pod/kube-dns-864b8bdc77-6mst4                         2/3       CrashLoopBackOff   15         23m
kube-system   pod/kubernetes-dashboard-6948bdb78-262gm              0/1       CrashLoopBackOff   8          23m
kube-system   pod/monitoring-influxdb-grafana-v4-7ffdc569b8-dbmvg   2/2       Running            0          23m

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             23m
kube-system   service/heapster               ClusterIP   10.152.183.109   <none>        80/TCP              23m
kube-system   service/kube-dns               ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP       23m
kube-system   service/kubernetes-dashboard   ClusterIP   10.152.183.178   <none>        443/TCP             23m
kube-system   service/monitoring-grafana     ClusterIP   10.152.183.68    <none>        80/TCP              23m
kube-system   service/monitoring-influxdb    ClusterIP   10.152.183.252   <none>        8083/TCP,8086/TCP   23m

NAMESPACE     NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/heapster-v1.5.2                  1         1         1            1           23m
kube-system   deployment.apps/kube-dns                         1         1         1            0           23m
kube-system   deployment.apps/kubernetes-dashboard             1         1         1            0           23m
kube-system   deployment.apps/monitoring-influxdb-grafana-v4   1         1         1            1           23m

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/heapster-v1.5.2-84f5c8795f                  1         1         1         23m
kube-system   replicaset.apps/kube-dns-864b8bdc77                         1         1         0         23m
kube-system   replicaset.apps/kubernetes-dashboard-6948bdb78              1         1         0         23m
kube-system   replicaset.apps/monitoring-influxdb-grafana-v4-7ffdc569b8   1         1         1         23m

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 1
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

3 reactions
reyou commented, Dec 27, 2018

@tvansteenburgh OMG! I was hitting my head against the wall for a week over this, and it works like a charm! Thanks a ton!

3 reactions
tvansteenburgh commented, Jul 13, 2018

Inspecting the ufw log showed that all the denials were happening on the cbr0 interface.
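One way to spot such denials is to filter the ufw log for the interface name. This is a sketch, not the commenter's exact command: the log path `/var/log/ufw.log` is ufw's default on Ubuntu, and the log line below is a synthetic sample in the shape of a typical `[UFW BLOCK]` entry, so the snippet is self-contained.

```shell
# Synthetic sample of a ufw denial line, so this snippet runs anywhere.
# On a real host you would grep /var/log/ufw.log directly.
cat <<'EOF' > /tmp/ufw.log.sample
Jul 13 10:00:01 host kernel: [UFW BLOCK] IN=cbr0 OUT= SRC=10.1.1.2 DST=10.152.183.1 PROTO=TCP DPT=443
EOF

# Count blocked packets seen on the cbr0 interface.
grep 'UFW BLOCK' /tmp/ufw.log.sample | grep -c 'cbr0'   # prints 1
```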

ubuntu@ip:~$ ifconfig cbr0
cbr0      Link encap:Ethernet  HWaddr 0a:58:0a:01:01:01  
          inet addr:10.1.1.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::e0d0:96ff:fee2:633e/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:5577 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4904 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:989161 (989.1 KB)  TX bytes:2862109 (2.8 MB)

The 10.1.1.0/24 subnet (cbr0's network, given the 255.255.255.0 mask above) corresponds to the pod IP addresses:

ubuntu@ip:~$ microk8s.kubectl get po -n kube-system -o wide
NAME                                              READY     STATUS    RESTARTS   AGE       IP         NODE
heapster-v1.5.2-577898ddbf-8mz8j                  4/4       Running   0          9m        10.1.1.7   ip-172-31-19-85
kube-dns-864b8bdc77-4n5s9                         3/3       Running   6          15m       10.1.1.2   ip-172-31-19-85
kubernetes-dashboard-6948bdb78-62n4r              1/1       Running   5          15m       10.1.1.4   ip-172-31-19-85
monitoring-influxdb-grafana-v4-7ffdc569b8-2t2b4   2/2       Running   0          15m       10.1.1.3   ip-172-31-19-85
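The correspondence can be sanity-checked with a quick script. This is an illustrative sketch using `ipcalc`-free shell arithmetic via `python3 -c` (the pod IPs are copied from the listing above; the /24 network comes from the ifconfig output):

```shell
# Check that each pod IP falls inside cbr0's 10.1.1.0/24 network.
for ip in 10.1.1.7 10.1.1.2 10.1.1.4 10.1.1.3; do
  python3 -c "import ipaddress, sys; \
sys.exit(0 if ipaddress.ip_address('$ip') in ipaddress.ip_network('10.1.1.0/24') else 1)" \
    && echo "$ip is in 10.1.1.0/24"
done
```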

So the fix was:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0

Top Results From Across the Web

  • DNS is crashlooping · Issue #67 · canonical/microk8s
  • DNS CrashLoop : r/kubernetes
  • Troubleshoot DNS failures with Amazon EKS - AWS
  • Kubernetes CoreDNS in CrashLoopBackOff
  • Kubernetes CrashLoopBackOff Error: What It Is and How to ...
