
No public DNS resolution inside pods

See original GitHub issue

I cannot resolve public DNS inside running pods (with and without the DNS addon), although internal k8s DNS works fine with the DNS addon enabled. ufw is disabled. Running on DigitalOcean.

$ snap version
snap    2.33.1ubuntu2
snapd   2.33.1ubuntu2
series  16
ubuntu  16.04
kernel  4.4.0-130-generic
$ snap list
Name      Version    Rev   Tracking  Developer  Notes
core      16-2.33.1  4917  stable    canonical  core
microk8s  v1.11.0    104   beta      canonical  classic
any-pod$ curl google.com
curl: (6) Could not resolve host: google.com

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 17 (5 by maintainers)

Top GitHub Comments

31 reactions
ktsakalozos commented, Jul 24, 2018

There are a couple of things that may be causing this behavior.

  1. Your machine switched its IP. A workaround is to restart microk8s:
sudo snap disable microk8s
sudo snap enable microk8s

Your IP can change if, for example, you are on a laptop moving from place to place, or after a suspend/resume. The API server advertises and listens on your external IP. Services such as DNS and the dashboard that need to contact the API server will stop working properly if that IP changes. Run microk8s.kubectl get ep kubernetes to see the endpoint of the API server; if this endpoint is not an IP on your system, then you know you have this issue.
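The check described above can be sketched as a small script. The `endpoint_is_local` helper is illustrative, not part of microk8s; the IP addresses below are placeholder fixtures standing in for the real outputs of `microk8s.kubectl get ep kubernetes` and `hostname -I`.

```shell
#!/bin/sh
# Illustrative helper: succeeds when the endpoint IP appears in a
# space-separated list of the host's IPs.
endpoint_is_local() {
  endpoint_ip="$1"
  host_ips="$2"
  case " $host_ips " in
    *" $endpoint_ip "*) return 0 ;;
    *) return 1 ;;
  esac
}

# In practice the two inputs would come from:
#   microk8s.kubectl get ep kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}'
#   hostname -I
if endpoint_is_local "10.0.0.5" "10.0.0.5 172.17.0.1"; then
  echo "API server endpoint matches a local IP"
else
  echo "Stale endpoint - restart microk8s:"
  echo "  sudo snap disable microk8s && sudo snap enable microk8s"
fi
```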

This was also discussed in https://github.com/ubuntu/microk8s/issues/72. We know we need to address this in a more elegant way.

  2. Traffic forwarding is blocked. Do this for a quick check:
sudo iptables -P FORWARD ACCEPT

And see if your pods can now access the internet.

What happens here is that your system is not aware it is functioning as a router, so it drops any packets it does not know about, and that includes any traffic from the k8s cluster to the outside world. It is better to be more precise about the traffic you want to allow: for example, you might sudo iptables -A FORWARD -i wlan2 -j ACCEPT, or filter only the k8s-related traffic.
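A more targeted alternative to opening the whole FORWARD chain might look like the following sketch. The interface name (cbr0) and pod CIDR (10.1.0.0/16) are assumptions based on common microk8s defaults; substitute your own, and note these commands require root.

```shell
# Allow forwarding only for cluster traffic, rather than everything.
sudo iptables -A FORWARD -i cbr0 -j ACCEPT   # packets leaving pods
sudo iptables -A FORWARD -o cbr0 -j ACCEPT   # replies going back to pods

# Or scope by the pod subnet instead of the bridge interface:
# sudo iptables -A FORWARD -s 10.1.0.0/16 -j ACCEPT
# sudo iptables -A FORWARD -d 10.1.0.0/16 -j ACCEPT
```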

This is more difficult to address from within microk8s since network interfaces and routing may change without any notice.

Finally, make sure you have your firewall correctly setup so it allows traffic from and to cbr0 as described in the troubleshooting part of the README.
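For ufw specifically, the kind of rules the README's troubleshooting section describes would look roughly like this (only relevant if ufw is enabled, and shown here as a sketch rather than an exact quote):

```shell
# Permit traffic on the cluster bridge and allow routed (forwarded) packets.
sudo ufw allow in on cbr0
sudo ufw allow out on cbr0
sudo ufw default allow routed
```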

@adrianchifor @toxsick please let me know if you are affected by any of those two issues.

10 reactions
toxsick commented, Jul 24, 2018

Hi,

thanks for the detailed clarification! Here is what I found:

  1. I had already found the iptables problem. I thought that since my firewall service is disabled this could not be the problem, but my pods had internet access immediately after I ran iptables -P FORWARD ACCEPT on the node.

  2. As you pointed out, kube-dns uses the Google DNS servers, so I was not able to reach any servers on my local network. After I changed the upstreamNameservers in Config and storage -> Config Maps -> kube-dns (namespace kube-system) to my local DNS server, everything worked fine!
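The same change can be made from the command line by applying the kube-dns ConfigMap directly; `upstreamNameservers` is the documented kube-dns customization key. The IP 192.168.1.1 below is a placeholder for your local DNS server, not a value from this thread.

```shell
# Point kube-dns at a local resolver instead of the defaults.
microk8s.kubectl -n kube-system apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["192.168.1.1"]
EOF
```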

Thank you for the hints!

Regards, Hannes


Top Results From Across the Web

No public DNS resolution inside pods · Issue #75 - GitHub
I cannot resolve public DNS inside running pods (with and without dns addon), althought internal k8s DNS works ok with the dns addon...
DNS for Services and Pods - Kubernetes
Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.
Can't resolve dns from inside k8s pod - Stack Overflow
In dnsutils pod exec ping stackoverflow.com
DNS resolution works in host but not from Kubernetes Pod
After migrating to a new corporate network, we got into a scenario where DNS resolution worked fine in host (baremetal or VM) but...
Kubernetes Pod DNS Resolution - Server Fault
NAMESPACE.pod.cluster.local: No answer deployment.apps "busybox" deleted ... In case you have not been resolved Pods DNS name, you can check ...
