[PWK] Default kube-dns dnsmasq configuration only allows 150 concurrent DNS requests
Hi All,
Awesome work with PWK. Tried it myself on top of the existing PWD but had issues exposing the necessary bits and overlays to satisfy kubeadm. (Noticed this is now CentOS-based, which makes sense… but that's a conversation for another day!)
Wanted to highlight an issue with the default dnsmasq configuration inside kube-dns: it hits its configured limit of 150 concurrent DNS queries, which leads to a failed health check and a restart, meaning DNS goes away cluster-wide for 15+ seconds at random times.
It's pretty easy to reach 150 in-flight queries even on a demo cluster, especially since real-world DNS requests hang for a while because no forward resolver is configured.
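In the meantime the limit itself can be raised by passing dnsmasq a higher `--dns-forward-max`. A rough sketch against the stock kube-dns Deployment (the surrounding args vary by kube-dns version, and 500 is an arbitrary choice, not a recommendation):

```yaml
# dnsmasq container args in the kube-dns Deployment
# (kubectl --namespace=kube-system edit deployment kube-dns).
# dnsmasq-nanny passes everything after "--" straight through to dnsmasq.
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --dns-forward-max=500   # dnsmasq's built-in default is 150
- --log-facility=-
```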
Affected clusters will have the following logs:
```
kubectl --namespace=kube-system logs <kube-dns-podxyz> dnsmasq

I0809 13:43:47.485776 45 nanny.go:108] dnsmasq[64]: Maximum number of concurrent DNS queries reached (max: 150)
I0809 13:43:57.500488 45 nanny.go:108] dnsmasq[64]: Maximum number of concurrent DNS queries reached (max: 150)
I0809 13:44:07.512986 45 nanny.go:108] dnsmasq[64]: Maximum number of concurrent DNS queries reached (max: 150)
```
```
kubectl --namespace=kube-system logs <kube-dns-podxyz> sidecar

ERROR: logging before flag.Parse: W0809 13:40:41.250002 7 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:35703->127.0.0.1:53: i/o timeout
ERROR: logging before flag.Parse: W0809 13:40:54.273636 7 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:45732->127.0.0.1:53: i/o timeout
ERROR: logging before flag.Parse: W0809 13:41:01.274108 7 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:46985->127.0.0.1:53: i/o timeout
```
Along with an increasing dnsmasq restart count in:

```
kubectl --namespace=kube-system describe po <kube-dns-podxyz>
```
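A quick way to watch the restarts as they happen (assuming the standard `k8s-app=kube-dns` label on the kube-dns pods):

```
kubectl --namespace=kube-system get pods -l k8s-app=kube-dns -w
```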
Just for completeness, as per #181, I can work around it to allow external DNS resolution with:
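Roughly, assuming the standard kube-dns ConfigMap mechanism for upstream resolvers (8.8.8.8 below is just a placeholder, not necessarily what #181 uses):

```yaml
# Sketch: give kube-dns an upstream resolver via its ConfigMap,
# then apply with: kubectl apply -f <file>
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8"]
```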
Hey @luxas, is there anything we can do from the kubeadm side to bootstrap kube-dns with this option by default? I haven't seen anything for that, AFAIK.