| DNS Server | Status |
|---|---|
| 217.218.155.155 | OK |
| 217.218.127.127 | OK |
| 46.209.209.209 | FAILED |
| 185.161.112.33 | OK |
| 185.161.112.34 | FAILED |
| 185.51.200.10 | FAILED |
| 185.231.182.126 | FAILED |
| 46.224.1.42 | FAILED |
| 194.225.62.80 | OK |
| 213.176.123.5 | OK |
| 91.99.101.12 | FAILED |
| 185.187.84.15 | FAILED |
| 37.156.145.229 | FAILED |
| 185.97.117.187 | FAILED |
| 185.113.59.253 | FAILED |
| 80.191.40.41 | FAILED |
| 194.225.73.141 | FAILED |
| 91.245.229.1 | OK |
| 185.51.200.50 | FAILED |
| 37.156.145.21 | FAILED |
| 2.189.44.44 | OK |
| 2.188.21.131 | FAILED |
| 2.188.21.132 | FAILED |
| 81.91.144.116 | FAILED |
| 2.188.21.130 | FAILED |
| 92.119.56.162 | FAILED |
| 5.200.200.200 | OK |
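A reachability table like the one above can be produced with a short probe. Below is a minimal sketch in Python (the server list and query domain are illustrative placeholders, and a real run needs network access to the servers): it sends a bare UDP DNS query and marks servers that answer at all.

```python
import socket
import struct

def dns_query_ok(server, domain="example.com", timeout=2.0):
    """Send a minimal DNS A query over UDP; True if the server replies at all."""
    # DNS header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME is length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode() for label in domain.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.sendto(header + question, (server, 53))
            reply, _ = sock.recvfrom(512)
            return len(reply) >= 12  # at least a DNS header came back
        except OSError:  # timeout, connection refused, or network unreachable
            return False

if __name__ == "__main__":
    for server in ["217.218.155.155", "46.209.209.209"]:
        print(f"| {server} | {'OK' if dns_query_ok(server) else 'FAILED'} |")
```

This only checks that *something* answers on UDP/53; a server that replies SERVFAIL would still show OK, so treat it as a first-pass filter, not a full health check.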
Checking the current DNS configuration:
```shell
# Display the current DNS configuration and status of the systemd-resolved service
resolvectl status
```
Flushing the DNS cache:
```shell
resolvectl flush-caches
# Or
sudo systemd-resolve --flush-caches
```
Applying changes to the DNS configuration:
```shell
sudo netplan apply
```
Restarting the DNS resolver services to apply the changes:
```shell
sudo systemctl restart systemd-resolved
sudo systemctl restart NetworkManager
```
Testing DNS resolution for a specific domain using a specific DNS server:
```shell
dig @<DNS_SERVER_IP> <DOMAIN_TO_TEST> +short
# Or simply
dig <DOMAIN_TO_TEST> +short
```
Ensuring traffic can actually reach the DNS server:
```shell
ip route get <DNS_SERVER_IP>
```
- List active connections:
```shell
nmcli con show
```
- Set DNS servers for a connection:
```shell
sudo nmcli con mod "ens192" \
  ipv4.ignore-auto-dns yes \
  ipv4.dns "<DNS_SERVER_IP_1> <DNS_SERVER_IP_2>"
```
- Restart the connection:
```shell
sudo nmcli con down "ens192"
sudo nmcli con up "ens192"
```
- Verify:
```shell
resolvectl status
cat /etc/resolv.conf
```
Check What CoreDNS Is Forwarding To:
```shell
kubectl -n kube-system get configmap coredns -o yaml
```
It results in something like this:
```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors {
        }
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            prefer_udp
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
```
The `forward . /etc/resolv.conf` line indicates that CoreDNS is forwarding DNS queries to the DNS servers specified in the `/etc/resolv.conf` file on the nodes.
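As a quick sanity check, the forward targets can also be pulled out of the Corefile programmatically. A minimal sketch in Python (the embedded Corefile is a trimmed example, not your live config):

```python
import re

COREFILE = """\
.:53 {
    errors
    forward . /etc/resolv.conf {
        prefer_udp
        max_concurrent 1000
    }
    cache 30
}
"""

def forward_targets(corefile):
    """Return (zone, upstreams) for every `forward` directive in a Corefile."""
    targets = []
    for line in corefile.splitlines():
        # Match: forward <zone> <upstream...> [ { ]
        m = re.match(r"\s*forward\s+(\S+)\s+(.*?)\s*\{?\s*$", line)
        if m:
            targets.append((m.group(1), m.group(2).split()))
    return targets

print(forward_targets(COREFILE))  # [('.', ['/etc/resolv.conf'])]
```

If a target is `/etc/resolv.conf` rather than an explicit IP, the node's resolver configuration is what actually decides where queries go, which is exactly what the next check inspects.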
You can check the contents of this file to see which DNS servers are being used for forwarding:
```shell
cat /etc/resolv.conf
```
It results in something like this:
```
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search default.svc.cluster.local svc.cluster.local
```
- Check the current DNS configuration and status of the systemd-resolved service:
```shell
resolvectl status
```
It results in something like this:
```
Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
Current DNS Server: 169.254.25.10
       DNS Servers: 169.254.25.10
        DNS Domain: default.svc.cluster.local svc.cluster.local
...
```
A DNS loop in Kubernetes occurs when the CoreDNS pods end up forwarding DNS queries back to themselves, which prevents resolution from completing. CoreDNS's `loop` plugin detects this, the pods exit, and they enter a CrashLoopBackOff state.
You likely have this chain:
```
Pod
  ↓
NodeLocalDNS (169.254.25.10)
  ↓
CoreDNS
  ↓
/etc/resolv.conf
  ↓
wrong upstream (loopback or blocked DNS)
```
Or worse:
```
CoreDNS → /etc/resolv.conf → 127.0.0.53 → systemd-resolved → back to CoreDNS
```
That creates a loop → CoreDNS exits → CrashLoopBackOff.
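The loop condition can be spotted mechanically: if the resolv.conf that CoreDNS forwards to lists a loopback nameserver (such as systemd-resolved's 127.0.0.53 stub), queries bounce back into the node. A minimal sketch in Python (the sample text mirrors the stub file shown earlier):

```python
import ipaddress

def loopback_nameservers(resolv_conf_text):
    """Return the nameserver entries in resolv.conf text that are loopback IPs."""
    hits = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            try:
                if ipaddress.ip_address(parts[1]).is_loopback:
                    hits.append(parts[1])
            except ValueError:
                pass  # skip malformed entries
    return hits

sample = "nameserver 127.0.0.53\noptions edns0 trust-ad\n"
print(loopback_nameservers(sample))  # ['127.0.0.53'] -> loop risk for CoreDNS
```

A non-empty result for the file CoreDNS forwards to is the red flag: it means the upstream is the local stub, not a real external resolver.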
To break such a loop, point CoreDNS at explicit upstream DNS servers instead of the node's `/etc/resolv.conf`. In a Kubespray-managed cluster, you can follow these steps:
- Edit the `all.yml` file in the `group_vars/all` directory of your Kubespray inventory:
```
kubespray/inventory/k8scluster/group_vars/all/all.yml
```
- Update the `upstream_dns_servers` variable to include the IP addresses of reliable upstream DNS servers (e.g., Google's public DNS servers or your preferred DNS servers):
```yaml
## Upstream dns servers
# upstream_dns_servers:
#   - 8.8.8.8
#   - 8.8.4.4
upstream_dns_servers:
  - 217.218.155.155
  - 217.218.127.127
```
- Run the Ansible playbook to apply the changes to your Kubernetes cluster:
```shell
ansible-playbook -i inventory/k8scluster/inventory.ini cluster.yml --become --become-user=root --user=root --tags coredns,dns
```
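After the playbook runs, the CoreDNS Corefile should forward directly to the configured upstreams rather than `/etc/resolv.conf`. The relevant stanza should look roughly like this (exact options depend on your Kubespray version; verify with `kubectl -n kube-system get configmap coredns -o yaml`):

```
forward . 217.218.155.155 217.218.127.127 {
    prefer_udp
    max_concurrent 1000
}
```

Because the upstreams are now explicit external resolvers, the `loop` plugin no longer trips on the local stub, and the CrashLoopBackOff should clear once the CoreDNS pods restart.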