After provisioning my Kubernetes cluster, I attempted to install Flannel via the usual means:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
However, I’ve noticed that my pods are unable to reach kube-dns.
$ kubectl run -it --rm test --image=busybox
/ # ping google.com
# fails
/ # ping kubernetes
# fails, should resolve to 10.245.0.1
/ # cat /etc/resolv.conf
# nameserver matches the cluster-ip for the kube-dns service
nameserver 10.245.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # ping 10.245.0.10
# fails
/ # netstat -anr
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         10.244.28.1     0.0.0.0         UG    0   0      0    eth0
10.244.28.0     0.0.0.0         255.255.255.0   U     0   0      0    eth0
/ # ping 10.244.28.1
PING 10.244.28.1 (10.244.28.1): 56 data bytes
64 bytes from 10.244.28.1: seq=0 ttl=64 time=0.082 ms
/ # ping 126.96.36.199
PING 188.8.131.52 (184.108.40.206): 56 data bytes
64 bytes from 220.127.116.11: seq=0 ttl=122 time=2.439 ms
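To sanity-check why packets destined for the kube-dns IP leave via the default gateway, the kernel's route selection can be approximated with a longest-prefix match. This is a minimal sketch (not the actual kernel implementation) using Python's standard `ipaddress` module, with the two routes from the netstat output above hard-coded:

```python
import ipaddress

# Routes taken from the netstat output:
# (destination network, gateway — or None for an on-link route)
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "10.244.28.1"),   # default route via the gateway
    (ipaddress.ip_network("10.244.28.0/24"), None),       # on-link pod subnet on eth0
]

def next_hop(dst):
    """Pick the matching route with the longest prefix, as the kernel would."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, gw) for net, gw in routes if dst in net]
    net, gw = max(matches, key=lambda r: r[0].prefixlen)
    return gw or "on-link"

print(next_hop("10.244.28.5"))   # on-link: same /24 as the pod
print(next_hop("10.245.0.10"))   # only the default route matches, so: 10.244.28.1
```

So with this table, traffic to 10.245.0.10 has nowhere to go but the default gateway, which is exactly what the failed pings show.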
As we can see here, the kube-dns service IP appears to be unroutable from within my pod, even though we can easily reach the outside world via the configured gateway. The routing table sends traffic for kube-dns through that gateway, but my suspicion is that it shouldn't. One big red flag is that the subnet given by netstat does not match the cluster IPs that kube-dns (and every other service) are using. Which one is right, and how would I correct the mismatch?
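The mismatch itself is easy to demonstrate programmatically. A small sketch with Python's standard `ipaddress` module, hard-coding the values from the output above (the 10.244.28.0/24 subnet from netstat, and the kube-dns cluster IP 10.245.0.10):

```python
import ipaddress

node_subnet = ipaddress.ip_network("10.244.28.0/24")  # subnet reported by netstat
kube_dns_ip = ipaddress.ip_address("10.245.0.10")     # cluster IP of the kube-dns service

# The service IP lies outside the locally routed subnet. Service cluster IPs
# are virtual and are normally translated by kube-proxy rules on the node,
# so they would not be expected to appear in the pod's routing table at all.
print(kube_dns_ip in node_subnet)  # False
```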
I have a hunch that maybe we aren't meant to run Flannel if DO already provides private networking between the kubelets, but it isn't entirely clear whether this is the case.
I’m not sure what direction to go in with this, so any advice would be appreciated. Thanks in advance.