Question

How to configure Pod networking in managed K8s?

After provisioning my Kubernetes cluster, I tried installing Flannel in the usual way:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

However, I’ve noticed that my pods are unable to reach kube-dns.

$ kubectl run -it --rm test --image=busybox
/ # ping google.com # fails
/ # ping kubernetes # fails, should resolve to 10.245.0.1

/ # cat /etc/resolv.conf # nameserver matches the cluster-ip for the kube-dns service
nameserver 10.245.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # ping 10.245.0.10 # fails

/ # netstat -anr 
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.244.28.1     0.0.0.0         UG        0 0          0 eth0
10.244.28.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0

/ # ping 10.244.28.1
PING 10.244.28.1 (10.244.28.1): 56 data bytes
64 bytes from 10.244.28.1: seq=0 ttl=64 time=0.082 ms

/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=122 time=2.439 ms

As we can see, the kube-dns IP appears to be unroutable from within my pod, while we can easily reach the outside world via the configured gateway. The routing table sends traffic for kube-dns through that same gateway, but my suspicion is that it probably shouldn't. One big red flag is that the 10.244.28.0/24 subnet reported by netstat does not match the 10.245.x.x cluster IPs that kube-dns (and every other service) is using. Which one is right, and how would I correct the mismatch?
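For reference, here is roughly how I've been comparing the two ranges (a generic sketch; the jsonpath query assumes the pod CIDRs are recorded on the Node objects, which may not hold on this cluster):

$ kubectl -n kube-system get svc kube-dns                    # ClusterIP should sit in the service range (10.245.x.x above)
$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'   # per-node pod CIDRs (10.244.28.0/24 on this node)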

I have a hunch that maybe we aren't meant to run Flannel at all if DO has already provided private networking between the kubelets, but it isn't entirely clear whether that's the case.
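In case it helps to show what I'm looking at, this is the kind of check I've been running to see what DO pre-installs in kube-system (just a sketch; I don't know the exact component names DO uses):

$ kubectl -n kube-system get daemonsets     # a preinstalled CNI agent usually shows up here
$ kubectl -n kube-system get pods -o wide   # look for CNI / kube-proxy pods running on every node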

I’m not sure what direction to go in with this, so any advice would be appreciated. Thanks in advance.


Answers

DigitalOcean Managed Kubernetes comes set up with Cilium as the CNI plugin. I'm not sure whether swapping out the SDN is supported on the managed offering; it probably isn't, but someone from DO can confirm. I understand this is an old post, but maybe the answer can help someone.
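As a quick check (assuming DO deploys the agent as a DaemonSet named cilium in kube-system with the usual k8s-app=cilium label; I haven't verified the exact names on DOKS), something like this should show whether Cilium is what's actually running:

$ kubectl -n kube-system get daemonset cilium
$ kubectl -n kube-system get pods -l k8s-app=cilium -o wide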

@tjm the problem might be that a DO cluster automatically comes with kube-proxy preloaded. Right after cluster initialization we installed Flannel, and the network between our nodes did not work.

I think you need to uninstall kube-proxy if you prefer Flannel.
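If you do go that route, removing the preinstalled kube-proxy would look roughly like this (assuming it runs as a DaemonSet named kube-proxy in kube-system, which is the common convention; I haven't tested this on DO, so proceed carefully):

$ kubectl -n kube-system get daemonset kube-proxy     # confirm it exists first
$ kubectl -n kube-system delete daemonset kube-proxy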

@mgcode I haven’t had time to look into it further yet. I’ll let you know if I figure it out. If you get to it before me, be sure to update this post :)