How to configure Pod networking in managed K8s?

After provisioning my Kubernetes cluster, I attempted installing Flannel via the usual means:

kubectl apply -f

However, I’ve noticed that my pods are unable to reach kube-dns.

$ kubectl run -it --rm test --image=busybox
/ # ping # fails
/ # ping kubernetes # fails, should resolve to

/ # cat /etc/resolv.conf # nameserver matches the cluster-ip for the kube-dns service
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # ping # fails

/ # netstat -anr 
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
                                                UG        0 0          0 eth0
                                                U         0 0          0 eth0

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.082 ms

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=122 time=2.439 ms

As we can see here, the IP for kube-dns appears to be unroutable from within my pod. Note that we can easily reach the outside world via the configured gateway. The routing tables are configured such that we go through the gateway to reach kube-dns; however, my suspicion is that this probably shouldn’t be the case. One big red flag is that the subnet given here by netstat does not match the ClusterIP that kube-dns (or any other service) is using. Which one is right? How would I correct the mismatch?
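For reference, here is how I compared the two values side by side (a sketch assuming standard kubectl access; `kube-dns` is the Service name I see in `kube-system` — that may differ on other setups):

```shell
# ClusterIP actually assigned to the kube-dns Service
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'

# Nameserver handed to pods (should match the value above)
kubectl run -it --rm resolvtest --image=busybox --restart=Never -- \
  cat /etc/resolv.conf
```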

I have a hunch that maybe we aren’t meant to run Flannel if DO has already provided private networking between the kubelets, but it isn’t entirely clear whether this is the case.
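In case it helps anyone reproduce this, here is how I checked what was already running before I installed Flannel (assuming standard kubectl access):

```shell
# A preinstalled CNI plugin (and kube-proxy) would normally run as a
# DaemonSet in kube-system, so listing DaemonSets shows whether the
# provider already ships a network plugin
kubectl -n kube-system get daemonsets
kubectl -n kube-system get pods -o wide
```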

I’m not sure what direction to go in with this, so any advice would be appreciated. Thanks in advance.


DigitalOcean managed Kubernetes comes set up with Cilium as the CNI plugin. I’m not sure whether changing the SDN is supported on managed k8s; it probably isn’t, but someone from DO can better confirm. I understand this is an old post, but maybe the answer can help someone.
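A quick way to confirm this on your own cluster (assuming Cilium is deployed under its usual `k8s-app=cilium` label — worth double-checking):

```shell
# Cilium's agent runs as a DaemonSet in kube-system
kubectl -n kube-system get pods -l k8s-app=cilium

# If the pods are there, the agent's own status report confirms it is
# handling pod networking
kubectl -n kube-system exec ds/cilium -- cilium status
```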

@tjm the problem might be that a DO cluster automatically comes with kube-proxy preloaded. Right after cluster initialization we installed Flannel, and the network between our nodes did not work.

I think you need to uninstall kube-proxy if you prefer Flannel.
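A rough sketch of what that would involve (untested on DO managed clusters, where the control plane may recreate system DaemonSets — please verify before trying this anywhere important):

```shell
# kube-proxy normally runs as a DaemonSet in kube-system
kubectl -n kube-system delete daemonset kube-proxy

# Its iptables rules linger on each node until cleaned up; kube-proxy
# ships a flag for that (run this on each node)
kube-proxy --cleanup
```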

@mgcode I haven’t had time to look into it further yet. I’ll let you know if I figure it out. If you get to it before me, be sure to update this post :)