How to configure Pod networking in managed K8s?

After provisioning my Kubernetes cluster, I attempted to install Flannel the usual way:

kubectl apply -f

However, I’ve noticed that my pods are unable to reach kube-dns.

$ kubectl run -it --rm test --image=busybox
/ # ping # fails
/ # ping kubernetes # fails, should resolve to

/ # cat /etc/resolv.conf # nameserver matches the cluster-ip for the kube-dns service
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # ping # fails

/ # netstat -anr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
                                                UG        0 0          0 eth0
                                                U         0 0          0 eth0

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.082 ms

/ # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=122 time=2.439 ms

As we can see here, the IP for kube-dns appears to be unrouteable from within my pod, while the outside world is easily reachable via the configured gateway. The routing table sends traffic for kube-dns through that gateway, but my suspicion is that it shouldn't. One big red flag is that the subnet reported by netstat does not match the ClusterIP that kube-dns (or any other service) is using. Which one is right, and how would I correct the mismatch?
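One way to compare the two subnets is to inspect the kube-dns ClusterIP and infer the service CIDR from existing services. This is a sketch; on a managed cluster the apiserver's `--service-cluster-ip-range` flag usually isn't directly visible, so inference from service IPs is often all you can do:

```shell
# ClusterIP of the kube-dns service (the service keeps this name even
# when CoreDNS is the actual resolver behind it):
kubectl -n kube-system get svc kube-dns

# All service ClusterIPs; these should fall inside the service CIDR,
# which is separate from the pod CIDR that shows up in a pod's
# routing table:
kubectl get svc --all-namespaces -o jsonpath='{.items[*].spec.clusterIP}'
```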

I have a hunch that maybe we aren't meant to run Flannel if DO already provides private networking between the nodes, but it isn't entirely clear whether that's the case.
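One way to check whether the platform already ships a CNI is to look at what is running in kube-system before installing anything — a sketch:

```shell
# Look for an existing CNI DaemonSet (cilium, flannel, calico, ...):
kubectl -n kube-system get daemonsets

# With SSH access to a node, the active CNI configuration can also be
# inspected under /etc/cni/net.d/
```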

I’m not sure what direction to go in with this, so any advice would be appreciated. Thanks in advance.



DigitalOcean Managed Kubernetes comes set up with Cilium as the CNI plugin. I'm not sure whether changing the CNI is supported on managed k8s; it probably isn't, but someone from DO can better confirm. I understand this is an old post, but maybe the answer can help someone.

@tjm the problem might be that a DO cluster comes with kube-proxy preloaded. Right after cluster initialization we installed Flannel, and the network between nodes did not work.

I think you need to uninstall kube-proxy if you prefer Flannel.
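If you do try that, kube-proxy normally runs as a DaemonSet in kube-system. Here is a sketch of removing it, with the caveat that a managed control plane may recreate it on reconciliation:

```shell
# Confirm how kube-proxy is deployed, then remove it:
kubectl -n kube-system get ds kube-proxy
kubectl -n kube-system delete ds kube-proxy

# On each node, leftover iptables/IPVS rules can be flushed by running
# kube-proxy's built-in cleanup mode:
#   kube-proxy --cleanup
```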

@mgcode I haven’t had time to look into it further yet. I’ll let you know if I figure it out. If you get to it before me, be sure to update this post :)

Hi @tjm

Did you get Flannel to work? We are trying to configure Flannel on a DO cluster without any success.

I can no longer edit my post, but apparently I misunderstood how the k8s service subnet works… It is not meant to respond to ICMP requests, so ping is expected to fail, and the subnet that kube-dns lives on (the service CIDR) is not meant to match the pod subnet.
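Since ClusterIPs are virtual (implemented by iptables/IPVS rules rather than a real interface) and won't answer ICMP, a DNS query is a more reliable connectivity test. A sketch, assuming the default busybox image and cluster domain:

```shell
# Query the cluster DNS instead of pinging its ClusterIP:
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```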

However, I have done some more testing, and apparently pods running on one of my nodes can reach kube-dns while pods on the other node cannot. This is the case regardless of which node kube-dns is running on. I suspect there is a problem with that node's configuration, so I will try recycling it and see what happens…
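Before recycling the node, one way to confirm the per-node difference is to pin a test pod to each node and compare DNS behaviour — a sketch, with `<node-name>` taken from `kubectl get nodes`:

```shell
# Pin a busybox pod to a specific node via a spec override:
kubectl run dns-test --image=busybox --restart=Never \
  --overrides='{"spec":{"nodeName":"<node-name>"}}' \
  -- nslookup kubernetes.default

# Inspect the result, then clean up:
kubectl logs dns-test
kubectl delete pod dns-test
```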