How to configure Pod networking in managed K8s?

October 22, 2018
Kubernetes Networking

After provisioning my Kubernetes cluster, I attempted installing Flannel via the usual means:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

However, I’ve noticed that my pods are unable to reach kube-dns.

$ kubectl run -it --rm test --image=busybox
/ # ping google.com # fails
/ # ping kubernetes # fails, should resolve to 10.245.0.1

/ # cat /etc/resolv.conf # nameserver matches the cluster-ip for the kube-dns service
nameserver 10.245.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

/ # ping 10.245.0.10 # fails

/ # netstat -anr 
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.244.28.1     0.0.0.0         UG        0 0          0 eth0
10.244.28.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0

/ # ping 10.244.28.1
PING 10.244.28.1 (10.244.28.1): 56 data bytes
64 bytes from 10.244.28.1: seq=0 ttl=64 time=0.082 ms

/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=122 time=2.439 ms

As we can see here, the IP for kube-dns appears to be unroutable from within my pod, even though we can easily reach the outside world via the configured gateway. The routing table sends traffic for kube-dns through that gateway, but my suspicion is that this shouldn't be the case. One big red flag is that the subnet shown by netstat does not match the ClusterIP that kube-dns (or any other service) is using. Which one is right, and how would I correct the mismatch?
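For reference, here is how one could print the two ranges side by side to compare them (a sketch; it assumes the node objects expose `spec.podCIDR`, which they do on most clusters):

```shell
# Pod subnet(s), one per node:
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

# ClusterIP of the kube-dns service (allocated from the service range):
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
```

If those two ranges differ, that by itself may not be the bug, but it at least makes the mismatch visible.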

I have a hunch that maybe we aren’t meant to run flannel if DO has provided private networking between the kubelets already, however it isn’t entirely clear if this is the case.

I’m not sure what direction to go in with this, so any advice would be appreciated. Thanks in advance.

4 Answers

I can no longer edit my post, but apparently I misunderstood how the k8s service subnet works… Service IPs are virtual, so they are not meant to respond to ICMP: "ping 10.245.0.10" is expected to fail, and the subnet kube-dns lives on is not meant to match the pod subnet.
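To make the mismatch concrete for anyone who hits this: the kube-dns ClusterIP simply isn't covered by the pod subnet route shown by netstat, and that's by design. Here's a quick plain-sh sketch (no cluster needed) that checks CIDR membership:

```shell
#!/bin/sh
# Convert a dotted-quad IP to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "yes" if $1 (an IP) falls inside $2 (a CIDR), else "no".
in_cidr() {
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ] && echo yes || echo no
}

in_cidr 10.245.0.10 10.244.28.0/24   # prints "no"  -- kube-dns is outside the pod route
in_cidr 10.244.28.5 10.244.28.0/24   # prints "yes" -- a pod IP is inside it
```

Since ping won't work, the right connectivity test is an actual DNS query against the ClusterIP, e.g. `nslookup kubernetes.default 10.245.0.10` from inside the busybox pod.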

However, I have done some more testing, and apparently pods running on one of my nodes can hit kube-dns, but pods on the other node cannot. This is the case regardless of which node kube-dns is running on. I suspect there is a problem with that node's configuration, so I will try recycling it and see what happens…
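In case it helps anyone reproduce this, a sketch of the per-node test (node names here are hypothetical placeholders; `--overrides` with `spec.nodeName` pins the pod to a specific node):

```shell
# Run the same DNS test pinned to each node in turn, to see whether
# the failure follows a node rather than the kube-dns pod itself.
for node in node-1 node-2; do
  kubectl run -it --rm "dnstest-$node" --image=busybox --restart=Never \
    --overrides="{\"apiVersion\":\"v1\",\"spec\":{\"nodeName\":\"$node\"}}" -- \
    nslookup kubernetes.default 10.245.0.10
done
```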

Hi @tjm

Did you get flannel to work? We are trying to configure flannel on DO cluster without any success.

@mgcode I haven’t had time to look into it further yet. I’ll let you know if I figure it out. If you get to it before me, be sure to update this post :)

@tjm the problem might be that the DO cluster automatically comes with kube-proxy preloaded.
Right after cluster initialization we installed flannel, and the network between our nodes did not work.

I think you need to uninstall kube-proxy if you prefer Flannel.
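Before layering another CNI on top, it's worth checking what the managed cluster already ships. A simple way to list the networking components in kube-system:

```shell
# DaemonSets and pods in kube-system reveal any preinstalled
# CNI plugin (flannel, cilium, etc.) and kube-proxy.
kubectl get daemonsets,pods -n kube-system -o wide
```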
