round robin from service to backend via kube-proxy not working at DO?

We have the following setup:

load-balancer/Ingress -> varnish -> service -> backends

What we see is that requests from varnish to the service (service as in “Kubernetes Service”) get routed to the same backend POD over and over again (roughly 99% of the time; fewer than 1 in 100 requests reach a different backend POD).

In theory the kube-proxy provided by DO should choose a backend randomly, since DO uses the “iptables” mode of kube-proxy. However, that’s not the case at all. Requests nearly always get routed to the same backend POD.
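For reference, this is a small Python sketch of how kube-proxy’s iptables mode is supposed to pick a backend: it installs one rule per pod, where rule i fires with probability 1/(n−i), which works out to a uniform choice across all pods. Pod names here are illustrative, not from the cluster above.

```python
import random
from collections import Counter

def pick_backend(backends):
    """Mimic kube-proxy's iptables chain: rule i matches with
    probability 1/(n-i), giving a uniform choice overall."""
    n = len(backends)
    for i, b in enumerate(backends):
        if random.random() < 1.0 / (n - i):
            return b
    return backends[-1]  # unreachable: the last rule fires with probability 1

counts = Counter(pick_backend(["pod-a", "pod-b", "pod-c"]) for _ in range(30000))
print(counts)  # each pod should get roughly 10000 hits
```

Note that this choice happens per *connection*, not per HTTP request, which is one reason observed traffic can look far less uniform than the rules suggest.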

The result is that this one backend POD gets overwhelmed with requests, so we have to scale inside the POD instead of using the HorizontalPodAutoscaler, which would be best practice and which automatically launches new PODs depending on load. And even then, those new PODs would not receive traffic from the service proxy (kube-proxy, I assume).
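For context, a minimal HorizontalPodAutoscaler manifest looks roughly like this; the Deployment name, replica counts, and CPU threshold here are illustrative, and the exact apiVersion depends on your cluster version:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api      # the workload being scaled (assumed name)
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Of course, autoscaling only helps if the Service actually spreads traffic across the new PODs, which is the problem described above.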

I am out of my depth. Since kube-proxy is provided and configured by DO, I have no influence over how requests to the service IP address get distributed.

What can I do? Pointers? *t

PS: I can provide configs, but our setup is quite large and I don’t know which parts would be relevant here; I don’t want to post the entire config.


We have a similar problem on a recently created environment: all traffic is redirected to a single pod all the time, no matter the load. However, the Service discovers pods successfully, and if one pod fails, traffic gets redirected to another pod.

Has anyone solved this problem?

Here is our setup:

Service (also tried creating it with type: LoadBalancer):

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    # annotation keys were lost when pasting; the values were:
    # "true" "nginx" "true" "letsencrypt" "http01" "true" "true"
    # "<Service name>" "notifications-service" "$arg_token" "1800" "1800"
spec:
  tls:
  - hosts:
    - <HOST>
    secretName: letsencrypt-prod-cert
  rules:
  - host: <HOST>
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 80

Kube-proxy configured to use iptables
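One thing worth checking in a setup like this is the Service’s sessionAffinity field, since it changes kube-proxy’s behavior regardless of the proxy mode. This is a generic fragment, not taken from the config above:

```yaml
# If sessionAffinity is "ClientIP", kube-proxy deliberately pins each
# client to one pod; for random per-connection balancing it must be "None".
spec:
  sessionAffinity: None
```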

I appear to also be getting this. The node backends aren’t being rotated via round robin anywhere near as often as they should be. Subsequent requests should be split evenly across the Node backends, but they aren’t.

Edit: It appears that for the port you have to use the HTTP port type. Using TCP doesn’t work, but HTTP reliably forwards across node backends. I changed that in the DigitalOcean console and suddenly it all started working.