Question

Kubernetes load balancer says all but one node is down

I have a Kubernetes cluster with nginx-ingress. When the load balancer comes up it says all nodes except for one are down. I’m assuming this is because only one node has an nginx-ingress controller running on it. What is the standard way to have a healthy load balancer on Kubernetes? Do I need to make sure that the nginx-ingress controller is running on all nodes?



Hello @Tyranthosaur ,

The load balancer's health check typically targets a specific NodePort on each node, not the node's general health. To avoid the overhead of unnecessary network hops between nodes, only nodes actually hosting pods for the LoadBalancer Service will report as healthy.

When the `externalTrafficPolicy` setting on a Kubernetes Service is set to "Local", a node that has no local pod for that Service will reject the LB health check and show as down.
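As a sketch of how to see this on your own cluster: when `externalTrafficPolicy` is `Local`, Kubernetes also allocates a dedicated `healthCheckNodePort` that the LB probes. The Service name `myservice` here is just a placeholder for your own Service:

```shell
# Show the traffic policy and the health-check NodePort the LB probes.
# (Requires kubectl configured against your cluster; "myservice" is
# a placeholder Service name.)
kubectl get svc myservice \
  -o jsonpath='{.spec.externalTrafficPolicy} {.spec.healthCheckNodePort}'
```

Only nodes that respond successfully on that health-check port will be marked "up" by the load balancer.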

Setting `externalTrafficPolicy` to "Cluster" allows nodes to forward that traffic to other nodes that are hosting pods for the Service. In this case even a node not hosting a pod for that particular Service shows as "UP", because it simply forwards the request to a node that can serve it, at the cost of that extra network hop.

To change this setting for a particular service use the following command:

kubectl patch svc myservice -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

An important thing to note here is that with `externalTrafficPolicy` set to "Cluster" you will lose the original client IP address due to this extra network hop. So if your application checks or depends on the client IP, the "Local" setting is required.
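If you later find you do need the client IP, a minimal sketch of reverting the setting (again assuming the placeholder Service name `myservice`):

```shell
# Check the current policy first...
kubectl get svc myservice -o jsonpath='{.spec.externalTrafficPolicy}'

# ...then switch back to "Local" to preserve the original client IP.
# Note: nodes without a local pod for the Service will show as down again.
kubectl patch svc myservice -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```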

You can find more information on `externalTrafficPolicy` here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport

I hope this helps!

Best Regards, Purnima Kumari Developer Support Engineer II, DigitalOcean

In the Helm chart, you could set `controller.kind: DaemonSet` instead of `Deployment`. That deploys an nginx-ingress controller pod on every node, so the health check passes and all nodes appear as healthy.
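A minimal sketch of that values change, assuming the official `ingress-nginx` Helm chart (which documents the `controller.kind` option):

```yaml
# values.yaml for the ingress-nginx chart:
# run the controller on every node instead of as a Deployment
controller:
  kind: DaemonSet
```

You would then apply it with something like `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml`. The tradeoff is one controller pod per node, which costs more resources on large clusters than a small Deployment behind `externalTrafficPolicy: Cluster`.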
