Question

Kubernetes load balancer says all but one node is down

I have a Kubernetes cluster with nginx-ingress. When the load balancer comes up it says all nodes except for one are down. I’m assuming this is because only one node has an nginx-ingress controller running on it. What is the standard way to have a healthy load balancer on Kubernetes? Do I need to make sure that the nginx-ingress controller is running on all nodes?


Hello @Tyranthosaur ,

The LB is typically health-checking a specific NodePort on each node, not just checking whether the node itself is healthy. To save on overhead from unnecessary network hops between nodes, only nodes hosting pods for the LoadBalancer service will report as healthy.

When the externalTrafficPolicy setting on a Kubernetes Service is set to “Local”, a node that is not running a pod for that service will reject the LB health check and show as down.

With externalTrafficPolicy set to “Cluster”, nodes will forward traffic to other nodes that are hosting pods for that service. In this case even a node not hosting a pod for that particular service shows as “UP”, since it simply forwards the request to a node that can serve it, at the cost of that extra network hop.
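To see which policy a given service is currently using (the service name and namespace below are placeholders; substitute your own):

```shell
# Print the externalTrafficPolicy of a LoadBalancer service.
# Prints nothing for service types where the field is unset.
kubectl get svc myservice -n default \
  -o jsonpath='{.spec.externalTrafficPolicy}'
```

If this prints `Local`, only nodes with a pod for the service will pass the LB health check.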

To change this setting for a particular service use the following command:

kubectl patch svc myservice -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

An important thing to note here is that with an externalTrafficPolicy of “Cluster” you will lose the original client IP address due to the extra network hop. So if your application checks for or depends on knowing the client IP, the “Local” setting is required.

You can find more information on externalTrafficPolicy here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport

I hope this helps!

Best Regards, Purnima Kumari Developer Support Engineer II, DigitalOcean

In the Helm chart, you can set controller.kind: DaemonSet instead of the default Deployment. That deploys an nginx-ingress controller pod on every node, so the health check passes and all nodes appear as healthy.
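As a sketch, assuming the controller was installed from the official ingress-nginx chart under the release name `nginx-ingress` in the `ingress-nginx` namespace (adjust these to match your install):

```shell
# Redeploy the ingress controller as a DaemonSet so a pod runs on every node.
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.kind=DaemonSet

# Verify one controller pod is scheduled per node:
kubectl get pods -n ingress-nginx -o wide
```

With a pod on every node, each node passes the LB health check even when externalTrafficPolicy is “Local”, so you keep the client IP without nodes showing as down.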

This comment has been deleted

@Tyranthosaur @snesterov I’ll try installing Nginx-Ingress with Helm chart and report back here.

Hi, did you get your application running locally using Docker Desktop, Kind, or K3d? If not, that is always my very first step for testing out my overall Kubernetes resource definitions. Next, what steps did you follow to install and configure the Nginx-Ingress controller for your cluster? BTW, a single Nginx-Ingress controller should be sufficient for your entire cluster, because it simply acts as a traffic cop that directs requests to your underlying services; this works hand-in-hand with a single DO load balancer. Please provide additional details about your install and configuration of Nginx-Ingress and I would be glad to assist you further. I hope this helps, and all the best.

Think different and code well,

-Conrad