Kubernetes load balancer says all but one node is down

Posted July 20, 2020 3.2k views
Load Balancing · Kubernetes

I have a Kubernetes cluster with nginx-ingress. When the load balancer comes up it says all nodes except for one are down. I’m assuming this is because only one node has an nginx-ingress controller running on it. What is the standard way to have a healthy load balancer on Kubernetes? Do I need to make sure that the nginx-ingress controller is running on all nodes?


5 answers

Hello @Tyranthosaur ,

The LB health check typically targets a specific NodePort on each node rather than checking whether the node itself is healthy. To avoid the overhead of unnecessary network hops between nodes, only nodes hosting pods for the LoadBalancer service will report as healthy.

This happens when the externalTrafficPolicy setting on a Kubernetes service is set to “Local”: a node without a pod running locally for that service will reject the LB health check and show as down.

With externalTrafficPolicy set to “Cluster”, nodes forward that traffic to other nodes that are hosting pods for the service. In this case even a node not hosting a pod for that particular service shows as “UP”, since it simply forwards the request to a node that can serve it, at the cost of that extra network hop.

To change this setting for a particular service use the following command:

kubectl patch svc myservice -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

An important thing to note here is that with externalTrafficPolicy set to “Cluster” you will lose the original client IP address due to this extra network hop. So if your application checks for or depends on knowing the client IP, the “Local” setting is required.
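As a sketch, here is what a Service manifest with this field set explicitly might look like (the service name, selector, and ports below are placeholders, not from your cluster):

```yaml
# Hypothetical Service manifest illustrating externalTrafficPolicy.
# "Local" preserves the client IP, but only nodes running a pod for
# this service pass the LB health check; "Cluster" lets every node
# pass the check, at the cost of an extra hop and the client IP.
apiVersion: v1
kind: Service
metadata:
  name: myservice            # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # or "Cluster"
  selector:
    app: myapp               # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```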

You can find more information on externalTrafficPolicy here:

I hope this helps!

Best Regards,
Purnima Kumari
Developer Support Engineer II, DigitalOcean

  • Thank you @Purnima! Very helpful!

    In the context of Kubernetes is having a Digital Ocean load balancer with “DOWN” nodes a bad thing? It sounds like it’s more of a misnomer. Should we consider it normal and operating correctly to have nodes without the ingress load balancer to be reported as “DOWN”? The load balancer UI communicates “DOWN” nodes as something bad when it seems like it may not actually be a problem or something we need to address.

    • @Tyranthosaur Yes, you are right! When the DOKS LB shows nodes as DOWN, it doesn’t actually mean your node is down. It is just notifying you that the LB health check is getting rejected by that node. If you are using externalTrafficPolicy: “Local”, then you can safely ignore this.

      Best Regards,
      Purnima Kumari
      Developer Support Engineer II, DigitalOcean

Hi, did you get your application running locally using Docker for Desktop, Kind, or K3d? If not, this is always my very first step in testing out my overall Kubernetes resource definitions. Next, what steps did you follow to install and configure the Nginx-Ingress controller for your cluster? BTW, a single Nginx-Ingress controller should be sufficient for your entire cluster because it simply acts as a traffic cop that directs requests to your underlying services. Furthermore, this works hand-in-hand with a single DO load balancer. Please provide additional details about your overall install and configuration of Nginx-Ingress and I would be glad to assist you further. I hope this helps, and all the best.

Think different and code well,


  • I don’t have a Digital Ocean load balancer available for me to run locally so I’m not sure why it matters if my application works on a local cluster. Regardless, it does work on my local cluster.

    I have a 6 node cluster. When nginx-ingress starts it creates a Digital Ocean load balancer automatically. After the load balancer is created it shows that 5 of the 6 nodes are down. The node that the load balancer shows as healthy is the node the ingress controller is running on.

    In order for all nodes to show as healthy does the ingress controller need to be running on all nodes? Is there another way for a node to show as healthy that doesn’t require running an ingress controller on each node?

    • @Tyranthosaur One doesn’t need a DO load balancer locally to run the application within a Kubernetes cluster. For example, one can set up an Nginx-Ingress controller using Minikube by following the steps found here.

      Next, the nodes within your cluster are not dependent on the ingress controller being present. If a node is down or unhealthy, the scheduler shouldn’t try to create resources on it. After creating a K8s cluster, my rule of thumb is to always check that all nodes are healthy. Have you tried recreating the K8s cluster? Five out of six nodes down is a lot. If this issue persists, I would definitely file a ticket with support.

      Think different and code well,


      • This isn’t a problem with my application. It’s a problem with the DO load balancer. All nodes are healthy. However, the DO load balancer only sees nodes with a running ingress controller as healthy. If I run 1 ingress controller then 1 node is reported as healthy by the load balancer. If I run an ingress controller on all 6 nodes then all 6 nodes report as healthy. Is there another way to tell the DO load balancer that a node is healthy other than running an ingress controller on it?

        • Hi, I was able to spin up several 6 node clusters with Nginx-Ingress controllers on the following platforms using the exact same configuration:

          • locally (MiniKube, Kind, and K3d)
          • Google Cloud Platform
          • Digital Ocean
          • Linode

          For each of the above, I performed the following:

          • created the cluster
          • verified that all nodes are healthy
          • created service-1 with its associated deployment
          • created service-2 with its associated deployment
          • created the ingress resource

          Next, I suspect that there may be an error within your K8s resource configuration. The nodes within your cluster are not dependent on an ingress resource being added to the cluster, though your application does need that resource present. However, the ingress resource is dependent on the nodes being up and operational. How are you installing the ingress resource within your cluster? Helm chart or some other method? Also, did you update your DNS records for your load balancer by creating A records for each service endpoint?

          Think different and code well,


@Tyranthosaur @snesterov I’ll try installing Nginx-Ingress with Helm chart and report back here.

In the Helm chart, you could set controller.kind: DaemonSet instead of Deployment. That will deploy the nginx controller on every node, so the LB health check passes on all nodes and they all appear as healthy.
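For reference, a minimal values override for the ingress-nginx Helm chart might look like the sketch below (chart key names may vary between chart versions, so treat this as an illustration rather than a drop-in file):

```yaml
# values.yaml override for the ingress-nginx Helm chart (sketch).
# Running the controller as a DaemonSet puts one controller pod on
# every node, so the DO LB health check succeeds on all of them.
controller:
  kind: DaemonSet
  service:
    externalTrafficPolicy: Local   # preserves client IPs; every node
                                   # has a local controller pod anyway
```

You would then apply it with something like `helm install my-ingress ingress-nginx/ingress-nginx -f values.yaml` (release and repo names here are placeholders).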