Deploying ingress-nginx on a new cluster creates a load balancer that fails its health check

Posted October 28, 2020 1.2k views
DigitalOcean Managed Kubernetes · DigitalOcean Managed Load Balancers

When I deploy ingress-nginx, a load balancer is created that points to two nodes: one healthy and one down.

Using this command:
kubectl apply -f

Following this guide:

Do I need to configure this further? I’m not sure where the problem is, or why one node succeeds and the other fails. I’ve done this many times with the same results.

Steps to reproduce:

  1. Create a cluster (I did 2 nodes)
  2. Run kubectl apply -f
  3. Check new load balancer


2 answers

I had the same problem.

The load balancer’s health check targets the NodePort of the ingress-nginx service. Because the ingress-nginx controller runs on only one node, all other nodes are reported as down.

You need to change the health check port.
I changed mine to the port of node-exporter, which runs on every node as a DaemonSet.
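For reference, here is a sketch of how that could be expressed declaratively with DigitalOcean load balancer annotations on the ingress-nginx Service. The annotation names and node-exporter’s default port 9100 are assumptions here, so verify them against the current DO load balancer annotation docs before applying:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Assumed DO CCM annotations: point the LB health check at a port that
    # answers on every node (9100 is node-exporter's default port).
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "9100"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
spec:
  type: LoadBalancer
```

This is a fragment, not a complete manifest; merge the annotations into your existing Service definition.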

  • Hey there, I am also struggling with this issue. Can you maybe go into more detail on how you resolved it?

    You need to change the healthcheck port.
    Do you mean the port of the livenessProbe and readinessProbe of the deployment/ingress-nginx-controller, which defaults to

      path: /healthz
      port: 10254
      scheme: HTTP


    I changed mine to the port of node-exporter which is running on every node as a daemon-set.
    Hm… I don’t find that node-exporter DaemonSet. The only DaemonSets I see are:

    k -n kube-system get daemonsets
    NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    cilium          2         2         2       2            2           <none>          132m
    csi-do-node     2         2         2       2            2           <none>          132m
    do-node-agent   2         2         2       2            2           <none>          132m
    kube-proxy      2         2         2       2            2           <none>          132m

    How do I get the port of that node-exporter you mentioned?

    • The problem is the health check of the load balancer.

      The default value for the health check is something like: tcp://

      If you run the command: kubectl get svc --all-namespaces | grep nginx

      default   nginx-ingress-controller   LoadBalancer   x.y.z.q   x2.y2.z2.q2   80:30957/TCP,443:32421/TCP

      Here you can see that 30957 is the NodePort backing port 80 of the nginx-ingress-controller (and 32421 backs port 443).
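      The PORT(S) column packs each mapping as port:nodePort/protocol. As a purely illustrative helper (not part of kubectl), parsing that string looks like this:

```python
def node_ports(ports_field: str) -> dict:
    """Parse kubectl's PORT(S) column, e.g. '80:30957/TCP,443:32421/TCP',
    into a {service_port: node_port} mapping."""
    mapping = {}
    for entry in ports_field.split(","):
        spec, _, _proto = entry.partition("/")   # drop the /TCP suffix
        port, sep, node_port = spec.partition(":")
        if sep:  # only Services of type LoadBalancer/NodePort have a nodePort
            mapping[int(port)] = int(node_port)
    return mapping

print(node_ports("80:30957/TCP,443:32421/TCP"))  # {80: 30957, 443: 32421}
```

      So 30957 and 32421 are the node ports the load balancer forwards to.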

      Node-exporter can be installed separately with Helm or via the Monitoring stack in the Marketplace.

I just ended up using the nginx ingress from the Marketplace instead and it worked fine.