Question

Deploying ingress-nginx on new cluster creates load balancer that fails health check

Posted October 28, 2020 2.4k views
DigitalOcean Managed Kubernetes · DigitalOcean Managed Load Balancers

When I deploy ingress-nginx, a load balancer is created that points to two nodes: one healthy and one down.

Using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/do/deploy.yaml

Following this guide:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

Do I need to configure this further? I'm not sure where the problem is, or why one node passes the health check while the other fails. I've tried this many times with the same result.

Steps to reproduce:

  1. Create a cluster (I did 2 nodes)
  2. Run kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/do/deploy.yaml
  3. Check the new load balancer
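
For anyone reproducing this, a few stock kubectl commands that show which node the controller pod landed on (assuming the ingress-nginx namespace created by the manifest above):

kubectl get nodes -o wide
kubectl -n ingress-nginx get pods -o wide   # shows which node the controller pod is running on
kubectl -n ingress-nginx get svc ingress-nginx-controller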

3 answers

I had the same problem.

The load balancer health check is pointed at the NodePort of the ingress-nginx service.
Because the ingress-nginx controller runs on only one node, all the other nodes show as down.

You need to change the health check port.
I changed mine to the port of node-exporter, which runs on every node as a DaemonSet.

  • Hey there, I'm also struggling with this issue. Could you go into more detail about how you resolved it?

    You need to change the health check port.
    Do you mean the port of the livenessProbe and readinessProbe of deployment/ingress-nginx-controller, which defaults to

    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    

    ?

    I changed mine to the port of node-exporter, which runs on every node as a DaemonSet.
    Hm… I don't find that node-exporter DaemonSet. The only DaemonSets I see are:

    k -n kube-system get daemonsets
    NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
    cilium          2         2         2       2            2           <none>                        132m
    csi-do-node     2         2         2       2            2           <none>                        132m
    do-node-agent   2         2         2       2            2           beta.kubernetes.io/os=linux   132m
    kube-proxy      2         2         2       2            2           <none>                        132m
    

    How do I get the port of that node-exporter you mentioned?

    • The problem is the health check of the load balancer.

      The default value for the health check is something like tcp://0.0.0.0:30957.

      If you run the command: kubectl get svc --all-namespaces | grep nginx

      default nginx-ingress-controller LoadBalancer x.y.z.q x2.y2.z2.q2 80:30957/TCP,443:32421/TCP

      Here you can see that 30957 is the NodePort of the nginx-ingress-controller service.

      Node-exporter can be installed separately with Helm or with the Monitoring stack from the Marketplace.
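
      For example, a rough sketch of that setup, assuming node-exporter ends up listening on its default port 9100 on every node, and using the service name/namespace from the output above (adjust to match your install; the health check annotations are the DigitalOcean cloud controller manager's load balancer settings, so double-check them against the current docs):

      # Install node-exporter on every node (the Marketplace Monitoring stack achieves the same thing)
      helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      helm install node-exporter prometheus-community/prometheus-node-exporter

      # Point the DO load balancer health check at node-exporter instead of the controller's NodePort
      kubectl annotate svc nginx-ingress-controller \
        service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol=http \
        service.beta.kubernetes.io/do-loadbalancer-healthcheck-port=9100 \
        service.beta.kubernetes.io/do-loadbalancer-healthcheck-path=/metrics \
        --overwrite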

I just ended up using the nginx ingress from the Marketplace instead, and it worked fine.

I was running into this issue too. From what I can tell, the load balancer runs its health check against the NodePort of ingress-nginx-controller for the http:80 destination, but for some reason that only seems to pass on one node.

It turns out (for me) that a HealthCheck NodePort is created and works, but the load balancer needs to be manually updated to run health checks against it. Example output:

➜ ~ kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
# [blah blah blah..]
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31869/TCP # <= LB Health check targets this
Endpoints:                10.244.3.183:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31015/TCP
Endpoints:                10.244.3.183:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31387 # <= LB Health check should target that
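
One way to pull that port out and sanity-check it (a sketch, assuming the manifest's default Service name and namespace; <node-internal-ip> is a placeholder for one of your worker nodes, reachable from inside the VPC):

# Read the HealthCheck NodePort that Kubernetes allocated for the Service
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.healthCheckNodePort}{"\n"}'

# kube-proxy serves /healthz on that port and returns 200 only on nodes that
# run a local controller endpoint, which is what the load balancer should check
curl -s -o /dev/null -w '%{http_code}\n' http://<node-internal-ip>:31387/healthz   # 31387 from the output above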

I also did not have this issue with the marketplace/one-click app version of the nginx ingress controller.

  • My solution was to edit the manifest for svc/ingress-nginx-controller and change /spec/externalTrafficPolicy to Cluster, which results in the other nodes proxying to the single controller instance. Not sure how sticky this is since I used the one-click install.

    • What I wonder, however, is if this will cause the load balancer to route traffic to the node without the ingress controller, then have kube-proxy route to the node with the controller, which then routes traffic somewhere else. Not a good outcome just to get a green dot in the console!
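
      For reference, the change being discussed is just a one-line patch to the Service (a sketch; adjust the namespace and name if the one-click install differs from the manifest's ingress-nginx/ingress-nginx-controller):

      kubectl -n ingress-nginx patch svc ingress-nginx-controller \
        --type merge -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'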