Question

How to get Loadbalancer Health Checks to work with externalTrafficPolicy: Local

Hi there,

I am trying to set up a load balancer with nginx-ingress. Everything works well so far, but there is one problem. Following some tutorials and hints on DO, a recommended setting for the load balancer is externalTrafficPolicy: Local. With this set, the DO load balancer fails its health checks.

What I have researched so far: when I switch to Cluster (health checks work then), I will lose the client IP, may get an additional hop inside the cluster, and the LB may forward traffic to a node with fewer or no pods, but the ingress will still route correctly.

With my current setup everything works quite well, and I also get the forwarded headers with the correct client IP, but I wonder whether setting externalTrafficPolicy to Cluster will impact performance and scaling.

Any hints on this would be great, even if the answer is that it is currently not possible :)

loadbalancer:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
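
For comparison, this is the Local variant I tried (a sketch; only the spec changes, the rest is identical to the manifest above). As I understand it, with Local the load balancer health-checks a node port (spec.healthCheckNodePort) that only reports healthy on nodes actually running an ingress-nginx pod:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
spec:
  type: LoadBalancer
  # Only change from the manifest above: keep traffic on nodes that host an
  # ingress-nginx pod and preserve the client source IP.
  externalTrafficPolicy: Local
  # Kubernetes then allocates spec.healthCheckNodePort automatically; the
  # load balancer probes that port, and nodes without an ingress-nginx pod
  # report unhealthy, which matches what I am seeing.
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https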

nginx-ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - hosts:
        - www.some-domain.com
        - some-domain.com
      secretName: some-cert
  rules:
    - host: www.some-domain.com
      http:
        paths:
          - backend:
              serviceName: service-name
              servicePort: 3000

and the ingress config map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-hide-headers: 'Server'
  server-tokens: 'False'
  use-forwarded-headers: 'true'
  compute-full-forwarded-for: 'true'
  use-proxy-protocol: 'true'



I’m happy you got this working. Also, it may not always be necessary for all of your nodes to field traffic; that could end up being quite significant overhead for the cluster. Perhaps a Deployment with a few replicas could also suffice, rather than requiring X replicas for X nodes.
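
To sketch what I mean (the labels, namespace, and image tag here are illustrative; match them to your actual ingress-nginx install and keep the args and serviceAccountName from the standard manifests), a Deployment with a couple of replicas spread across nodes could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      # Prefer scheduling the replicas on different nodes so a single node
      # failure does not take down all ingress traffic.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: ingress-nginx
      containers:
        - name: nginx-ingress-controller
          # Illustrative image; use the controller image and args from your install.
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443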

Regards,

John Kwiatkoski, Senior Developer Support Engineer - Kubernetes

Deploying ingress-nginx on a new cluster creates a load balancer that fails its health check

When I deploy ingress-nginx, a load balancer is created that points to two nodes: one healthy and one down.

I had the same issue when deploying a brand-new Kubernetes cluster on AWS EKS using the Terraform AWS module version 4.1.0 and the Terraform AWS EKS module version 18.8.1. In this cluster, I installed ingress-nginx via Helm chart version 4.0.18, which corresponds to version 1.1.2 of the ingress-nginx controller.

The cluster installation was totally default, and the ingress-nginx installation was also totally default. Under these conditions I expected it to work from the start, with no need for extra configuration or manual adjustments in the AWS Console.

With further investigation, I realized that the default security groups created by the Terraform AWS EKS module were too restrictive. After I added rules to allow node-to-node communication, the health checks immediately started to show all instances as healthy.

Other people commented that they saw the ingress-nginx controller running on only one node. As far as I can tell, this is expected and normal behavior. Maybe at a very large scale more controllers may be needed, but this is not related to the correct functioning of ingress-nginx. What happens is that when a request reaches a node, it is redirected to the ingress-nginx controller through the Kubernetes internal network. This is why it is important to verify the network security rules and ensure node-to-node communication is available.

OK, after playing around and having a deeper look at my deployments, I realized that my nginx-ingress-controller was only running on one node.

In this case the LB was totally correct to fail the health checks with externalTrafficPolicy: Local. I now applied a DaemonSet for the controller and everything seems to work fine.
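
In case it helps anyone else, this is roughly what the DaemonSet variant looks like (a sketch: labels and image are illustrative, and the controller args and serviceAccountName from the standard ingress-nginx manifests are omitted for brevity):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  # One controller pod per node, so every node passes the load balancer
  # health check with externalTrafficPolicy: Local.
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
        - name: nginx-ingress-controller
          # Illustrative image; keep the image, args and serviceAccountName
          # from the standard ingress-nginx deployment manifests.
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443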