Question

How to get Loadbalancer Health Checks to work with externalTrafficPolicy: Local

Posted February 10, 2020 · 606 views
Load Balancing · Kubernetes

Hi there,

I am trying to set up a LoadBalancer/nginx-ingress. Everything works well so far, but there is one problem: following some tutorials and hints on DO, a recommended setting for the load balancer is externalTrafficPolicy: Local. With this set, the DO load balancer fails its health checks.

What I have researched so far: when I switch to Cluster (health checks pass then), I will lose the client IP, may get an additional hop inside the cluster, and the LB may forward traffic to a node with few or no pods, although the ingress will still route correctly.

With my current setup everything works quite well and I also get the forwarded headers with the correct client IP, but I wonder whether setting externalTrafficPolicy to Cluster will impact performance and scaling.

Any hints on this would be great, even if the answer is that it is currently not possible :)

The LoadBalancer Service:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
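
For comparison, the setting from the question title differs only in one field of the spec above. A minimal variation (not a full manifest) would be:

spec:
  # With this policy, only nodes that run an ingress controller pod answer the LB health check
  externalTrafficPolicy: Local
  type: LoadBalancer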

The Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - hosts:
        - www.some-domain.com
        - some-domain.com
      secretName: some-cert
  rules:
    - host: www.some-domain.com
      http:
        paths:
          - backend:
              serviceName: service-name
              servicePort: 3000

and the ingress controller ConfigMap:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-hide-headers: 'Server'
  server-tokens: 'False'
  use-forwarded-headers: 'true'
  compute-full-forwarded-for: 'true'
  use-proxy-protocol: 'true'

edited by MattIPv4

2 answers

I’m happy you got this working. Also, it may not always be necessary for all of your nodes to field traffic; that could end up being quite significant overhead for the cluster. Perhaps a Deployment with a few replicas could suffice instead, rather than requiring X replicas for X nodes.
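
As a rough sketch of that idea (the controller image, service account, and args below are assumptions based on the standard ingress-nginx manifests, not taken from the setup above), a Deployment with two replicas and a pod anti-affinity rule keeps the replicas on separate nodes, so two nodes pass the LB health check under externalTrafficPolicy: Local:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      # Assumed service account name from the standard ingress-nginx manifests
      serviceAccountName: nginx-ingress-serviceaccount
      # Keep the two replicas on different nodes so more than one node has a local endpoint
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ingress-nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx-ingress-controller
          # Image tag is an assumption; use the version already running in the cluster
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443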

Regards,

John Kwiatkoski
Senior Developer Support Engineer - Kubernetes

  • Hi John.

    So if I understand correctly, it’s completely normal to have some nodes reporting failing health checks?

    Let’s say I have a cluster of 4 nodes and only two instances of the nginx-ingress controller running on nodes 1 and 2, so I’ll have 2 failing health checks on nodes 3 and 4.

    Is this an acceptable state for a production application?

OK, after playing around and having a deeper look at my deployments, I realized that my nginx-ingress-controller was only running on one node.

In this case the LB was entirely correct to fail health checks with externalTrafficPolicy: Local.
I have now applied a DaemonSet for the controller and everything seems to work fine.
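
For reference, a minimal sketch of such a DaemonSet (mirroring the Deployment sketch above, with the same assumed image, args, and service account): a DaemonSet simply drops the replica count and schedules one controller pod on every node, so every node passes the LB health check.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      # Assumed service account name from the standard ingress-nginx manifests
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          # Same container spec as in the Deployment sketch above; one pod lands on each node
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443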
