By Marcus Klein
Hi there,
I am trying to set up a load balancer with nginx-ingress. Everything works well so far.
But there is one problem. Following some tutorials and hints on DO, a recommended setting for the load balancer is externalTrafficPolicy: Local. With this set, the DO load balancer fails its health checks.
What I have researched so far: when I switch to Cluster (the health checks pass then), I lose the client IP, I may get an additional hop inside the cluster, and the LB may forward traffic to a node with fewer or no pods, although the ingress will still route correctly.
With my current setup everything works quite well and I also get the forwarded headers with the correct client IP, but I wonder whether setting externalTrafficPolicy to Cluster will impact performance and scaling.
Any hints on this would be great, even if the answer is that it is currently not possible :)
loadbalancer:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
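For comparison, a minimal sketch of the variant that fails the DO health checks; only the policy line differs from the spec above:

spec:
  # With Local, the kube-proxy health check endpoint reports healthy only on
  # nodes that run a local ingress-nginx pod; all other nodes are marked down.
  externalTrafficPolicy: Local
  type: LoadBalancer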
nginx-ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - hosts:
        - www.some-domain.com
        - some-domain.com
      secretName: some-cert
  rules:
    - host: www.some-domain.com
      http:
        paths:
          - backend:
              serviceName: service-name
              servicePort: 3000
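As a side note, networking.k8s.io/v1beta1 was removed in Kubernetes 1.22; on current clusters the same rule would look roughly like this, with the same placeholder names (tls block unchanged):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  # ingressClassName replaces the kubernetes.io/ingress.class annotation
  ingressClassName: nginx
  rules:
    - host: www.some-domain.com
      http:
        paths:
          # v1 requires an explicit path and pathType
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-name
                port:
                  number: 3000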
and the ingress config map:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-hide-headers: 'Server'
  server-tokens: 'False'
  use-forwarded-headers: 'true'
  compute-full-forwarded-for: 'true'
  use-proxy-protocol: 'true'
I’m happy you got this working. That said, it may not always be necessary for all of your nodes to field traffic; that could end up being quite significant overhead for the cluster. Perhaps a Deployment with a few replicas could also suffice, rather than requiring X replicas for X nodes (see the sketch below).
Regards,
John Kwiatkoski, Senior Developer Support Engineer - Kubernetes
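A minimal sketch of that suggestion, trimmed to the fields relevant here (controller args, RBAC, and probes omitted; names, labels, and the image tag are illustrative placeholders, not the exact chart output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # a fixed replica count instead of one controller pod per node
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
        - name: controller
          # placeholder image/version
          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

With externalTrafficPolicy: Cluster, any node can still receive LB traffic and forward it on to one of these replicas.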
OK, after playing around and taking a deep look at my deployments, I realized that my nginx-ingress-controller was only running on one node.
In this case the LB was entirely correct to fail the health checks with externalTrafficPolicy: Local. I have now applied a DaemonSet for the controller and everything seems to work fine (see the sketch below).
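For reference, a rough sketch of that change with the same placeholders as the Deployment sketch above: the pod template is identical, but kind: DaemonSet schedules one controller pod on every node, so each node has a local endpoint and passes the LB health check under Local.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # no replicas field: a DaemonSet runs one pod on each matching node
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
        - name: controller
          # placeholder image/version
          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443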
Deploying ingress-nginx on new cluster creates load balancer that fails health check
When I deploy ingress-nginx, a load balancer is created that points to two nodes: one healthy and one down.
I had the same issue when deploying a brand new Kubernetes cluster on AWS EKS using the terraform AWS module version 4.1.0 and the terraform AWS EKS module version 18.8.1. In this cluster, I installed ingress-nginx via Helm chart version 4.0.18, which means version 1.1.2 of the ingress-nginx controller.
The cluster installation was totally default, and the ingress-nginx installation was also totally default. Under these conditions I expected it to work from the start, with no extra configuration or manual adjustments in the AWS Console.
With further investigation, I realized that the default security groups created by the terraform AWS EKS module were too restrictive. After I added rules to allow node-to-node communication, the health checks immediately started to show all instances as healthy.
Other people commented that they saw the ingress-nginx controller running on only one node. As far as I can tell, this is expected and normal behavior. Maybe at very large scale more controller replicas may be needed, but this is not related to the correct functioning of ingress-nginx. What happens is that when a request reaches a node, it is redirected to the ingress-nginx controller through the Kubernetes internal network. This is why it is important to verify the network security rules and ensure that node-to-node communication is available.