It seems DigitalOcean's load balancers do not honor the "externalTrafficPolicy: Local" setting in a Kubernetes Service definition.
With this setting on GKE, the original client IP address is preserved as the source IP of the incoming connection. With DigitalOcean's load balancers, that is not the case.

How can I preserve the original client IP address when using SSL passthrough (where the HTTP headers cannot be modified, so X-Forwarded-For is not an option)?

spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserves source IP
  ports:
    - name: ambassador-plain
      port: 80
    - name: ambassador-tls
      port: 443
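For reference, this is roughly how I check whether the client IP survives end to end (a sketch; it assumes the Service and Deployment are both named "ambassador" and that the backend logs the peer address):

# find the load balancer's external IP
kubectl get service ambassador
# hit it from outside the cluster, then look for your own public IP
# (rather than a 10.x.x.x node address) in the backend logs
curl -k https://<external-ip>/
kubectl logs deploy/ambassador | tail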
4 answers

Hi @moisey - I have just tried this on my 1.15.4-do.0 cluster and I still do not see external IPs.

Right now the source IP is showing as 10.135.29.9 (an internal IP).

I have tried deleting and recreating the service, but that hasn’t affected it.

My service definition is here: https://github.com/andrewmichaelsmith/kubedefs/blob/master/honeytrap/honeytrap.yaml#L113

Any ideas? This is a critical feature for me; I'd rather not move to Azure, but this is a blocker for this project.

There was a large release to DOKS to add auto-scaling support, which is now available, and DOKS should now respect "service.spec.externalTrafficPolicy" being set to "Local".
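If the Service already exists, the policy can also be flipped in place rather than deleting and recreating it; a minimal sketch (assuming the Service is named "ambassador" in the default namespace):

kubectl patch service ambassador \
  -p '{"spec": {"externalTrafficPolicy": "Local"}}'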

I have also noticed this issue. I have set my service to type LoadBalancer with externalTrafficPolicy: Local, but the pods still see an internal node IP rather than the client's external IP. Setting externalTrafficPolicy: Local on a NodePort service works (see the sketch below), but NodePort isn't what most people need or want.
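For comparison, this is roughly the NodePort variant that did preserve the client IP for me (a sketch; the name, ports, and selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app                  # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Local  # traffic only goes to local pods, so the source IP is not SNATed
  ports:
    - name: http
      port: 80
      targetPort: 8080          # placeholder backend port
  selector:
    app: my-app                 # placeholder selector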

I also had this issue and it was not acceptable, as we need to collect the user’s IP as evidence of signature in our application.

Digging into the documentation, I found out how to fix this: you need to change your load balancer's settings to enable the proxy protocol, and then configure whatever receives the traffic to expect the proxy protocol. In my case, I'm using the official Kubernetes nginx-ingress.

I changed the load balancer manually in the management web UI (https://cloud.digitalocean.com/networking/load_balancers/). Then I had to make this small change to the nginx-ingress ConfigMap:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:  # <<< added this section
  # in order for the Digital Ocean load balancer to pass along
  # the client IP, we need to set the load balancer to proxy
  # mode, and use the proxy protocol in nginx.
  # https://www.digitalocean.com/docs/networking/load-balancers/resources/#proxy-protocol
  use-proxy-protocol: "true"
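The same ConfigMap change can also be applied without editing the full manifest; a sketch, assuming the name and namespace shown above:

kubectl -n ingress-nginx patch configmap nginx-configuration \
  --type merge -p '{"data": {"use-proxy-protocol": "true"}}'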

It would be great to know from DigitalOcean whether we can add an annotation to our Service resource so that the load balancer is created with the proxy protocol enabled from the start, without having to edit it manually later.
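For what it's worth, the digitalocean-cloud-controller-manager project documents Service annotations for load balancer settings; a sketch of what enabling the proxy protocol at creation time might look like (the annotation name is taken from that project's docs, so verify it against the current documentation):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # assumed annotation, per the digitalocean-cloud-controller-manager docs;
    # confirm the exact name before relying on it
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx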
