How to set a static IP for a LoadBalancer in Kubernetes?

November 27, 2018 5.1k views
Kubernetes Load Balancing Deployment Configuration Management High Availability

Imagine the following situation: I have a Kubernetes cluster and I want a statically assigned IP for the LoadBalancers within that cluster. How or where can I obtain those static IPs?

Whenever I tear down Kubernetes Pods, IPs are assigned dynamically, which is fine. But if I want to assign the Kubernetes LoadBalancer a static IP, I have no way of knowing which IP address will be set, so I have to change my DNS records every time the Kubernetes master assigns a new IP address.
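One way to request a specific address is the `spec.loadBalancerIP` field on the Service. This field is part of the core Service API, but whether it is honored depends entirely on the cloud provider's load balancer controller, so treat the sketch below (with placeholder names and a placeholder IP) as an assumption to verify against your provider:

```yaml
# Sketch: request a specific external IP for a LoadBalancer Service.
# "my-app-lb", the selector, and 203.0.113.10 are all placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # the provider may ignore or reject this
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

If the provider does not support pinning an IP this way, the Service will still get a dynamically assigned one.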

4 Answers

What if we delete the cluster and reuse the load balancer in the newly created cluster? I couldn't find a way to achieve this. I tried specifying a static IP at the Service level, but no matter what, Kubernetes creates a new LB; it cannot use the existing one.

@PeterBocan How did you solve this? I was thinking that I might grab a floating IP, stand up a Droplet, and set it up with nginx to proxy to my K8S load balancer. That way, when the K8S IP changes, I only need to modify my proxy server and not my DNS records.

It seems like there should be a better way.
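The proxy idea above can be sketched as a one-file nginx config on the Droplet. The upstream IP here is a placeholder for the current Kubernetes LoadBalancer address; when that address changes, only this one line needs updating, followed by an nginx reload:

```nginx
# /etc/nginx/conf.d/k8s-proxy.conf (sketch; 198.51.100.7 is a placeholder
# for the current K8S LoadBalancer IP)
server {
    listen 80;

    location / {
        proxy_pass http://198.51.100.7;        # update when the LB IP changes
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Point the floating IP (and DNS) at the Droplet, and the Kubernetes side can change freely behind it.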

  • Hello, yes, the answer is very simple: when you deploy for the very first time, you deploy both the Kubernetes Service(s), which provide the IP, and the Kubernetes Pod Deployment(s).

    Every time you deploy new changes, just update the Pod Deployment and reapply the changes with kubectl; the Service IP will stay unchanged, so you can tie that IP address to DO DNS.
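    That workflow can be sketched with two separate manifests (all names and the image tag below are hypothetical). The Service is applied once and keeps its external IP; only the Deployment is re-applied on each release:

    ```yaml
    # Apply once:            kubectl apply -f service.yaml
    # Apply on each release: kubectl apply -f deployment.yaml
    ---
    # service.yaml -- keeps its LoadBalancer IP across app releases
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
    ---
    # deployment.yaml -- the only object touched when shipping changes
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-registry/my-app:v2   # bump this tag each release
              ports:
                - containerPort: 8080
    ```

    Because re-applying the Deployment never deletes the Service, the external IP survives every rollout; it is only lost if the Service itself (or the cluster) is deleted.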

Same problem here as @fkucuk. DigitalOcean, are you listening? How can we solve this problem?

We are forced to delete and recreate the cluster because we're still facing the "deleting and recreating an NFS PV will break mounting that NFS PV forever" problem. So when NFS breaks, we have to set everything up again from zero.

And it sucks when all our external clients break because we have to update our DNS to point to the new LoadBalancer IP.

Same problem as here:
Also see where users will run into this very problem when upgrading their clusters.
