Is it possible to have a static outgoing ip in kubernetes

Posted February 5, 2020

Some external services need to whitelist ip addresses to allow incoming requests. I need to consume a web service with this requirement within my application running as a K8S workload.

Is there any way in DigitalOcean kubernetes implementation to meet this need, i.e. to have requests coming from an http client running into a pod to use a fixed ip address for all requests?

  • @fabn Did you find a solution? Thanks, I searched for a while, but, per usual, no quick and clear answer.

  • I am running into this issue as well. I have a script that connects to an external DB that is out of my control. Whitelisting an IP is a slow manual process for the external DB. I need a static IP in order to avoid an IP change every time a pod is recycled.

  • This is pretty disappointing. Like many others here I need to connect to a service outside our control that expects a whitelisted IP. I don’t want to be in a situation of sending our third party a new IP any time our nodes change. And the proxy solutions recommended below are overcomplicated, especially considering competing k8s platforms offer this basic functionality.

    TBH I wouldn’t even mind if this were a paid add-on, I just need it to work so I can focus on my job.

6 answers

We currently do not have any service to control or monitor kubernetes egress traffic, nor do we have a guaranteed IP range for a cluster that can be whitelisted. However, you do have options to implement this.

The first option would be to manually whitelist the specific node IPs and update them when new nodes are added/removed or current nodes are recycled. I would not recommend this, but it could work for testing/development.

The second option would be to setup and configure an external proxy service. Then, set the proxy variables in your DOKS deployments to use the configured proxy. After that is configured you only need to whitelist the proxy IP to allow your DOKS services through.
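As a rough sketch of that second option, the proxy variables can be set as environment variables on your Deployment. Everything below (the app name, image, and especially the proxy address) is a placeholder you would replace with your own values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            # Placeholder proxy address; whitelist this proxy's IP
            # with the external service instead of the node IPs.
            - name: HTTP_PROXY
              value: "http://proxy.example.com:3128"
            - name: HTTPS_PROXY
              value: "http://proxy.example.com:3128"
            # Keep in-cluster traffic off the proxy.
            - name: NO_PROXY
              value: "localhost,127.0.0.1,.svc,.cluster.local"
```

Most HTTP client libraries honor these variables automatically, so no application changes are usually needed.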

You can control which egress traffic is denied/accepted within the cluster using NetworkPolicy objects, or by installing Istio. The documentation for those can be found here:
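For instance, a minimal NetworkPolicy that only lets pods reach one whitelisted external CIDR (plus DNS) could look like this. The pod label, namespace, and CIDR are illustrative assumptions, not values from this thread:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: client          # only applies to pods with this label
  policyTypes:
    - Egress
  egress:
    # Allow traffic to the single whitelisted external service.
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32
    # Allow DNS lookups so the pod can still resolve names.
    - ports:
        - protocol: UDP
          port: 53
```

Note this only restricts which destinations pods may reach; it does not change the source IP the external service sees.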

  • Is something like this on the roadmap? I bumped into this with an external database that needed an IP CIDR to access it. I took a look at the links here but didn’t understand what to do. Any thoughts?

    • It is something the team would like to do and we have a ticket on the backlog to address this but developer cycles have not been allocated for it.

      If this is a managed database on DO you can simply use our tags feature to add this cluster to the list of trusted sources.

      Otherwise, if I were to attempt this today, I would write a script that queries the DOKS API to retrieve the node IP addresses from /api/v1/nodes via a simple curl call. Having this run as a cron would keep the current nodes updated.

      For example, this call would get you all the ExternalIPs of your nodes:

      > kubectl get nodes  -o jsonpath='{.items[*].status.addresses[?(@.type == "ExternalIP")].address}'
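      As a sketch of the cron idea, the same data can be pulled from `kubectl get nodes -o json` and filtered in a small script. The function name and the allowlist step are illustrative, not an official tool:

```shell
# extract_external_ips reads `kubectl get nodes -o json` on stdin and
# prints one ExternalIP per line (python3 does the JSON parsing).
extract_external_ips() {
  python3 -c '
import json, sys
for node in json.load(sys.stdin)["items"]:
    for addr in node["status"]["addresses"]:
        if addr["type"] == "ExternalIP":
            print(addr["address"])
'
}

# Against a live cluster, a cron job could run something like:
#   kubectl get nodes -o json | extract_external_ips
# and push the result to the third party's allowlist.
```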

      Hope this helps!

      • Very helpful, appreciate the quick reply!

      • Hi @jkwiatkoski!
        Do you have any news about the progress of the implementation?
        Having a fixed IP address for outgoing traffic is now crucial for our deployment. It’s especially painful during a rolling upgrade of the entire cluster: all resources with whitelisted access become unavailable. We try to handle it as you mentioned, by updating whitelist policies, but it is not convenient and adds another potential point of failure when rolling out a production cluster.
        Hope the DO team will find a simple and convenient way to get rid of that.

        • This idea has been discussed a bit internally on the backend, but achieving it would involve spoofing the IP address of the LB, which leads to security issues in our LB product. We do not see this as a valid tradeoff. There are folks looking for alternatives to achieve similar functionality, but for now I would plan on using proxy protocol and getting the IP from HTTP headers. Sorry I do not have a happier report on this yet.

          • Hello John,
            do you have any updates on that topic? Is there any option to leave the IP addresses of K8S nodes unchanged during an update?
            Using external proxy services is an option, but a very weak one: it means extra costs and another potential point of failure :(. Another issue might also occur; e.g. in the AMS3 region there are a lot of compromised IP addresses, e.g. due to “Credit card fraud gang hosting”. When we have trusted IP addresses and suddenly get new addresses which are compromised, our production traffic might be seriously impacted.

            AKS, GKE and EKS support this feature.
            We just need to keep our IP addresses :(.

This is becoming quite urgent now; I will try hacking up a solution using InitControllers, but it looks like a long road to take.
As of now, DO’s Kubernetes is unusable for anything involving sending/receiving e-mail and communicating to third parties with IP address whitelisting.

It is urgent for us as well. We have a couple of k8s clusters in DO. A few days ago we ran into an issue where a few IP addresses from DO were blacklisted:
So in order to upgrade the K8S cluster you might get a new, compromised IP address (!).
There is no rollback of the upgrade operation to get the old IP addresses back. So after the new cluster is rolled out, you might be really surprised…

Other well-known K8S offerings don’t have that limitation, e.g. AKS, GKE, EKS.

So our workaround at the moment is not upgrading the clusters :(.

My reading of the original question is how to send all k8s application traffic out ONE ip address.

We have a customer with a database at their office we need to hit as we develop an application for them. This database has an external IP with access controlled by a whitelist by the client/customer.

The question is how to send all data from the multiple pods running in our DigitalOcean k8s cluster out one single IP address, so the customer only has to whitelist this single IP address once and for all.
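One common, Kubernetes-agnostic pattern for this (not an official DO feature) is a small NAT gateway droplet: attach a reserved/floating IP to it, route the nodes’ egress through it, and let it masquerade the traffic. A configuration sketch of the gateway side, assuming eth0 is its public interface:

```shell
# On the gateway droplet (run as root; eth0 = public interface is an assumption)
sysctl -w net.ipv4.ip_forward=1                        # allow packet forwarding
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # rewrite the source IP to the gateway's
```

The nodes would then need a route sending external traffic via the gateway’s private address, which is the fiddly part in a managed cluster, and it adds a single point of failure for all egress.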


For now (for http requests) I’m using this service to have a static outgoing ip:

It would be nice to have some DO support for this issue.

I think this could be worked around using Floating IPs; per this answer, it should be possible to send traffic through a consistent set of IP addresses.

To make this work in a kubernetes environment, you would also want to: