Cristian Marius Tiutiu, Bikram Gupta, and Anish Singh Walia
This tutorial will teach you how to set up an egress gateway for your DigitalOcean Kubernetes (DOKS) cluster using Crossplane and the static routes operator.
What is an Egress gateway, and why is it important?
No matter where your resources (DOKS, Droplets, etc.) are deployed and running, they live in a private network or VPC. The private network can be at home behind your ISP router, in a private data center (on-premise), or in a cloud-based environment. The main role of a VPC is to isolate different networks across different regions. Different VPCs can talk to each other via gateways (which are routers).
Next, you need to get familiar with some terminology associated with inbound and outbound traffic, explained below:
- Ingress deals with inbound traffic entering your VPC.
- Egress deals with outbound traffic exiting your VPC.

When using Kubernetes, you manage incoming traffic via an Ingress resource. For outgoing traffic, however, there is no Egress resource in the Kubernetes spec. It can be implemented in the CNI layer. For example, Cilium, which is used by DOKS, has an Egress Gateway spec. But, in the case of DigitalOcean Kubernetes, it's not quite stable or production-ready yet.
There are three key aspects related to egress functionality:
Restricting egress (or outgoing) traffic is not covered in this blueprint but, in essence, is a way of restricting outbound traffic for cluster Pods. It can be achieved via network policies or firewalls. A firewall is the most common use case, where you allow connections only to a particular external IP address range or to external services. Firewalls cannot distinguish between individual Pods, so the rules apply equally.
To build an egress gateway, you need NAT functionality, implemented using a NAT gateway. In essence, a NAT gateway sits at the edge of your private network (or VPC), through which outbound (or egress) traffic flows to the public internet. The main role of NAT, which stands for Network Address Translation, is to make network packets routable outside your private network (or VPC). The process also needs to work in reverse order, meaning incoming response packets need to be routed back to the originating private IP inside your VPC.
Your VPC uses a specific IP range (e.g., 10.116.0.0/20), which needs to be translated to a public IP address so that packets can flow to a specific destination outside your private network (i.e., the Internet). When a response packet comes in, the NAT layer must translate the public IP address from the packet header to the private network address of the host where the traffic originated. This is what NAT is for.
A dedicated machine configured for this purpose is called a NAT gateway. When attached to a DOKS cluster, it can route all (or specific destinations only) traffic via a single routable public IP. Hence, we can call it an Egress Gateway in this context.
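As a quick illustration, from a host inside the VPC you can compare the private address on the VPC interface with the address an external service sees after NAT (a sketch; it assumes eth1 is the VPC-facing interface, as is typical on DigitalOcean Droplets, and the actual IPs will differ):

# Private address assigned inside the VPC (RFC 1918 range, not routable on the public Internet)
ip -4 addr show eth1 | grep inet
# Address an external service sees for your traffic (the NAT gateway's public IP)
curl -s ifconfig.me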
Moving further with a practical example, suppose you need to use an external service such as a database. The database is usually in another data center outside your private network (or VPC). The database administrator configured the firewall so that only specific public IPs can connect. You already egress from your DOKS cluster nodes because they are connected directly to the Internet, but it’s not practical because nodes are volatile, hence the public IPs will change.
Consequently, on the other end (i.e. database service), you need to change network ACLs again to allow the new public IPs - not good. An egress gateway ensures that all traffic from your application Pods inside the Kubernetes cluster is seen as coming from a single public IP. That is, the public IP of the egress gateway. You can go even further and use a reserved IP for the egress gateway, so it will never change.
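For example, a reserved IP can be created and attached to the gateway Droplet via doctl (a sketch; the region, reserved IP, and Droplet ID placeholders are assumptions you must replace with your own values):

# Create a reserved (static) public IP in the same region as the egress gateway Droplet
doctl compute reserved-ip create --region nyc1
# Assign the reserved IP to the egress gateway Droplet
doctl compute reserved-ip-action assign <YOUR_RESERVED_IP_HERE> <YOUR_DROPLET_ID_HERE>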
Below is a diagram showing the main setup for egressing DOKS cluster traffic to an external service (i.e., database):
To complete this tutorial, you will need:
- DOKS clusters. You can learn more here.
- Droplets. You can learn more here.
- A shell (e.g., bash).

The main idea behind Crossplane is infrastructure management the Kubernetes way. This means you can define and create CRDs using a declarative approach and let Crossplane deal with the inner details. With Crossplane, it's possible to create Droplets, Managed Databases, Load Balancers, and even Kubernetes clusters (DOKS) via the DigitalOcean provider. Crossplane was designed with flexibility in mind, and it can be extended via providers.
Next, it's important to understand some key concepts behind Crossplane when creating DigitalOcean resources (e.g., Droplets). There are four main concepts to know about:
The picture below shows a simplified operational overview for Crossplane:
A typical Droplet CRD consumed by Crossplane looks like below:
apiVersion: compute.do.crossplane.io/v1alpha1
kind: Droplet
metadata:
  name: egress-gw-nyc1
spec:
  forProvider:
    region: nyc1
    size: s-1vcpu-1gb
    image: ubuntu-20-04-x64
    sshKeys:
      - "7e:9c:b7:ee:74:16:a5:f7:62:12:b1:72:dc:51:71:85"
  providerConfigRef:
    name: do-provider-config
Explanations for the above configuration:
- spec.forProvider - defines all metadata required by the DigitalOcean provider to provision a new Droplet, such as region, size, image, etc. Field values map directly to the DigitalOcean Droplet specification. Also, if you have SSH keys deployed to your DigitalOcean account, you can specify the fingerprint in the sshKeys field from the spec.
- spec.providerConfigRef - specifies a reference to a provider configuration CRD (explained in Step 3 - Creating an Egress Gateway using Crossplane). The provider configuration instructs the Droplet CRD how to connect to the DigitalOcean REST API, and what credentials to use (e.g. the DO API token).

Hint: You can also check this nice CRD viewer to see all the fields available for the Droplet kind in a human readable format (available only for the latest released version, which is 0.1.0 at this time of writing).

Under the hood, Crossplane delegates the real work to the DigitalOcean provider, which in turn uses the provider REST API to create a new Droplet based on the requirements from the Droplet CRD spec field. With all the features presented above at hand, you are able to build your own cloud platform using one or multiple providers. The possibilities are almost limitless. Please visit the official documentation page for more information about the product and available features. The DigitalOcean provider home page for Crossplane is available here.
In the next step, you will learn how to install and configure Crossplane for your DOKS cluster using Helm.
Crossplane is available as a Helm chart for easy installation, as well as for future upgrades. Follow the steps below to install Crossplane via Helm (version 3.x is required):
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
Next, search the crossplane-stable Helm repository for available charts to install:
helm search repo crossplane-stable
The output looks similar to:
NAME CHART VERSION APP VERSION DESCRIPTION
crossplane-stable/crossplane 1.9.0 1.9.0 Crossplane is an open source Kubernetes add-on ...
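If you want to pin a different chart version, you can list every version the repository publishes (a sketch):

# List all available versions of the Crossplane chart
helm search repo crossplane-stable/crossplane --versions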
HELM_CHART_VERSION="1.9.0"
helm install crossplane crossplane-stable/crossplane \
--version "${HELM_CHART_VERSION}" \
--namespace crossplane-system \
--create-namespace
Note:
A specific version of the Crossplane Helm chart is used. In this case, 1.9.0 is picked, which maps to version 1.9.0 of the application. In general, it's good practice to lock to a specific version. This helps produce predictable results and allows version control via Git.
Now, check if the Crossplane Helm chart was deployed to your cluster:
helm ls -n crossplane-system
The output looks similar to (the STATUS column value should be set to deployed):
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
crossplane crossplane-system 1 2022-09-10 20:40:20.903871 +0300 EEST deployed crossplane-1.9.0 1.9.0
Finally, verify the Crossplane deployment status:
kubectl get deployments -n crossplane-system
The output looks similar to:
NAME READY UP-TO-DATE AVAILABLE AGE
crossplane 1/1 1 1 3d19h
crossplane-rbac-manager 1/1 1 1 3d19h
All pods must be up and running (check the READY column). In the next step, a short introduction is given about the static routes operator used in this guide.
The main role of the Static Routes Operator is to manage entries in the Linux routing table of each worker node based on CRD spec. It is deployed as a DaemonSet, hence it will run on each node of your DOKS cluster.
The diagram below illustrates the operational concept:
Configuring static routes is done via StaticRoute CRDs. A typical example is shown below:
apiVersion: networking.digitalocean.com/v1
kind: StaticRoute
metadata:
  name: static-route-ifconfig.me
spec:
  destinations:
    - "34.160.111.145"
  gateway: "10.116.0.5"
Explanations for the above configuration:
- spec.destinations - a list of host IPs (or subnet CIDRs) to route through the gateway.
- spec.gateway - the gateway IP address used for routing the host(s)/subnet(s) specified in the destinations field.

Because the operator has access to the Linux routing table, special care must be taken and policies set so that only administrators have access. It's very easy to misconfigure the routing table, rendering the DOKS cluster unstable or unusable.
VERY IMPORTANT TO REMEMBER: You need to make sure not to add static routes containing CIDRs that overlap with DigitalOcean REST API endpoints (including DOKS)! Doing so will affect DOKS cluster functionality (Kubelets) and/or other internal services (e.g. Crossplane).
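To find the current public IP of a destination host you plan to route (such as ifconfig.me, used later in this guide), you can resolve it beforehand (a sketch; assumes dig is installed, and keep in mind that the destination IP can change over time):

# Resolve the destination host to its public IP address
dig +short ifconfig.me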
In the next step, you will learn how to install and configure the static routes operator.
The static routes operator is available as a single manifest file and is installed via kubectl. A dedicated namespace, named static-routes, is created as well. Please follow the steps below to install the static routes controller:
First, install the static routes operator. The command below deploys the 1.0.0 version:
kubectl apply -f https://raw.githubusercontent.com/digitalocean/k8s-staticroute-operator/main/releases/v1/k8s-staticroute-operator-v1.0.0.yaml
You can check the latest version in the releases path from the k8s-staticroute-operator GitHub repo.
Next, check if the operator Pods are up and running:
kubectl get pods -l name=k8s-staticroute-operator -n static-routes
Output looks similar to:
NAME READY STATUS RESTARTS AGE
k8s-staticroute-operator-9vp7g 1/1 Running 0 22m
k8s-staticroute-operator-mlfff 1/1 Running 0 22m
You can also check the operator logs:
kubectl logs -f ds/k8s-staticroute-operator -n static-routes
Output looks similar to:
[2022-08-24 14:42:13,625] kopf._core.reactor.r [DEBUG ] Starting Kopf 1.35.6.
[2022-08-24 14:42:13,625] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2022-08-24 14:42:13,626] kopf.activities.auth [DEBUG ] Activity 'login_via_pykube' is invoked.
[2022-08-24 14:42:13,628] kopf.activities.auth [DEBUG ] Pykube is configured in cluster with service account.
[2022-08-24 14:42:13,629] kopf.activities.auth [INFO ] Activity 'login_via_pykube' succeeded.
[2022-08-24 14:42:13,629] kopf.activities.auth [DEBUG ] Activity 'login_via_client' is invoked.
[2022-08-24 14:42:13,631] kopf.activities.auth [DEBUG ] Client is configured in cluster with service account.
[2022-08-24 14:42:13,632] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2022-08-24 14:42:13,632] kopf._core.engines.a [INFO ] Initial authentication has finished.
[2022-08-24 14:42:13,789] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
[2022-08-24 14:42:13,791] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for staticroutes.v1.networking.digitalocean.com cluster-wide.
...
If the output looks like the above, you installed the static routes operator successfully. In the next step, you will learn how to provision your first egress gateway Droplet using Crossplane.
To provision an Egress gateway Droplet on the DigitalOcean platform using Kubernetes and Crossplane, you need to follow a few steps:
Next, you will learn how to accomplish each of the above steps, to better understand the concepts and separation of concerns.
Important note: For each step, you will be using the default namespace in order to simplify testing. In practice, it is recommended to create the resources in a dedicated namespace, with proper RBAC policies set.
In Crossplane terms, a provider package is a bundle of assets required by Crossplane to provide additional functionality, such as CRD manifests and associated controller(s). Under the hood, everything is packed in a Docker image and distributed as such.
A typical Crossplane provider package CRD looks like below:
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-do
spec:
  package: "crossplane/provider-digitalocean:v0.1.0"
Explanations for the above configuration:
- spec.package - defines the provider package to download, which in essence is a Docker image, so it follows the standard naming convention.

To install the DigitalOcean provider used in this guide, please follow the steps below:
First, download the DigitalOcean provider install manifest from the container-blueprints repository, using curl:
curl -O https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/crossplane/do-provider-install.yaml
Then, open and inspect the manifest file using a text editor of your choice. For example, using VS Code:
code do-provider-install.yaml
Next, apply the manifest using kubectl:
kubectl apply -f do-provider-install.yaml
Or, directly from the container-blueprints repo (if you're OK with the default values):
kubectl apply -f https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/crossplane/do-provider-install.yaml
Finally, check if the provider was installed successfully in your DOKS cluster:
kubectl get providers
The output looks similar to:
NAME INSTALLED HEALTHY PACKAGE AGE
provider-do True True crossplane/provider-digitalocean:v0.2.0-rc.0.42.g0932045-main 4d17h
The INSTALLED and HEALTHY columns should both report True. Also, the PACKAGE should list the specific version used by the container-blueprints repo - crossplane/provider-digitalocean:v0.2.0-rc.0.42.g0932045-main.
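If the provider does not become healthy, you can inspect its status conditions and events for troubleshooting (a sketch):

# Show detailed status and events for the DigitalOcean provider package
kubectl describe provider provider-do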
The provider package CRD used in the container-blueprints repository uses a specific release from the main branch of the crossplane-contrib/provider-digitalocean repository. This is because it contains some important fixes merged from the following PRs - #60 and #61.
Next, you will learn how to configure the DigitalOcean provider to have access to the DigitalOcean REST API and manage resources on your behalf.
The DigitalOcean provider package installed previously needs a ProviderConfig CRD to operate properly. In order to perform REST operations, it needs to authenticate against the DigitalOcean REST API endpoint. For that, a valid DO API token is required, stored in a Kubernetes secret (as a base64 encoded value).
A typical ProviderConfig CRD looks like below:
apiVersion: do.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: do-provider-config
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: do-api-token
      key: token
Explanations for the above configuration:
- spec.credentials.source - defines a source where credentials are stored (e.g. a Kubernetes Secret).
- spec.credentials.secretRef - tells the provider how to access the secret, such as what namespace it was created in, its name, and what key contains the DO API token value as a base64 encoded string.

To install the DigitalOcean provider configuration used in this guide, please follow the steps below:
First, download the DigitalOcean provider configuration manifest from the container-blueprints repository, using curl:
curl -O https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/crossplane/do-provider-config.yaml
Then, open and inspect the manifest file using a text editor of your choice. For example, using VS Code:
code do-provider-config.yaml
Next, replace the <> placeholders in the do-api-token Kubernetes Secret CRD. Please run the below command to generate the base64 encoded string from your DO API token (the -n flag prevents a trailing newline from being encoded):
echo -n "<YOUR_DO_API_TOKEN_HERE>" | base64
Finally, apply the manifest using kubectl:
kubectl apply -f do-provider-config.yaml
Finally, check if the provider configuration resource was created successfully:
kubectl get providerconfigs -o wide
The output looks similar to:
NAME AGE SECRET-NAME
do-provider-config 4d18h do-api-token
If the output looks like the above, you successfully configured the DigitalOcean provider. Notice the SECRET-NAME column value - it references the secret storing your DO API token.
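As a side note, the do-api-token Secret can also be created directly with kubectl, which base64-encodes the value for you (a sketch, assuming the same namespace, name, and key referenced by the ProviderConfig above):

# Create the secret holding the DO API token (kubectl handles the base64 encoding)
kubectl create secret generic do-api-token \
  --namespace crossplane-system \
  --from-literal=token="<YOUR_DO_API_TOKEN_HERE>"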
Please note that all the steps taken so far must be performed only once. Once a provider is installed and configured, you can reference it in any resource you want to create, such as Droplets, managed databases, etc. The only intervention required over time is when you need to upgrade a provider package to get new functionality or if you need to update your DO API token.
Next, you will take the final step and provision the egress gateway droplet for your egress setup.
The egress gateway Droplet and DOKS cluster must be in the same VPC (or on the same network) for the whole setup to work.
Now that the DigitalOcean provider is installed and properly configured, you can create the Droplet resource to act as an egress gateway for the demo used in this blueprint.
The Crossplane Droplet CRD used in this blueprint looks like below:
apiVersion: compute.do.crossplane.io/v1alpha1
kind: Droplet
metadata:
  name: egress-gw-nyc1
spec:
  forProvider:
    region: nyc1
    size: s-1vcpu-1gb
    image: ubuntu-20-04-x64
    vpcUuid: 4b5e125e-c52e-4578-93a7-01341ee927ac
    sshKeys:
      - "7e:9c:b7:ee:74:16:a5:f7:62:12:b1:72:dc:51:71:85"
    userData: |
      #!/usr/bin/env bash
      # Install dependencies
      echo iptables-persistent iptables-persistent/autosave_v4 boolean true | debconf-set-selections
      echo iptables-persistent iptables-persistent/autosave_v6 boolean true | debconf-set-selections
      apt-get update
      apt-get -y install iptables iptables-persistent curl
      # Enable IP forwarding
      echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
      sysctl -p /etc/sysctl.conf
      # Configure iptables for NAT
      PRIVATE_NETWORK_INTERFACE_IP="$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)"
      PRIVATE_NETWORK_CIDR="$(ip route show src $PRIVATE_NETWORK_INTERFACE_IP | awk '{print $1}')"
      PUBLIC_INTERFACE_NAME="$(ip route show default | awk '{print $5}')"
      iptables -t nat -A POSTROUTING -s "$PRIVATE_NETWORK_CIDR" -o "$PUBLIC_INTERFACE_NAME" -j MASQUERADE
      iptables-save > /etc/iptables/rules.v4
  providerConfigRef:
    name: do-provider-config
The meaning of the fields is the same as explained at the beginning of this blueprint guide, when Crossplane was introduced. The only exception is the spec.forProvider.userData field, which allows you to specify custom cloud-init scripts for the Droplet (e.g., Bash scripts, denoted by the first line value - #!/usr/bin/env bash).
In the above example, the cloud-init script's main role is to initialize the Droplet to act as a NAT gateway. The example is heavily based on this guide provided by DigitalOcean. The script performs the following steps:
- Declares Bash as the script interpreter via the shebang line - #!/usr/bin/env bash.
- Installs the required dependencies - iptables, iptables-persistent (persists iptables rules on reboot), etc. Please bear in mind that the installation commands are Linux distribution specific (the above example is Ubuntu specific, which in turn is based on Debian).
- Enables IP forwarding via the net.ipv4.ip_forward sysctl setting. Configuration is persisted in the /etc/sysctl.conf file to survive machine reboots.
- Configures iptables rules for NAT, saved via the iptables-save command to persist on reboots.

To create the egress gateway Droplet via Kubernetes, please follow the steps below:
First, download the egress gateway Droplet manifest from the container-blueprints repository, using curl:
curl -O https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/crossplane/egress-gw-droplet.yaml
Then, open and inspect the manifest file using a text editor of your choice. For example, using VS Code:
code egress-gw-droplet.yaml
If you have specific SSH keys you want to add at Droplet creation time, you can do so by uncommenting the sshKeys field from the spec. Then, replace the <> placeholders with your SSH key fingerprint. To list the SSH keys associated with your account and their fingerprints, issue the following command - doctl compute ssh-key list.
The egress gateway Droplet and DOKS cluster must be in the same VPC. Use the following command to get your VPC ID - doctl vpcs list - and set it by uncommenting the vpcUuid field from the spec.
Next, apply the manifest using kubectl:
kubectl apply -f egress-gw-droplet.yaml
Or, directly from the container-blueprints repo (if you're OK with the default values):
kubectl apply -f https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/crossplane/egress-gw-droplet.yaml
Finally, check if the Droplet Kubernetes resource was created successfully:
kubectl get droplets -o wide
The output looks similar to:
NAME PRIVATE IPV4 PUBLIC IPV4 READY REGION SIZE SYNCED
egress-gw-nyc1 10.116.0.4 192.81.213.125 True nyc1 s-1vcpu-1gb True
The PRIVATE IPV4 and PUBLIC IPV4 columns will receive values only after the Droplet is successfully provisioned. You can also check the Events output from the below command:
kubectl describe droplet egress-gw-nyc1
The READY and SYNCED columns should print True. If the output looks like above, you configured the Droplet CRD successfully. Also, if you navigate to the Droplets web panel from your DigitalOcean account, you should see the new resource created. You should receive an email as well, with additional details about your newly created Droplet.
Finally, you can check if the cloud-init script ran successfully and made the required changes to enable IP forwarding and NAT functionality. SSH into the egress gateway Droplet, then follow the steps below (all commands must be run as root, or via sudo):
First, check if IP forwarding is enabled:
cat /proc/sys/net/ipv4/ip_forward
# Should output: 1, meaning it's enabled
Next, list the iptables NAT rules:
iptables -L -t nat
The output looks similar to (the POSTROUTING chain should print the MASQUERADE target for your VPC subnet):
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.116.0.0/20 anywhere
If all checks pass, you configured the egress gateway Droplet successfully. Next, you will learn how to define and create static routes for single as well as multiple destinations using the egress gateway created earlier via Crossplane.
The sample CRDs provided in this blueprint create a static route to two different websites which report back your public IP - ifconfig.me/ip, and ipinfo.io/ip.
To test the setup, download each sample manifest:
# Example for ifconfig.me
curl -O https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/static-routes/ifconfig-me-example.yaml
# Example for ipinfo.io
curl -O https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/static-routes/ipinfo-io-example.yaml
After downloading the manifests, replace the <> placeholders in each manifest file. To find out the private IPv4 address of your egress gateway droplet, please run the below command (assuming the Droplet name is egress-gw-nyc1):
kubectl get droplet egress-gw-nyc1 -o jsonpath="{.status.atProvider.privateIPv4}"
Next, save changes and apply each manifest using kubectl:
# Example for ifconfig.me
kubectl apply -f ifconfig-me-example.yaml
# Example for ipinfo.io
kubectl apply -f ipinfo-io-example.yaml
The above commands create the static route custom resources in the default namespace. In production environments (and not only), it's best to have a dedicated namespace with RBAC policies set.
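Alternatively, you can generate and apply a StaticRoute manifest in one step, filling in the gateway IP automatically (a sketch; assumes the egress gateway Droplet is named egress-gw-nyc1 and reuses the ifconfig.me destination IP from the examples above):

# Look up the egress gateway's private IP and create the StaticRoute in one go
GATEWAY_IP="$(kubectl get droplet egress-gw-nyc1 -o jsonpath='{.status.atProvider.privateIPv4}')"

kubectl apply -f - <<EOF
apiVersion: networking.digitalocean.com/v1
kind: StaticRoute
metadata:
  name: static-route-ifconfig.me
spec:
  destinations:
    - "34.160.111.145"
  gateway: "$GATEWAY_IP"
EOF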
Next, check if the static route resources were created:
kubectl get staticroutes -o wide
The output looks similar to (egress gateway has private IP 10.116.0.5 in below example):
NAME DESTINATIONS GATEWAY AGE
static-route-ifconfig.me ["34.160.111.145"] 10.116.0.5 7m2s
static-route-ipinfo.io ["34.117.59.81"] 10.116.0.5 4s
Finally, check if the custom static routes were created on each worker node. SSH into each node and run:
route -n
The output looks similar to (the irrelevant lines were omitted from the output for better visibility):
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 206.81.0.1 0.0.0.0 UG 0 0 0 eth0
...
34.117.59.81 10.116.0.5 255.255.255.255 UGH 0 0 0 eth1
34.160.111.145 10.116.0.5 255.255.255.255 UGH 0 0 0 eth1
Next, inspect the state of each static route resource, using static-route-ifconfig.me as an example:
kubectl describe staticroute static-route-ifconfig.me
The output looks similar to (only the last relevant lines are shown for simplicity):
Spec:
  Destinations:
    34.160.111.145
  Gateway:  10.116.0.4
Status:
  create_fn:
    Destination:  34.160.111.145
    Gateway:      10.116.0.4
    Status:       Ready
Events:  <none>
Looking at the above output, you can see each route's details, including its status (Ready or NotReady), in the Status field.
In case of issues, you can check the operator events and logs:
kubectl get events -n static-routes
kubectl logs -n static-routes ds/k8s-staticroute-operator > k8s-staticroute-operator.log
Then, open the saved log file with a text editor of your choice. For example, using VS Code:
code k8s-staticroute-operator.log
To test your egress setup, you need to check if all the requests originating from your DOKS cluster travel via a single node - the egress gateway. The simplest way to test is to make a curl request to ifconfig.me/ip and check if the response contains your egress gateway's public IP address.
First, you need to create the curl-test pod in your DOKS cluster (the default namespace is used):
kubectl apply -f https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/curl-test.yaml
Verify that the pod is up and running (in the default namespace):
kubectl get pods
The output looks similar to (curl-test pod Status should be Running):
NAME READY STATUS RESTARTS AGE
curl-test 1/1 Running 0 7s
Then, perform an HTTP request to ifconfig.me, from the curl-test pod:
kubectl exec -it curl-test -- curl ifconfig.me
The output looks similar to:
192.81.213.125
The resulting IP address that gets printed should be the public IP address that was assigned to your egress gateway Droplet. You can check your NAT gateway Droplet public IPv4 address via kubectl:
kubectl get droplet egress-gw-nyc1 -o jsonpath="{.status.atProvider.publicIPv4}"
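You can also compare the two values in one shot (a sketch, using the curl-test Pod and the egress-gw-nyc1 Droplet created earlier):

# Compare the Droplet's public IP with the IP observed by the test Pod
EGRESS_IP="$(kubectl get droplet egress-gw-nyc1 -o jsonpath='{.status.atProvider.publicIPv4}')"
OBSERVED_IP="$(kubectl exec curl-test -- curl -s ifconfig.me)"

if [ "$EGRESS_IP" = "$OBSERVED_IP" ]; then
  echo "Egress OK - cluster traffic exits via ${EGRESS_IP}"
else
  echo "Mismatch - the Pod egresses via ${OBSERVED_IP}, expected ${EGRESS_IP}"
fi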
You need to make sure not to add static routes containing CIDRs that overlap with DigitalOcean REST API endpoints (including DOKS)! Doing so will affect DOKS cluster functionality (Kubelets) and/or other internal services (e.g. Crossplane).
So far, you configured the static routes controller to egress cluster traffic for specific destinations only. But, you can also use the static routes controller to egress cluster traffic for multiple destinations (or public CIDRs) as well (some limitations apply, though, and explained below).
If you change the default gateway in the Linux routing table on each node to point to the custom egress gateway private IP, you can route all outbound traffic via the custom gateway. But there's an issue with this approach - the internal services running in the cluster that need access to the DigitalOcean public API will no longer work, thus making the cluster unstable. Resources provisioned via Crossplane won't work either.
To solve the above issue, you can use another approach where you create static routes for all public CIDRs, except the ones that overlap with DigitalOcean API public endpoints. There is a ready-to-use sample provided in this tutorial that allows us to achieve this goal, called public-egress-example.
Usually, you need to set this only once, but please be mindful of what IP ranges you use. The static routes controller cannot tell whether some ranges overlap with each other or with DigitalOcean REST API public endpoints. Just to be safe, you can test on a small cluster that is safe to discard if something goes wrong. The overlapping CIDRs are already commented out in the provided example. You can subdivide those CIDRs even further and remove the exact ranges that overlap with DigitalOcean REST API endpoints.
Follow the below steps to apply the public CIDRs example from this guide:
curl -O https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/static-routes/public-egress-example.yaml
Then, open and inspect the manifest file using a text editor of your choice. For example, using VS Code:
code public-egress-example.yaml
Next, replace the <> placeholders for the gateway spec field, then save and apply the manifest using kubectl:
kubectl apply -f public-egress-example.yaml
Now, check if all routes were created by SSH-ing into each node and running route -n. All entries from the public-egress-example.yaml manifest should be present. Also, you can use kubectl describe on each resource and check its status.
How do you check all the public IP address ranges used by DigitalOcean? There are two options available:
If you want to clean up all the resources associated with this guide, you can do so for each major component as follows.
To clean up the operator and associated resources, please run the following kubectl command (make sure you're using the same release version as in the install step):
kubectl delete -f https://raw.githubusercontent.com/digitalocean/k8s-staticroute-operator/main/releases/v1/k8s-staticroute-operator-v1.0.0.yaml
Note:
The above command will also delete the associated namespace (static-routes). Make sure to back up your CRDs first, if needed later.
The output looks similar to:
customresourcedefinition.apiextensions.k8s.io "staticroutes.networking.digitalocean.com" deleted
serviceaccount "k8s-staticroute-operator" deleted
clusterrole.rbac.authorization.k8s.io "k8s-staticroute-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "k8s-staticroute-operator" deleted
daemonset.apps "k8s-staticroute-operator" deleted
Check the routes on each worker node after SSH-ing:
route -n
The custom static routes should not be present in the routing table output.
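For a quick check on each node, you can grep the routing table for the destination IPs used earlier (a sketch; no output means the routes are gone):

# Should print nothing once the static routes have been cleaned up
route -n | grep -E '34.160.111.145|34.117.59.81'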
Finally, the curl-test pod should report back the public IP of the worker node where it runs:
# Inspect the node where the curl-test Pod runs:
kubectl get pod curl-test -o wide
The output looks similar to (write down the node name from the NODE column):
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-test 1/1 Running 2 (45m ago) 165m 10.244.0.140 basicnp-7micg <none> <none>
The above example reports basicnp-7micg.
Check the worker node public IP:
kubectl get nodes -o wide
The output looks similar to (note the public IP of the associated node where the curl-test Pod runs):
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
basicnp-7micg Ready <none> 3h20m v1.23.9 10.116.0.2 206.189.231.90 Debian GNU/Linux 10 (buster) 5.10.0-0.bpo.15-amd64 containerd://1.4.13
basicnp-7micw Ready <none> 3h20m v1.23.9 10.116.0.3 206.81.2.154 Debian GNU/Linux 10 (buster) 5.10.0-0.bpo.15-amd64 containerd://1.4.13
The above example reports 206.189.231.90.
Exec the ifconfig.me/ip curl:
kubectl exec -it curl-test -- curl ifconfig.me/ip
The output looks similar to:
206.189.231.90
The response should include the original public IP of the worker node where the curl-test Pod runs.
Removing the egress gateway droplet is just a matter of deleting the associated CRD (please bear in mind that this process is destructive, and you cannot revert):
kubectl delete -f https://raw.githubusercontent.com/digitalocean/container-blueprints/main/DOKS-Egress-Gateway/assets/manifests/crossplane/egress-gw-droplet.yaml
After running the above command, the associated Droplet resource should be destroyed and removed from your DigitalOcean account.
Whenever you delete a resource in Kubernetes, the associated controller puts a finalizer on the respective object in the metadata field. The snippet below shows the Finalizers field in the Static Route CRD:
Name: staticroutes.networking.digitalocean.com
Namespace:
Labels: <none>
Annotations: provider: digitalocean
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
Metadata:
Creation Timestamp: 2022-09-19T15:05:34Z
Deletion Timestamp: 2022-09-27T14:51:10Z
Finalizers:
customresourcecleanup.apiextensions.k8s.io
...
Kubernetes checks for the finalizer field first and if found, it will not delete the resource from the cluster until the associated controller finishes its job internally. If everything goes as planned, the controller removes the finalizer field from the object, and then and only then, Kubernetes moves forward with the actual deletion of the resource.
The main role of a finalizer is to allow the associated controller to finish its job internally before actual object deletion.
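To inspect the finalizers on a live object, you can query its metadata directly (a sketch, using the static-route-ifconfig.me resource from earlier):

# Print the finalizers (if any) attached to the StaticRoute object
kubectl get staticroute static-route-ifconfig.me -o jsonpath='{.metadata.finalizers}'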
If, for some reason, controller logic is unable to process the request and remove the finalizer field, the resource hangs in a Terminating state. Then, kubectl freezes because the Kubernetes API cannot process the request.
As a side effect, this warning is reported in the static routes controller logs:
[2022-09-27 14:58:11,405] kopf._core.reactor.o [WARNING ] Non-patchable resources will not be served: {staticroutes.v1.networking.digitalocean.com}
Why does this happen, and how do I recover?
It can happen for various reasons - one such example is when the controller is down because of an upgrade, hence it is unable to process requests. In this case the CRD remains in an inconsistent state, and the only way to recover is via:
kubectl patch staticroute <YOUR_STATIC_ROUTE_RESOURCE_NAME_HERE> -p '{"metadata": {"finalizers": []}}' --type merge
The above command removes the finalizer field from the resource, thus allowing Kubernetes to proceed with the actual object deletion. Consequently, the Linux routing table still holds the old route entries. You have several options here. One of them is to restart the static routes operator DaemonSet:
kubectl rollout restart -n static-routes ds/k8s-staticroute-operator

In this tutorial, you learned how to use Crossplane to create and manage an egress gateway resource for your DOKS cluster. This way, external services (e.g. a database) will see a single source IP in the packets coming from your DOKS cluster, thus making firewall rules management easier on the other end. Also, you learned how to use the static routes operator to manage specific routes for the egress functionality.