
Blue-green deployment on Kubernetes means running two full application environments (blue and green) and switching live traffic from one to the other in a single step. You avoid the partial rollouts and mixed-version traffic of rolling updates, and you get a clear cutover point plus instant rollback by switching traffic back.
Many teams deploy by SSH-ing into Droplets, pulling code, and restarting processes. That pattern introduces planned downtime, inconsistent state, and no formal rollback path. Moving to DigitalOcean Kubernetes (DOKS) with Gateway API and a managed database gives you a repeatable, zero-downtime release path: deploy the new version alongside the old one, validate it, then point traffic at the new environment. If something goes wrong, you point traffic back without redeploying.
This tutorial walks you through creating blue and green Deployments and Services, a Gateway API Gateway and HTTPRoute for routing, and connecting both environments to a DigitalOcean Managed PostgreSQL or MySQL instance in the same VPC. By the end you can run a cutover and rollback with a single routing change.
- **Blue-green removes deploy-time downtime.** Two environments (blue and green) run at once; only one receives live traffic. Deploy and test on the inactive side, then switch traffic in one change.
- **Gateway API gives explicit routing.** The Gateway is the entry point; the HTTPRoute decides which Service receives traffic. Switch between blue and green by updating only the HTTPRoute.
- **DOKS plus a managed database in the same VPC** provides private connectivity and a stable base for blue and green apps that share one database or use separate databases.
- **Cutover is a single routing change; rollback is the same change in reverse.** No pod restarts or redeploys required.
- **Validate readiness and DB connectivity before cutover.** If green is not ready or cannot reach the database, switching traffic will cause errors.
Blue-green deployment in Kubernetes is the practice of maintaining two complete application environments (blue and green), each backed by its own Deployment and Service. Only one environment receives production traffic at any time. You deploy the new version to the inactive environment; that environment acts as your final staging area. It mirrors production (same cluster, same database, same routing layer) so you can run smoke tests and validation there before any user traffic hits it. When satisfied, you release by switching traffic to it. The previously active environment stays running, so rollback is the same mechanism in reverse: switch traffic back. No redeploy, no pod restarts.
The switch is a routing change. In this guide, Gateway API’s HTTPRoute backs a Gateway (and thus the load balancer). You change the route’s backend from app-blue to app-green (cutover) or back to app-blue (rollback); the Gateway and load balancer stay the same.
Gateway API is a Kubernetes SIG-Network project that generalizes Ingress. The Gateway represents the entry point (on DOKS, tied to a Load Balancer); HTTPRoute describes how requests are forwarded to Services. You can have multiple HTTPRoutes attached to one Gateway and target different Services. That makes it straightforward to route 100% of traffic to the blue Service or 100% to the green Service by changing only the HTTPRoute; no need to edit the Gateway or the cluster’s default Ingress.
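To illustrate, the only field that changes at cutover is the HTTPRoute's backend reference (an excerpt, not a complete manifest; the full HTTPRoute appears later in this guide):

```yaml
# Excerpt of an HTTPRoute rule; switching this one name is the entire cutover.
backendRefs:
- name: app-blue   # change to app-green to cut over, back to app-blue to roll back
  port: 80
```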
**When to use blue-green:** zero-downtime releases with a single cutover and instant rollback; stateless apps, or apps whose state lives in a shared database with backward-compatible schema changes (or separate DBs per environment).

**Reconsider when:** long-lived connections (e.g. WebSockets) cannot be drained gracefully; schema changes are not backward compatible and you cannot run two app versions against the same DB; or resource constraints prevent running two full copies.
| Strategy | Traffic during deploy | Cutover moment | Rollback | Use case |
|---|---|---|---|---|
| Blue-Green | One env at a time | Single switch | Switch route back | Zero downtime, clear rollback |
| Rolling | Mixed old and new pods | Gradual | Rollback Deployment | Default Kubernetes; version mixing |
| Canary | Split % to new version | Gradual shift | Route/weight change | Risk reduction, gradual validation |
| Recreate | All pods terminated first | After new up | Redeploy previous | Dev/test; downtime acceptable |
| Component | Role |
|---|---|
| DOKS cluster | VPC-native Kubernetes cluster; runs blue and green Deployments and Services. |
| Gateway | Gateway API resource that provisions the entry point; on DOKS this is implemented by the load balancer. |
| HTTPRoute | Routes traffic from the Gateway to a specific Service (blue or green). One route, one backend; switch by changing the backend Service. |
| Blue / Green Deployments | Two Deployments, each running one version of the app (e.g., app-blue, app-green). |
| Blue / Green Services | Two Services (e.g., app-blue, app-green) selecting the corresponding Deployment’s pods. |
| Managed Database | DigitalOcean Managed PostgreSQL or MySQL in the same VPC; apps connect via private hostname and port. |
Traffic flow: External traffic hits the load balancer (created by the Gateway). The Gateway forwards to the backend referenced by the HTTPRoute (e.g. app-green Service). The Service sends traffic to pods of the green Deployment. Pods connect to the managed database over the VPC (private network). Blue and green can share one database (same connection string) or use separate databases.
Before starting, ensure you have: a DOKS cluster with kubectl configured to access it, a DigitalOcean Managed PostgreSQL or MySQL database in the same VPC as the cluster, and a container image of your application in a registry the cluster can pull from. This tutorial uses the namespace production. Create it if needed. If you use another namespace, replace production in all commands and manifests.

Verify cluster access:
kubectl get nodes
Expected output (example):
NAME STATUS ROLES AGE VERSION
pool-xxx-yyyy-abcde Ready <none> 5d v1.28.x
pool-xxx-yyyy-fghij Ready <none> 5d v1.28.x
If the list is empty or nodes are NotReady, configure kubectl for your DOKS cluster and ensure the cluster is healthy before continuing.
Create the namespace:
kubectl create namespace production
Expected output:
namespace/production created
If the namespace already exists, the command fails with an AlreadyExists error, which you can safely ignore. For an idempotent alternative, use `kubectl create namespace production --dry-run=client -o yaml | kubectl apply -f -`.
DOKS does not ship the Gateway API by default. Install the CRDs, then a controller that implements the Gateway (this guide uses Cilium). Alternatives include Istio and Envoy Gateway.
kubectl get gatewayclass
If you see a table with at least one GatewayClass (e.g. NAME ... CONTROLLER ... AGE), you may already have a controller. Note the NAME (e.g. cilium) for the Gateway’s spec.gatewayClassName, then skip the remainder of Step 1 and continue with Step 2. If you see No resources found or an error that the resource type is unknown, continue below.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
Expected output (example):
customresourcedefinition.apiextensions.k8s.io/backendtlspolicies.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
...
This example uses the Cilium Helm chart with Gateway API support. Replace 1.19.0 with a supported Cilium version if needed. Note that DOKS provisions Cilium as its default CNI: if kubectl get pods -n kube-system -l k8s-app=cilium already shows a managed installation, enable Gateway API on that installation instead of installing a second copy, and see the Cilium installation documentation for CNI replacement scenarios.
Add the Helm repository and install:
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.19.0 \
--namespace kube-system \
--set gatewayAPI.enabled=true
Expected output (example):
NAME: cilium
LAST DEPLOYED: ...
NAMESPACE: kube-system
STATUS: deployed
...
Wait for Cilium to be ready:
kubectl -n kube-system rollout status ds/cilium --timeout=300s
Expected output:
daemonset "cilium" successfully rolled out
kubectl get gatewayclass
Expected output (example):
NAME CONTROLLER AGE
cilium io.cilium/gateway-controller 2m
Note the NAME value (e.g. cilium). Use this as spec.gatewayClassName when creating the Gateway in Step 4. If the list is empty, the controller is not ready; check Cilium pods with kubectl get pods -n kube-system -l k8s-app=cilium and controller logs.
Alternatives: Istio and Envoy Gateway also implement the Gateway API. Install their CRDs and controller according to their documentation, then use their GatewayClass name in the Gateway manifest.
Pods need the database connection string from your DigitalOcean Managed Database. Use the private network hostname and add the cluster to Trusted sources so traffic stays inside the VPC.
Private vs public hostname: The private hostname (e.g. db-postgresql-xxx.db.ondigitalocean.com) is only reachable from resources in the same VPC. Use it for all app connections. The public hostname is for access from outside the VPC; do not use it for blue/green apps in the cluster.
Example connection string formats:

- PostgreSQL: `postgresql://USER:PASSWORD@PRIVATE_HOST:25060/DATABASE?sslmode=require`
- MySQL: `mysql://USER:PASSWORD@PRIVATE_HOST:25060/DATABASE`

Replace USER, PASSWORD, PRIVATE_HOST, and DATABASE with the values from the control panel, and use the port shown in your database's connection details (25060 is the default).
**Important: URL-encode special characters in passwords.** If your database password contains special characters such as `@`, `:`, `/`, `?`, or `#`, URL-encode it before placing it in the connection string; otherwise the string may be parsed incorrectly and cause authentication failures. For example, a password like `p@ss:word/123` must be encoded as `p%40ss%3Aword%2F123`. You can generate the encoded form with a local tool such as `python -c "import urllib.parse; print(urllib.parse.quote('p@ss:word/123', safe=''))"` (the `safe=''` argument ensures `/` is encoded too; by default `quote` leaves it as-is).
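If you encode passwords often, the same Python call can be wrapped in a small shell function (a sketch; `urlencode` is a name chosen here, not a standard tool):

```shell
# urlencode STR — percent-encode STR for safe use in a DB connection string.
# safe='' ensures '@', ':', and '/' are all encoded, not just some of them.
urlencode() {
  python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=''))" "$1"
}

urlencode 'p@ss:word/123'   # prints p%40ss%3Aword%2F123
```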
In the control panel, add the DOKS cluster to the database's Trusted sources. This allows the cluster's nodes and pods to reach the database over the private network. See Secure with trusted sources.
Create a Secret in the production namespace. Use the exact hostname, port, database name, user, and password from the Private network tab.
PostgreSQL example:
kubectl create secret generic db-credentials -n production \
--from-literal=url='postgresql://USER:PASSWORD@PRIVATE_HOST:25060/DATABASE?sslmode=require'
MySQL example:
kubectl create secret generic db-credentials -n production \
--from-literal=url='mysql://USER:PASSWORD@PRIVATE_HOST:25060/DATABASE'
Replace USER, PASSWORD, PRIVATE_HOST, 25060 (if different), and DATABASE with your values. Both blue and green Deployments will reference this Secret.
Verify the Secret:
kubectl get secret db-credentials -n production
Expected output:
NAME TYPE DATA AGE
db-credentials Opaque 1 10s
If the Secret is missing, repeat the kubectl create secret command and check the namespace.
Define two Deployments that differ by name and the version label (blue vs green) so the Services can select them. Use the same image and the db-credentials Secret in both. When you release a new version, update the green Deployment’s image tag and apply.
Placeholders: Replace registry.digitalocean.com/your-registry/myapp:v1 with your image. Replace v1 with the version tag for blue; use a newer tag for green when cutting over.
Save the following as app-blue.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
  namespace: production
  labels:
    app: myapp
    version: blue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: app
        image: registry.digitalocean.com/your-registry/myapp:v1
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
```
If the app does not expose /ready or /health, use the path it does expose or a tcpSocket probe on the app port. Readiness controls whether the pod receives traffic from the Service; the load balancer only sends traffic to Ready pods.
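For example, a tcpSocket readiness probe that only checks whether the port accepts connections (a drop-in replacement for the httpGet probe above, assuming the app listens on 8080):

```yaml
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```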
Create a similar manifest for app-green: change the name to app-green, change every version: blue label to version: green (in metadata, the selector, and the pod template), and use the new image tag for green when deploying a release. Apply both:
kubectl apply -f app-blue.yaml
kubectl apply -f app-green.yaml
Expected output (example):
deployment.apps/app-blue created
deployment.apps/app-green created
Verify both Deployments are ready:
kubectl get deployments -n production -l app=myapp
Expected output (example):
NAME READY UP-TO-DATE AVAILABLE AGE
app-blue 2/2 2 2 1m
app-green 2/2 2 2 1m
If READY is less than the replicas count, check pod status and logs: kubectl get pods -n production -l app=myapp and kubectl logs -n production -l version=green --tail=50.
Each environment has its own Service so the HTTPRoute can target one or the other.
Save the following as services.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-blue
  namespace: production
  labels:
    app: myapp
spec:
  selector:
    app: myapp
    version: blue
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-green
  namespace: production
  labels:
    app: myapp
spec:
  selector:
    app: myapp
    version: green
  ports:
  - port: 80
    targetPort: 8080
```
Apply and verify:
kubectl apply -f services.yaml
Expected output:
service/app-blue created
service/app-green created
kubectl get svc -n production -l app=myapp
Expected output (example):
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-blue ClusterIP 10.245.x.x <none> 80/TCP 30s
app-green ClusterIP 10.245.x.y <none> 80/TCP 30s
Create a Gateway and an HTTPRoute that sends traffic to the blue Service initially. Replace cilium in gatewayClassName with the GatewayClass name from Step 1 (GatewayClass verification) if different.
Save the following as gateway.yaml:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public
  namespace: production
spec:
  gatewayClassName: cilium
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```
Production note: This example uses HTTP (port 80) for simplicity. In a real production environment, you should configure HTTPS/TLS with certificates (for example using cert-manager and a TLS listener on the Gateway). Serving production traffic over plain HTTP is not recommended.
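As a sketch of the TLS variant, an additional HTTPS listener could be appended to the Gateway's listeners list; this assumes a TLS certificate stored in a Secret named myapp-tls (for example, issued by cert-manager) in the same namespace:

```yaml
# Hypothetical HTTPS listener; myapp-tls is an assumed Secret name.
- name: https
  protocol: HTTPS
  port: 443
  tls:
    mode: Terminate
    certificateRefs:
    - name: myapp-tls
```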
Save the following as httproute.yaml:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
  namespace: production
spec:
  parentRefs:
  - name: public
    namespace: production
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-blue
      port: 80
```
Apply both:
kubectl apply -f gateway.yaml
kubectl apply -f httproute.yaml
Expected output (example):
gateway.gateway.networking.k8s.io/public created
httproute.gateway.networking.k8s.io/myapp created
Confirm the Gateway has an address and is programmed:
kubectl get gateway -n production
Expected output (example):
NAME CLASS ADDRESS PROGRAMMED AGE
public cilium 159.89.123.45 True 45s
If ADDRESS is empty or PROGRAMMED is False, the controller has not finished. On DigitalOcean, Load Balancer provisioning typically takes several minutes, so wait 3–5 minutes and run the command again. If PROGRAMMED stays False, run kubectl get gatewayclass and ensure the Gateway's spec.gatewayClassName matches a GatewayClass name; also check the Cilium pods and controller logs in kube-system.
Test the green environment without sending it production traffic. From inside the cluster you can hit the green Service by name.
Run a one-off pod and curl the green Service (replace the path if your app uses something other than /):
kubectl run -it --rm curl --image=curlimages/curl --restart=Never -n production -- curl -s http://app-green.production.svc.cluster.local/
Expected output: the HTTP response body from the green app (e.g. HTML or JSON). If you see connection refused or timeout, check that green pods are Ready and that the Service selector matches the green Deployment’s labels.
If the app has a health or version endpoint, call it as well. Fix any DB connection or startup issues before cutover. This is the final staging check before green receives user traffic.
Before switching traffic to green, confirm that the green Deployment shows full READY counts, the internal smoke test against the app-green Service succeeds, and green can reach the database.
To see the cutover, run a loop that hits the load balancer every few seconds. Replace EXTERNAL_IP with the Gateway’s ADDRESS from kubectl get gateway -n production.
In a terminal:
while true; do curl -s http://EXTERNAL_IP/; echo; sleep 2; done
You should see responses from the current version (blue). Leave this running. In another terminal, update the HTTPRoute to send traffic to green:
kubectl patch httproute myapp -n production --type=json -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"app-green"}]'
Expected output:
httproute.gateway.networking.k8s.io/myapp patched
Within a few seconds the loop output should switch to the new version. Stop the loop with Ctrl+C. Traffic is now on green.
Rollback: Point the route back to blue. No redeploy or pod restarts:
kubectl patch httproute myapp -n production --type=json -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"app-blue"}]'
Traffic returns to blue. Then investigate green (logs, DB, metrics) before the next release attempt.
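Since the cutover and rollback patches differ only in the backend name, a small shell wrapper keeps the JSON patch in one place. This is a sketch; the route name myapp and namespace production match the manifests in this guide:

```shell
# build_patch NAME — emit the JSON patch that points the HTTPRoute at NAME.
build_patch() {
  printf '[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"%s"}]' "$1"
}

# switch_traffic NAME — patch the route; NAME must be app-blue or app-green.
switch_traffic() {
  case "$1" in
    app-blue|app-green) ;;
    *) echo "usage: switch_traffic app-blue|app-green" >&2; return 1 ;;
  esac
  kubectl patch httproute myapp -n production --type=json -p "$(build_patch "$1")"
}

# Cutover:  switch_traffic app-green
# Rollback: switch_traffic app-blue
```

Rejecting unknown names guards against typos that would otherwise patch the route to a nonexistent backend and serve errors.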
After cutover, watch metrics and be ready to roll back by switching the HTTPRoute back to blue.
Watch pod status with kubectl get pods -n production -l app=myapp and make sure green pods are not restarting repeatedly. The following script spot-checks the external endpoint; run it from a machine that can reach the Gateway's external IP, and replace EXTERNAL_IP and the path as needed.
```bash
#!/bin/bash
EXTERNAL_IP="159.89.123.45"
REQ_PATH="/"
for i in 1 2 3 4 5; do
  CODE=$(curl -s -o /dev/null -w "%{http_code}" "http://${EXTERNAL_IP}${REQ_PATH}")
  if [ "$CODE" != "200" ]; then
    echo "HTTP $CODE at $(date)"
    exit 1
  fi
  sleep 2
done
echo "OK"
```
If the script exits non-zero, investigate the app and Gateway, rolling back first if users are affected.
| Observation | Action |
|---|---|
| Error rate or latency spike | Roll back (patch HTTPRoute to app-blue) |
| Green pods not Ready | Fix green or roll back |
| DB errors in green logs | Fix DB/credentials or roll back |
| No issues after validation | Keep green; scale down blue when appropriate |
After green is stable, you can scale the blue Deployment to zero to save resources. To roll back later, scale blue up again and then switch the HTTPRoute back to blue.
kubectl scale deployment app-blue -n production --replicas=0
To bring blue back:
kubectl scale deployment app-blue -n production --replicas=2
kubectl patch httproute myapp -n production --type=json -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"app-blue"}]'
Same database: If blue and green share one database, every schema change must be backward compatible so both can run against the same schema. Use an expand-contract approach: (1) Expand: add new columns or tables without dropping existing ones; deploy code that supports both. (2) Migrate: backfill the new structure. (3) Contract: after traffic is on the new version, drop the old columns or tables in a later release. Avoid long-running or locking migrations during the switch; use a maintenance window or online migration tools.
Separate databases: If blue and green use separate managed DB clusters or logical databases, you can run breaking schema changes on the green DB and migrate data as needed. Rollback means switching traffic back to blue; blue’s database is unchanged.
Rollback and the database: On rollback you only switch traffic back to blue. With a shared database, blue’s code must still be compatible with the current schema. With separate DBs, no DB rollback is required for the app tier.
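One way to run the expand/migrate steps before cutover is a one-off Kubernetes Job that reuses the app image and the db-credentials Secret. This is a sketch: the Django-style migrate command mirrors the example scenario below and should be adjusted for your framework, and migrate-expand is a name chosen here:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-expand
  namespace: production
spec:
  backoffLimit: 0          # fail fast; inspect logs rather than retrying blindly
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: registry.digitalocean.com/your-registry/myapp:v1.1
        command: ["python", "manage.py", "migrate", "--noinput"]
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
```

Apply it with kubectl apply, then wait for completion with kubectl wait --for=condition=complete job/migrate-expand -n production before validating green.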
This section shows a technical scenario: moving from SSH-based deployment to blue-green on DOKS. Outputs are illustrative.
Assumptions: Blue is live; HTTPRoute points to app-blue. New image myapp:v1.1 is built and pushed.
1. Update green Deployment to new image:
kubectl set image deployment/app-green app=registry.digitalocean.com/your-registry/myapp:v1.1 -n production
deployment.apps/app-green image updated
2. Wait for green rollout:
kubectl rollout status deployment/app-green -n production --timeout=120s
deployment "app-green" successfully rolled out
3. Run migrations (if any, backward compatible):
kubectl exec -n production deployment/app-green -- python manage.py migrate --noinput
Operations to perform: ...
Running migrations: ...
OK
4. Verify green via Service (internal):
kubectl run -it --rm curl --image=curlimages/curl --restart=Never -n production -- curl -s http://app-green.production.svc.cluster.local/ | head -5
<!DOCTYPE html>
<html>
...
5. Cutover: Patch HTTPRoute to green.
kubectl patch httproute myapp -n production --type=json -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"app-green"}]'
httproute.gateway.networking.k8s.io/myapp patched
6. Confirm via external IP (replace with your Gateway ADDRESS):
curl -s http://159.89.123.45/ | head -3
Before cutover (blue): response reflects blue-v1.0. After cutover (green): response reflects green-v1.1. Cutover time is typically 2–3 seconds.
7. Rollback (if needed):
kubectl patch httproute myapp -n production --type=json -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"app-blue"}]'
Traffic returns to blue without redeploying.
| Symptom | Likely cause | Fix |
|---|---|---|
| Gateway has no ADDRESS or PROGRAMMED is False | No Gateway API controller or wrong gatewayClassName | Run kubectl get gatewayclass. Install controller (e.g. Cilium with gatewayAPI.enabled=true) or set Gateway spec.gatewayClassName to an existing class name. Check controller logs. |
| Traffic not switching after HTTPRoute patch | Wrong backend name or namespace | Confirm: kubectl get httproute myapp -n production -o jsonpath='{.spec.rules[0].backendRefs[0].name}' (expect app-green or app-blue). Ensure HTTPRoute and Services are in the same namespace. |
| Pods cannot connect to database | Public hostname, cluster not in Trusted sources, or wrong Secret | Use private hostname from Connection Details → Private network. Add cluster to Databases → Settings → Trusted sources. Verify Secret: kubectl get secret db-credentials -n production -o jsonpath='{.data.url}' | base64 -d. |
| Readiness probe failing; endpoints empty | Wrong probe path/port or app slow to start | Check pod logs: kubectl logs -n production -l version=green --tail=100. Use correct path or tcpSocket probe; increase initialDelaySeconds if needed. |
| 502 or 503 after cutover to green | Green pods not Ready or wrong backendRef | Check pods: kubectl get pods -n production -l version=green. Check endpoints: kubectl get endpoints app-green -n production. Confirm HTTPRoute backendRefs[0].name is app-green and port matches Service. |
Q: What is blue-green deployment on Kubernetes?
Blue-green means two full environments (blue and green), each with its own Deployment and Service. Only one receives live traffic. You deploy to the inactive environment, validate it, then switch traffic in one step. Rollback is the same step in reverse; no redeploy.
Q: Why use Gateway API instead of Ingress for blue-green on DOKS?
Gateway API separates the entry point (Gateway) from routing rules (HTTPRoute). You point the HTTPRoute at the blue or green Service and switch by updating that resource. Cutover and rollback are explicit and easy to automate.
Q: Can blue and green share the same DigitalOcean Managed Database?
Yes. Both Deployments can use the same connection string and connect over the VPC. Schema changes must be backward compatible so both versions can run against the same DB. For breaking changes, use separate databases for blue and green.
Q: How do I roll back after switching to green?
Patch the HTTPRoute back to the blue Service:
kubectl patch httproute myapp -n production --type=json -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/name","value":"app-blue"}]'
Traffic returns to blue immediately. No pod restarts or redeploys.
Q: Do I need two load balancers for blue-green?
No. One Gateway (one load balancer) is enough. The HTTPRoute decides which Service receives traffic. You only change the route’s backend, not the Gateway.
Q: How do I test the green environment before cutover?
Use the internal Service URL: http://app-green.production.svc.cluster.local from inside the cluster (e.g. a temporary pod with curl). Run smoke tests against that endpoint before changing the HTTPRoute to green.
Blue-green deployment on DigitalOcean Kubernetes with Gateway API gives you a clear path from two environments to one traffic switch and back. You avoid the downtime of SSH-based Droplet deploys and the mixed-version window of rolling updates, and you keep instant rollback by reverting the HTTPRoute. Combining DOKS with a managed database in the same VPC keeps data access private and operations simple.
Use the checklist before each cutover, validate green’s readiness and database connectivity, and treat rollback as a normal option, not an exception. Once this workflow is in place, releases become predictable and safe to run in production.