DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.
The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster's resource configuration file. You can manage load balancers using `kubectl` or from the control panel's Kubernetes page.
You can specify the following advanced settings in the `metadata` stanza of your configuration file under `annotations`.
The load balancer's algorithm determines how it distributes traffic across your nodes. There are two algorithms available:

- The default round robin algorithm sends requests to each available node in turn.
- The least connections algorithm sends requests to the node with the fewest active connections. This can be a better choice for traffic with longer sessions.
Use the `service.beta.kubernetes.io/do-loadbalancer-algorithm` annotation to explicitly define the load balancer's algorithm (either `round_robin` or `least_connections`); otherwise, the load balancer defaults to `round_robin`.
```yaml
. . .
metadata:
  name: least-connections-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "least_connections"
. . .
```
Sticky sessions send subsequent requests from the same client to the same node by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client's browser. This option is useful for application sessions that rely on connecting to the same node for each request.
When using sticky sessions, set `externalTrafficPolicy: Local` on the service to preserve the client source IP addresses when incoming traffic is forwarded to other nodes.
By default, the load balancer routes each client request to the backend nodes following the configured algorithm. Use the `service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type` annotation to explicitly enable (`cookies`) or disable (`none`) sticky sessions; otherwise, the load balancer defaults to disabling sticky sessions.
```yaml
metadata:
  name: sticky-session-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
```
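A fuller sketch that ties these settings together, a Service enabling cookie-based sticky sessions alongside `externalTrafficPolicy: Local`, might look like this (the service name and selector below are illustrative, not values from this guide):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: sticky-session-example        # illustrative name
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local        # preserve client source IPs
  selector:
    app: sticky-app-example           # illustrative selector
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
```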
Health checks verify that your nodes are online and meet any customized health criteria. The load balancer only forwards requests to nodes that pass health checks.
The load balancer performs health checks against a port on your service (defaults to the first node port on the worker nodes as defined in the service).
You can configure most health check settings in the `metadata` stanza's `annotations`:
```yaml
metadata:
  name: health-check-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "80"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-check-interval-seconds: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-response-timeout-seconds: "5"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-healthy-threshold: "5"
```
See full configuration examples for the health check annotations.
You can encrypt traffic to your Kubernetes cluster by using an SSL certificate with the load balancer. First create or upload the SSL certificate, then reference the certificate's ID in the load balancer's configuration file. You can obtain the IDs of uploaded SSL certificates using `doctl` or the API.
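For example, assuming `doctl` is installed and authenticated against your account, you can list uploaded certificates and their IDs (the `--format` columns shown are a subset of the available fields):

```
# The ID column holds the value to use in the
# do-loadbalancer-certificate-id annotation.
doctl compute certificate list --format ID,Name,Type
```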
The example below creates a load balancer using an SSL certificate.
```yaml
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .
```
See the full configuration example.
The SSL option redirects HTTP requests on port 80 to HTTPS on port 443. When you enable this option, HTTP URLs are forwarded to HTTPS with a 307 redirect. You must have at least one HTTP to HTTPS forwarding rule configured to force SSL connections.
The example below shows the configuration settings required for the redirect to work.
```yaml
. . .
  name: https-with-redirect-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
. . .
```
Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your nodes. The software running on the nodes must be properly configured to accept the connection information from the load balancer.
Use the `service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol` annotation to enable (`true`) or disable (`false`) the PROXY protocol. Defaults to `false`.
```yaml
---
. . .
metadata:
  name: proxy-protocol
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
. . .
```
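What "properly configured" means depends on the software on your nodes. As one hedged sketch, if your pods run nginx, accepting the PROXY protocol header might look like the following (the trusted source range and upstream address are illustrative assumptions, not values from this guide):

```nginx
server {
    # Accept the PROXY protocol header that the load balancer prepends.
    listen 80 proxy_protocol;

    # Trust PROXY protocol client information only from this source range
    # (illustrative CIDR; substitute the range your load balancer uses).
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        # Pass the original client address upstream (illustrative backend).
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_pass http://127.0.0.1:8080;
    }
}
```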
By default, DigitalOcean Load Balancers ignore the `Connection: keep-alive` header of HTTP responses from Droplets to load balancers and close the connection upon completion. When you enable backend keepalive, the load balancer honors the `Connection: keep-alive` header and keeps the connection open for reuse. This allows the load balancer to use fewer active TCP connections to send and receive HTTP requests between the load balancer and your target Droplets.
Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving web sites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.
The option applies to all forwarding rules where the target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, HTTPS passthrough, or HTTP/2 passthrough.
When enabled, the maximum number of connections between the load balancer and each server is limited to 10,000 divided by the number of target Droplets. For example, if you have 5 target Droplets, each one is limited to 2,000 connections.
Use the `service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive` annotation to enable (`true`) or disable (`false`) backend keepalive. Defaults to `false`.
```yaml
---
. . .
metadata:
  name: backend-keepalive
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "true"
. . .
```
Because of an existing limitation in upstream Kubernetes, pods cannot talk to other pods via the IP address of an external load balancer set up through a LoadBalancer-typed service.
As a workaround, you can set up a DNS record for a custom hostname (at a provider of your choice) and have it point to the external IP address of the load balancer. Then, instruct the service to return the custom hostname by specifying the hostname in the `service.beta.kubernetes.io/do-loadbalancer-hostname` annotation and retrieving the service's `status.Hostname` field afterwards.
The workflow for setting up the `service.beta.kubernetes.io/do-loadbalancer-hostname` annotation generally looks like the following example:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: hello
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "hello.example.com"
spec:
  type: LoadBalancer
  selector:
    app: my-app-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .
```
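Once the DNS record exists and the annotation is applied, you can check that the service reports the custom hostname. A sketch using the `hello` service name from the example above (the jsonpath reads the hostname from the service's load balancer status):

```
kubectl get service hello \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```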
For more about managing load balancers, see:

- What is Load Balancing? for a conceptual overview of load balancing.
- DigitalOcean Load Balancer overview for the features and limits of DigitalOcean Load Balancers.
- DigitalOcean Cloud Controller Load Balancer Service Annotations for more examples.
- How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes, a good example use case for DigitalOcean Load Balancers on Kubernetes. The Nginx Ingress LoadBalancer Service routes all load balancer traffic to nodes running Nginx Ingress Pods. Other nodes deliberately fail load balancer health checks so that the ingress traffic does not get routed to them.