How to Configure Advanced Load Balancer Settings in Kubernetes Clusters

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.

The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster’s resource configuration file.

Warning
In the DigitalOcean Control Panel, cluster resources (worker nodes, load balancers, and block storage volumes) are listed outside the Kubernetes page. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel's Kubernetes page.

You can specify the following advanced settings in the metadata stanza of your configuration file under annotations:

  • Algorithm
  • Sticky sessions
  • Health checks
  • SSL Certificates
  • Forced SSL connections
  • PROXY Protocol

Algorithm

The load balancer's algorithm determines how it distributes traffic across your nodes. There are two algorithms available:

  • The default round robin algorithm sends requests to each available node in turn.

  • The least connections algorithm sends requests to the node with the fewest active connections. This can be a better choice for traffic with longer-lived sessions.

Use the do-loadbalancer-algorithm annotation to explicitly set the load balancer's algorithm to either round_robin or least_connections. If you omit the annotation, the load balancer defaults to round_robin.

. . .
metadata:
  name: least-connections-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "least_connections"
. . .

See a full configuration example for least connections.
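
For orientation, a complete service manifest using least connections might look like the following sketch. The nginx-example selector and the ports are illustrative placeholders, not values prescribed by this guide:

---
kind: Service
apiVersion: v1
metadata:
  name: least-connections-example
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "least_connections"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80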

Sticky Sessions

Sticky sessions send subsequent requests from the same client to the same node by setting a cookie with a configurable name and TTL (Time-To-Live). The TTL parameter defines, in seconds, how long the cookie remains valid in the client's browser. This option is useful for applications whose sessions depend on connecting to the same node for each request.

  • Sticky sessions route requests consistently to the same nodes, not pods, so you should avoid having more than one pod per node serving requests.
  • Sticky sessions require your service to set externalTrafficPolicy: Local to preserve the client source IP addresses when incoming traffic is forwarded to other nodes (see the sketch after the snippet below).

By default, the load balancer routes each client request to the backend nodes according to the configured algorithm. Use the do-loadbalancer-sticky-sessions-type annotation to explicitly enable (cookies) or disable (none) sticky sessions. If you omit the annotation, the load balancer defaults to disabling sticky sessions.

metadata:
  name: sticky-session-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"

See a full configuration example for sticky sessions.
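
Putting the annotations together with the required traffic policy, a minimal sticky sessions manifest might look like the sketch below. The externalTrafficPolicy: Local line is the setting mentioned above; the selector and ports are illustrative:

---
kind: Service
apiVersion: v1
metadata:
  name: sticky-sessions-example
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80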

Health Checks

Health checks verify that your nodes are online and meet any customized health criteria. The load balancer forwards requests only to nodes that pass health checks.

The load balancer performs health checks against a port on your service. By default, this is the first node port on the worker nodes, as defined in the service.

You can configure most health check settings in the metadata stanza's annotations section.

metadata:
  name: health-check-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "80"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-check-interval-seconds: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-response-timeout-seconds: "5"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-healthy-threshold: "5"

See full configuration examples for the health check annotations.
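
After the load balancer reconciles, you can spot-check the applied health check settings from the command line with doctl; the exact output columns may vary by doctl version:

# List load balancers and inspect the health check values in the output.
doctl compute load-balancer list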

SSL Certificates

You can encrypt traffic to your Kubernetes cluster by using an SSL certificate with the load balancer. You must first create the SSL certificate with DigitalOcean or upload an existing one, then reference the certificate's ID in the load balancer's configuration file. You can obtain the IDs of uploaded SSL certificates using doctl or the API.
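
For example, if you manage certificates with doctl, you can list them and copy the ID column into the do-loadbalancer-certificate-id annotation:

# Print the certificates in your account, including their IDs.
doctl compute certificate list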

The example below creates a load balancer using an SSL certificate.

---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .

See the full configuration example.
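
After you deploy the manifest, the load balancer takes a few minutes to provision. One way to watch for the external IP, assuming the https-with-cert service name from the example above:

# The EXTERNAL-IP column shows <pending> until provisioning completes.
kubectl get service https-with-cert --watch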

Forced SSL Connections

Forcing SSL connections redirects HTTP requests on port 80 to HTTPS on port 443. When you enable this option, the load balancer forwards HTTP URLs to HTTPS with a 307 redirect. You must have at least one HTTP to HTTPS forwarding rule configured to force SSL connections.

The example below shows the annotations that must be set for the redirect to work.

. . .
  name: https-with-redirect-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
. . .

See the full configuration example for forced SSL connections.
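
To confirm the redirect is working, you can request the plain HTTP endpoint and check the status line. The address below is a placeholder for your load balancer's IP or hostname:

# Expect a 307 response with a Location header pointing at the HTTPS URL.
curl -I http://your-load-balancer-address/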

PROXY Protocol

Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your nodes. The software running on the nodes must be properly configured to accept the connection information from the load balancer; see the sketch after the snippet below for one example.

Options are true or false. Defaults to false.

---
. . .
metadata:
  name: proxy-protocol
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
. . .

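What "properly configured" means depends on the software receiving the traffic. As one hedged example, if the load balancer fronts the NGINX Ingress Controller, the controller must also be told to expect the PROXY protocol through its ConfigMap. The ConfigMap name and namespace below follow a typical ingress-nginx installation and may differ in your cluster:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"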

Backend Keepalive

By default, DigitalOcean Load Balancers ignore the Connection: keep-alive header of HTTP responses from Droplets to load balancers and close the connection upon completion. When you enable backend keepalive, the load balancer honors the Connection: keep-alive header and keeps the connection open for reuse. This allows the load balancer to use fewer active TCP connections when sending and receiving HTTP traffic between the load balancer and your target Droplets.

Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving web sites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.

The option applies to all forwarding rules where the target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, HTTPS passthrough, or HTTP/2 passthrough.

When enabled, the maximum number of connections between the load balancer and each server is limited to 10,000 divided by the number of target Droplets. For example, if you have five target Droplets, each one is limited to 2,000 connections.

Options are true or false. Defaults to false.

---
. . .
metadata:
  name: backend-keepalive
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "true"
. . .

Accessing by Hostname

Because of an existing limitation in upstream Kubernetes, pods cannot talk to other pods via the IP address of an external load balancer set up through a LoadBalancer-type service.

As a workaround, you can set up a DNS record for a custom hostname (at a provider of your choice) and have it point to the external IP address of the load balancer. Then, instruct the service to return the custom hostname by specifying the hostname in the service.beta.kubernetes.io/do-loadbalancer-hostname annotation and retrieving the service's status.Hostname field afterwards.

The workflow for setting up the service.beta.kubernetes.io/do-loadbalancer-hostname annotation is generally:

  1. Deploy the manifest with your service (example below).
  2. Wait for the service's external IP to become available.
  3. Add an A or AAAA DNS record for your hostname pointing to the external IP.
  4. Add the hostname annotation to your manifest (example below). Deploy it.
kind: Service
apiVersion: v1
metadata:
  name: hello
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "hello.example.com"
spec:
  type: LoadBalancer
  selector:
    app: my-app-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .

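With the manifest deployed, here is a hedged sketch of the remaining workflow steps, using the hello service name from the example above; hello-service.yaml stands in for wherever you saved the manifest:

# 1. Wait for the external IP to appear.
kubectl get service hello --watch

# 2. After creating the DNS record and adding the hostname annotation,
#    redeploy and read the hostname back from the service status.
kubectl apply -f hello-service.yaml
kubectl get service hello -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'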

References

For more about managing load balancers, see: