DigitalOcean Load Balancers are a fully managed, highly available network load balancing service. Load balancers distribute traffic across groups of Droplets, which decouples the overall health of a backend service from the health of any single server and helps keep your services online.
After you create a load balancer and add Droplets to it, you can manage and modify it on its detail page.
First, click Networking in the main navigation, and then click Load Balancers to go to the load balancer index page. Click on an individual load balancer's name to go to its detail page, which has three tabs:
Droplets, where you can view the Droplets currently attached to the load balancer and modify the backend Droplet pool.
Graphs, where you can view graphs of traffic patterns and infrastructure health.
Settings, where you can set or customize the forwarding rules, balancing algorithm, sticky sessions, health checks, SSL forwarding, and PROXY protocol.
Load balancers automatically connect to Droplets that reside in the same VPC network as the load balancer.
To validate that private networking has been enabled on a Droplet from the control panel, click Droplets in the main nav, then click the Droplet you want to check from the list of Droplets.
From the Droplet's page, click Networking in the left menu. If the private network interface is enabled, the Private Network section populates with the Droplet's private IPv4 address and VPC network name. If the private network interface has not been enabled, a button to turn it on is displayed.
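You can also check this programmatically. The DigitalOcean API's `GET /v2/droplets/{id}` response lists each network interface with a `type` of `public` or `private`. The sketch below (with a made-up sample payload) shows how to pull the private IPv4 address out of that response:

```python
from typing import Optional

def private_ipv4(droplet: dict) -> Optional[str]:
    """Return the Droplet's private IPv4 address, or None if the
    private network interface is not enabled. `droplet` is the JSON
    object from the API's GET /v2/droplets/{id} response."""
    for net in droplet.get("networks", {}).get("v4", []):
        if net.get("type") == "private":
            return net.get("ip_address")
    return None

# Sample payload shaped like the API response (addresses are made up):
sample = {"networks": {"v4": [
    {"ip_address": "203.0.113.10", "type": "public"},
    {"ip_address": "10.116.0.2", "type": "private"},
]}}
print(private_ipv4(sample))  # → 10.116.0.2
```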
In the Droplets tab, you can view and modify the load balancer's backend Droplet pool.
This page displays information about the status of each Droplet, its downtime, and other health metrics. Clicking on a Droplet name will take you to the Droplet's detail page.
If you are managing backend Droplets by name, you can add more Droplets by clicking the Add Droplets button on this page. If you are managing by tag, you will instead see an Edit Tag button.
Click the Graphs tab to get a visual representation of traffic patterns and infrastructure health.
The Frontend section displays graphs related to requests to the load balancer itself:
The Droplets section displays graphs related to the backend Droplet pool:
Click the Settings tab to modify the way that the load balancer functions.
Forwarding rules define how traffic is routed from the load balancer to its backend Droplets. The left side of each rule defines the listening port and protocol on the load balancer itself, and the right side defines where and how requests are routed to the backends.
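As a concrete illustration, here is one forwarding rule in the shape the DigitalOcean API uses (the `certificate_id` value below is hypothetical): the `entry_*` fields describe the load balancer's listener, and the `target_*` fields describe how requests reach the Droplets.

```python
# A forwarding rule as represented in the DigitalOcean API
# (e.g. in a POST /v2/load_balancers request body).
rule = {
    "entry_protocol": "https",   # listener protocol on the load balancer
    "entry_port": 443,           # listener port on the load balancer
    "target_protocol": "http",   # protocol used toward backend Droplets
    "target_port": 80,           # port the Droplets respond on
    "certificate_id": "example-cert-id",  # hypothetical ID; TLS terminates at the LB
    "tls_passthrough": False,    # True would pass encrypted traffic straight through
}
print(rule["entry_protocol"], "->", rule["target_protocol"])  # → https -> http
```

This particular rule terminates SSL at the load balancer and forwards plain HTTP to the backends.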
The load balancer's algorithm determines how it distributes traffic across your Droplets. There are two algorithms available:
The default round robin algorithm sends requests to each available Droplet in turn.
The least connections algorithm sends requests to the Droplet with the fewest active connections. This can be a better choice for traffic with longer sessions.
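The two algorithms can be sketched in a few lines of Python (the Droplet names and connection counts are illustrative):

```python
import itertools

droplets = ["web-1", "web-2", "web-3"]

# Round robin: cycle through the pool in order.
rr = itertools.cycle(droplets)
print([next(rr) for _ in range(5)])
# → ['web-1', 'web-2', 'web-3', 'web-1', 'web-2']

# Least connections: pick the Droplet with the fewest active connections.
def least_connections(active: dict) -> str:
    return min(active, key=active.get)

print(least_connections({"web-1": 12, "web-2": 3, "web-3": 7}))  # → web-2
```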
Sticky sessions send subsequent requests from the same client to the same Droplet by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client's browser. This option is useful for application sessions that rely on connecting to the same Droplet for each request.
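A minimal sketch of the sticky-session idea, assuming a hypothetical cookie name and a 300-second TTL (both are configurable on a real load balancer): requests carrying a valid cookie go back to the same Droplet, and requests without one get a backend assigned and a fresh cookie.

```python
import random

COOKIE_NAME = "LB_STICKY"   # hypothetical cookie name; configurable in practice
COOKIE_TTL = 300            # TTL in seconds; configurable in practice

droplets = ["web-1", "web-2", "web-3"]

def route(cookies: dict, now: float):
    """Return (backend, cookies_to_set). Reuse the cookie's backend while
    the cookie is valid; otherwise pick a backend and issue a new cookie."""
    entry = cookies.get(COOKIE_NAME)
    if entry and entry["expires"] > now and entry["backend"] in droplets:
        return entry["backend"], {}
    backend = random.choice(droplets)
    return backend, {COOKIE_NAME: {"backend": backend,
                                   "expires": now + COOKIE_TTL}}

# The first request sets the cookie; later requests within the TTL stick.
backend, set_cookie = route({}, now=0.0)
same_backend, _ = route(set_cookie, now=100.0)
assert backend == same_backend
```

Once the TTL elapses, the cookie expires and the client may be routed to a different Droplet.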
Health checks verify that your Droplets are online and meet any customized health criteria. Load balancers will only forward requests to Droplets that pass health checks.
In the Target section, you choose the Protocol (HTTP, HTTPS, or TCP), Port (80 by default), and Path (/ by default) that Droplets should respond on.
In the Additional Settings section, you choose:
The success criterion for HTTP and HTTPS health checks is a response with a status code in the range 200–399. The success criterion for TCP health checks is completing a TCP handshake to connect.
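The success criteria above can be expressed as a small predicate (a sketch, not the load balancer's actual implementation):

```python
def health_check_passes(protocol: str, status_code: int = None,
                        tcp_handshake_ok: bool = False) -> bool:
    """Mirror the health-check success criteria: HTTP/HTTPS checks pass
    on a 200-399 status code; TCP checks pass if the handshake completes."""
    if protocol in ("http", "https"):
        return status_code is not None and 200 <= status_code <= 399
    if protocol == "tcp":
        return tcp_handshake_ok
    raise ValueError(f"unsupported health check protocol: {protocol}")

print(health_check_passes("http", status_code=301))  # → True (redirects count)
print(health_check_passes("http", status_code=500))  # → False
print(health_check_passes("tcp", tcp_handshake_ok=True))  # → True
```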
The SSL option redirects HTTP requests on port 80 to HTTPS on port 443. When you enable this option, HTTP URLs are forwarded to HTTPS with a 307 redirect. You must have at least one HTTP to HTTPS forwarding rule configured to force SSL connections.
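The redirect behavior amounts to rewriting the scheme and answering with a 307, which (unlike a 301 or 302) tells clients to preserve the request method and body. A minimal sketch:

```python
def redirect_to_https(url: str):
    """Sketch of forced-SSL behavior: an HTTP request is answered with
    a 307 redirect to the same URL over HTTPS."""
    if not url.startswith("http://"):
        raise ValueError("expected a plain-HTTP URL")
    return 307, "https://" + url[len("http://"):]

print(redirect_to_https("http://example.com/app?x=1"))
# → (307, 'https://example.com/app?x=1')
```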
Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your Droplets. The software running on the Droplets must be properly configured to accept the connection information from the load balancer.
Backend services need to accept PROXY protocol headers or the Droplets will fail the load balancer's health checks.
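For context on what backends must handle: with PROXY protocol version 1, the load balancer prepends a single human-readable header line to each connection before any application data. A sketch of parsing that line (the addresses below are illustrative):

```python
def parse_proxy_v1(line: bytes) -> dict:
    """Parse a PROXY protocol v1 header line, e.g.
    b'PROXY TCP4 203.0.113.5 10.116.0.2 56324 80\r\n'.
    Backends must consume this line before reading the real request."""
    parts = line.rstrip(b"\r\n").decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    return {
        "family": parts[1],        # TCP4 or TCP6
        "client_ip": parts[2],     # the original client's address
        "proxy_ip": parts[3],
        "client_port": int(parts[4]),
        "proxy_port": int(parts[5]),
    }

hdr = parse_proxy_v1(b"PROXY TCP4 203.0.113.5 10.116.0.2 56324 80\r\n")
print(hdr["client_ip"])  # → 203.0.113.5
```

In practice you would enable existing support in your server software (for example, `proxy_protocol` in nginx or HAProxy's `accept-proxy`) rather than parse the header yourself.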
By default, DigitalOcean Load Balancers ignore the Connection: keep-alive header of HTTP responses from Droplets to load balancers and close the connection upon completion. When you enable backend keepalive, the load balancer honors the Connection: keep-alive header and keeps the connection open for reuse. This allows the load balancer to use fewer active TCP connections to send and receive HTTP requests between the load balancer and your target Droplets.
Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving websites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.
The option applies to all forwarding rules where the target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, HTTPS passthrough, or HTTP/2 passthrough.
When enabled, the maximum number of connections between the load balancer and each backend is limited to 10,000 divided by the number of target Droplets. For example, if you have 5 target Droplets, each one is limited to 2,000 connections.
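The per-Droplet cap described above is simple integer division over the shared pool:

```python
def per_droplet_connection_limit(num_droplets: int, pool_total: int = 10_000) -> int:
    """With backend keepalive enabled, the connection cap per backend
    Droplet is the shared pool divided by the number of target Droplets."""
    if num_droplets < 1:
        raise ValueError("need at least one target Droplet")
    return pool_total // num_droplets

print(per_droplet_connection_limit(5))  # → 2000
print(per_droplet_connection_limit(3))  # → 3333
```

Note that adding Droplets to the pool lowers the per-Droplet cap, since the total pool stays fixed.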