An Introduction to DigitalOcean Load Balancers
DigitalOcean Load Balancers are a fully managed, highly available service that distributes traffic to pools of Droplets. Providing a stable interface with automatic failover, Load Balancers accept incoming traffic and divide it among the backend servers that handle the requests.
This lightens the burden placed on each Droplet and makes it easy to alter the backend without affecting the availability of your services. DigitalOcean Load Balancers simplify setting up scalable infrastructure and reacting to changing demands.
In this guide, we will explore what DigitalOcean Load Balancers are, what problems they solve, and how they work.
What Are DigitalOcean Load Balancers?
DigitalOcean Load Balancers let you distribute incoming traffic to backend Droplets. Traffic routing is controlled by configurable rules that specify the ports and protocols that the Load Balancer should listen on, as well as the way that it should select and forward requests to the backend servers.
A few things you should know about DigitalOcean Load Balancers:
- Price: $20 per month. No additional bandwidth charges apply.
- Regional availability: Load Balancers are available in every region.
- Supported protocols: HTTP, HTTPS, TCP
- Balancing algorithms: Round robin, least connections
- Backend management: Manual Droplet selection or tag-based management
- Backend membership requirements: All backends must be in a single region.
Load Balancers can be created and managed through the DigitalOcean Control Panel or using the DigitalOcean API.
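As an illustration, creating a Load Balancer through the API amounts to sending a JSON description of the desired configuration. The sketch below builds a minimal request body; the field names follow the public DigitalOcean API v2 (`POST /v2/load_balancers`), but the name, region, and tag values are placeholders for your own.

```python
import json

# Minimal request body for POST https://api.digitalocean.com/v2/load_balancers.
# Field names follow the public DigitalOcean API v2; the values are placeholders.
payload = {
    "name": "example-lb-01",
    "region": "nyc3",
    "forwarding_rules": [
        {
            "entry_protocol": "http",   # protocol/port the Load Balancer listens on
            "entry_port": 80,
            "target_protocol": "http",  # protocol/port used to reach the backends
            "target_port": 80,
        }
    ],
    "health_check": {
        "protocol": "http",
        "port": 80,
        "path": "/",
    },
    "tag": "web",  # or use "droplet_ids": [...] to select backends by name
}

body = json.dumps(payload)
print(body)
```

The same body can then be sent with any HTTP client, passing your API token in an `Authorization: Bearer` header.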
Why Would I Need a Load Balancer?
The generic term load balancer refers to a traffic directing component that accepts network requests and distributes them among a pool of interchangeable backend servers. The processing workload is shared among a group of machines rather than relying on a single server to handle every request.
The main benefits of placing services behind a load balancer are:
- Availability: Load balancing can help decouple service health from the health of a single machine. If an application or web server crashes on a single machine, the load balancer can direct traffic elsewhere until service has been restored. When the load balancer itself has a built-in failover mechanism, the chances of service interruption are further reduced.
- Performance: Dividing incoming traffic among a group of backend servers can help prevent any one machine from being overwhelmed by requests. Each backend server only receives a portion of incoming requests, meaning that more resources are available on each machine.
- Flexibility: Using a load balancer as a gateway gives you flexibility to change the backend infrastructure at will. This can help with anything from rolling out deployments seamlessly to large architecture redesigns. You can also easily scale your infrastructure by adjusting the number of backend servers.
DigitalOcean's Load Balancer service provides the above advantages in a fully managed environment. Users can modify the Load Balancer behavior to suit their needs without the burden of managing the operational complexities.
Load Balancer Features
Aside from the basic traffic directing functionality, DigitalOcean Load Balancers offer the following advantages.
A DigitalOcean Load Balancer monitors backend Droplets to ensure that each service is operating normally. Users can define health check endpoints and set the parameters around what constitutes a healthy response. The Load Balancer will automatically remove machines that fail health checks from rotation until those health checks indicate that service has been restored.
While this helps ensure the health of the backend pool, the Load Balancer itself must also be responsive to failures. DigitalOcean Load Balancers are configured with automatic failover in order to maintain availability even when failures occur at the balancing layer. Internally, the active balancing component is monitored and fails over to a standby if necessary. You can grow your infrastructure without introducing a new single point of failure.
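The remove-and-re-add behavior described above can be modeled with a small sketch. The thresholds below are illustrative placeholders, not DigitalOcean's actual defaults: a backend drops out of rotation after a run of consecutive failed checks and rejoins after a run of consecutive passes.

```python
# Sketch of health-check bookkeeping: a backend is removed after
# `unhealthy_threshold` consecutive failures and re-added after
# `healthy_threshold` consecutive successes. Thresholds are
# illustrative, not DigitalOcean's defaults.
class BackendHealth:
    def __init__(self, unhealthy_threshold=3, healthy_threshold=5):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.healthy = True
        self.failures = 0   # consecutive failed checks
        self.successes = 0  # consecutive passed checks

    def record(self, check_passed):
        if check_passed:
            self.successes += 1
            self.failures = 0
            if not self.healthy and self.successes >= self.healthy_threshold:
                self.healthy = True   # re-add to rotation
        else:
            self.failures += 1
            self.successes = 0
            if self.healthy and self.failures >= self.unhealthy_threshold:
                self.healthy = False  # remove from rotation
        return self.healthy

backend = BackendHealth()
for _ in range(3):
    backend.record(False)
print(backend.healthy)  # False: removed after 3 consecutive failures
```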
Flexible Multi-Protocol Routing
A single DigitalOcean Load Balancer can be configured to handle multiple protocols and ports.
Standard HTTP balancing directs requests based on standard HTTP mechanisms. The Load Balancer sets X-Forwarded-* headers, such as X-Forwarded-Port, to give the backends information about the original request. If user sessions depend on the client always connecting to the same backend, a cookie can be sent to the client to enable sticky sessions.
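Cookie-based sticky sessions can be sketched as follows. This models the general technique rather than DigitalOcean's implementation, and the cookie name and backend addresses are placeholders: a client with no cookie gets a backend from the balancing algorithm plus a cookie naming it, and later requests carrying the cookie return to that same backend.

```python
import random

# Placeholder backend pool; addresses are illustrative.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def route(cookies, cookie_name="LB"):
    """Pick a backend for a request, honoring a sticky-session cookie.

    `cookie_name` is an arbitrary placeholder, not DigitalOcean's
    actual cookie name.
    """
    backend = cookies.get(cookie_name)
    if backend not in BACKENDS:          # no cookie yet, or that backend is gone
        backend = random.choice(BACKENDS)
    # Echo the cookie back so the client stays pinned to this backend.
    return backend, {cookie_name: backend}

first_backend, set_cookie = route({})
repeat_backend, _ = route(set_cookie)
print(first_backend == repeat_backend)  # True: the cookie pins the client
```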
In addition to all of the HTTP balancing features, HTTPS balancing also provides encryption. DigitalOcean Load Balancers are able to handle HTTPS traffic in a few different configurations depending on your needs.
One option is to allow the Load Balancer to simply accept and forward HTTPS encrypted traffic to your backend Droplets. This is known as SSL passthrough and is a good option if you need end-to-end encryption and wish to spread the SSL overhead out among your various machines.
Load Balancers can also be used for SSL offloading, becoming what is known as an SSL termination point. The Load Balancer itself handles the SSL overhead, sending the decrypted requests to the backend machines over HTTP. In this configuration, users add their SSL certificate and private key to the Load Balancer itself. These secrets are placed in a secure, encrypted storage system and are not accessible to anyone, including DigitalOcean staff.
SSL certificates can be managed within your account by going to Settings > Security > TLS/SSL certificates.
Additionally, Load Balancers can be configured to redirect HTTP traffic on port 80 to HTTPS on port 443. This way, the Load Balancer can listen for traffic on both ports but redirect unencrypted traffic for better security.
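The redirect amounts to rewriting the request URL's scheme and answering with a permanent redirect; a minimal sketch of that behavior:

```python
from urllib.parse import urlsplit, urlunsplit

def https_redirect(url):
    """Return a (status, Location) pair for plain-HTTP requests,
    or None if the request is already HTTPS and should be forwarded."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        # Same host and path, but over HTTPS.
        return 301, urlunsplit(("https",) + tuple(parts[1:]))
    return None

print(https_redirect("http://example.com/login"))  # (301, 'https://example.com/login')
```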
Finally, TCP balancing is available for applications that do not speak HTTP. For example, deploying a Load Balancer in front of a database cluster like Galera would allow you to spread requests across all available machines.
There are two different ways to define backend Droplets for a Load Balancer. The first is to explicitly add the desired Droplets to the Load Balancer by name, using the Control Panel or API.
However, a more powerful way of managing backends is to select them by tag. Instead of selecting individual Droplets, a single tag is used as the selection criteria. The Load Balancer evaluates tags at runtime, meaning that whenever a tag is added to or removed from a Droplet, the Load Balancer will adjust the routing accordingly, without further configuration.
This makes it far simpler to scale the backend by adding or removing tags from Droplets. Rolling deployments can be configured by removing a tag from a Droplet, applying the update, and re-applying the tag. Once the Droplet passes its health check, the next server can be updated in the same way.
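Tag-based selection can be sketched as a filter evaluated per request, which is what makes the rolling-deployment pattern work. The Droplet names and tag below are placeholders:

```python
# Placeholder inventory of Droplets and their tags.
droplets = {
    "web-01": {"tags": {"web"}},
    "web-02": {"tags": {"web"}},
    "web-03": {"tags": {"web"}},
}

def backend_pool(droplets, selector_tag):
    """The pool is whatever Droplets currently carry the selector tag."""
    return sorted(name for name, d in droplets.items()
                  if selector_tag in d["tags"])

print(backend_pool(droplets, "web"))       # all three Droplets in rotation

# Rolling deployment: untag, update, re-tag one Droplet at a time.
droplets["web-01"]["tags"].discard("web")  # web-01 stops receiving traffic
print(backend_pool(droplets, "web"))       # only web-02 and web-03 remain
droplets["web-01"]["tags"].add("web")      # updated and back in rotation
```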
Managing Load Balancers in the DigitalOcean Control Panel
Load Balancers are available within the DigitalOcean Control Panel by clicking Networking in the top menu and then selecting Load Balancers:
We will give you a quick walk through of the basic interfaces below. Additional procedural guides are linked at the end of this article.
Creating a New Load Balancer
You can create a Load Balancer using the Create menu at any time or use the Create Load Balancer button on the Load Balancers overview page.
Either path opens the Load Balancer creation screen, where you can choose a name for the Load Balancer. Names can be composed of letters, digits, periods, and dashes. Once created, this name can be changed at any time by clicking on the existing name on the Load Balancer page.
The Droplet backends can be added during creation, or this can be deferred until later. Because Load Balancers and their backends are restricted to a single DigitalOcean region, choosing the region to deploy the Load Balancer in is the minimum required action for this step.
If you wish to pick the backends during creation, your first option is to select Droplets by name. Selecting your first Droplet will automatically select the appropriate region:
Alternatively, you can choose a tag and then filter by region. Any Droplets in the selected region with that tag will be added as a backend. Droplet tag changes are reflected immediately within the Load Balancer during operation:
The Load Balancer will connect to the backend over the private network if it is enabled on the Droplet in question when it is added to the Load Balancer. If private networking is disabled, the Load Balancer will contact the Droplet using its public IP address.
Next, define the Load Balancer forwarding rules. The left side of each rule defines the listening port and protocol on the Load Balancer itself, while the right side defines where and how the requests will be routed to the backends.
By default, a rule routing HTTP port 80 on the Load Balancer to HTTP port 80 on the backends is defined:
At least one rule is required to create the Load Balancer. If necessary, you can change the listening protocol and the backend protocol using the drop-down menus and change the port mapping using the two port fields. If you use HTTPS, you will additionally be asked to provide the certificate files or configure SSL passthrough. You can add any additional rules with the New Rule drop-down menu.
Clicking Edit Advanced Settings allows you to modify some additional parameters for the Load Balancer (the defaults work well for most cases):
The options are:
- Algorithm: The default round robin algorithm sends requests to each available backend in turn. The alternative least connections algorithm sends requests to the backend with the least number of active connections.
- Sticky sessions: If your application's sessions rely on connecting to the same backend for each request, sticky sessions can be enabled. This sets a cookie with a configurable name and TTL (to define how long the cookie is valid) so that the Load Balancer can send future requests to the same machine.
- Health checks: The Load Balancer will only forward requests to healthy backends. You can modify the criteria the Load Balancer uses to remove and re-add backends, as well as the endpoint it checks for a response.
- SSL Redirect: You can redirect HTTP requests on port 80 to HTTPS port 443 by enabling this option.
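The round robin and least connections algorithms described above can be modeled in a few lines. This is an illustrative sketch, not DigitalOcean's implementation; the backend names and connection counts are placeholders:

```python
import itertools

backends = ["web-01", "web-02", "web-03"]

# Round robin: cycle through the pool in order, wrapping around.
rr = itertools.cycle(backends)
def round_robin():
    return next(rr)

# Least connections: pick the backend with the fewest active connections.
active = {"web-01": 4, "web-02": 1, "web-03": 2}
def least_connections():
    return min(active, key=active.get)

print([round_robin() for _ in range(4)])  # cycles and wraps back to web-01
print(least_connections())                # web-02 has the fewest connections
```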
Once you've selected your settings, you can click Create Load Balancer to begin the creation process. The Load Balancer will take a few minutes to provision.
Afterwards, it will begin to check the health of the backend Droplets. Once the backends have passed the health check the required number of times, they will be marked healthy and the Load Balancer will begin forwarding requests to them.
Managing Existing Load Balancers
Existing Load Balancers can be managed by going to the Load Balancer index page. Click Networking on the top menu and then click Load Balancers.
All of your existing Load Balancers will be displayed:
Click on an individual Load Balancer name to view the Droplets currently attached to that Load Balancer:
Clicking on a Droplet name takes you to the Droplet's detail page. If you are managing backend Droplets by name, you can add additional Droplets by clicking the Add Droplets button. If you are managing by tag, you will instead have an Edit Tag button to change the selector tag.
Click the Graphs tab to get a visual representation of traffic patterns and infrastructure health:
The Frontend section holds graphs related to requests to the Load Balancer itself, while the Droplets section beneath it provides insight into the traffic that each Droplet handles.
Clicking Settings gives you the opportunity to modify the way that the Load Balancer functions:
You will be able to modify almost all of the settings you selected during the creation process. Additionally, you can delete the Load Balancer here if you no longer need it.
Managing SSL Certificates
When using the Load Balancer for SSL termination, the SSL certificate, private key, and certificate chain must all be uploaded to your DigitalOcean account. These secrets are placed in a secure, encrypted storage system and are not accessible to anyone, including DigitalOcean staff.
The necessary SSL files can be added during the Load Balancer creation process, or ahead of time by clicking on your user icon in the upper-right corner and selecting Settings. On the settings page, select Security from the left-hand menu:
In the TLS/SSL certificates section, you can see your existing certificates, along with the name you gave each certificate and its SHA-1 fingerprint. To compare a certificate's fingerprint with the value in the Control Panel, type the following command on the machine where the certificate is located:
- openssl x509 -noout -sha1 -fingerprint -in certificate_file.pem
To add a new certificate ahead of time, click Add Certificate. You will be prompted to choose a name and then enter the certificate, private key, and certificate chain to continue. These files must be entered in PEM format to be accepted:
To delete a certificate, click More and then Delete from the certificate list:
The certificate will be removed from your account.
Where to Go From Here
You should now have a general idea about what DigitalOcean Load Balancers are and how to use them. For more specific information about how to integrate Load Balancers with your infrastructure, check out the guides below: