How To Create Your First DigitalOcean Load Balancer

Posted February 14, 2017 · DigitalOcean · Load Balancing · Ubuntu · Ubuntu 16.04

Introduction

Load balancers distribute traffic among multiple backend servers. If one of those servers goes down, a load balancer will redirect traffic to the others, ensuring that your services remain available. A load balancer also allows you to add resources to handle a temporary traffic spike or a more sustained increase in demand. In addition, placing a load balancer between visitors and the backend allows you to make changes to the backend without exposing your visitors to those changes.

In this tutorial, we'll demonstrate how to create a DigitalOcean Load Balancer using the web interface and the default forwarding rules to distribute unencrypted web traffic between three backend web servers.

Prerequisites

In this tutorial, we will use:

Three Ubuntu 16.04 Droplets in a single data center, each with a sudo user and basic firewall set up using the Initial Server Setup with Ubuntu 16.04 guide.  We've created our Droplets in the SFO1 data center and called them web-01, web-02, and web-03.

Important: Droplets and DigitalOcean Load Balancers MUST reside in the same data center.

Once the Droplets are in place, you're ready to follow along.

Step 1 — Setting up the Backend Servers

We'll begin by installing Nginx and creating the test content we need to watch the Load Balancer work.

On Each Droplet

On each of your Droplets, refresh the apt package index and then install the Nginx web server by typing:

  • sudo apt-get update
  • sudo apt-get install nginx

Once the installation completes, we'll allow HTTP traffic through the UFW firewall:

  • sudo ufw allow 'Nginx HTTP'
Output
Rule added
Rule added (v6)

To confirm that the rule was added as expected, we'll check the status:

  • sudo ufw status
Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx HTTP                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

To ensure a consistent visitor experience, the backends should generally be identical, but we’ll create a unique web page on each Droplet to demonstrate the distributed request handling.

In each server's document root we'll create a file called lb.html. For its content, we'll use the hostname so that we can see which server is handling a request. If your document root is located elsewhere, substitute its path for the default location, /var/www/html, in the paths below:
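
If you'd rather not edit each file by hand, one alternative is to generate the page directly from the Droplet's hostname. This is a minimal sketch that assumes the default document root and skips the color styling used in the manual steps below:

  • echo "<h1>$(hostname)</h1>" | sudo tee /var/www/html/lb.html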

Add a page to web-01

  • sudo nano /var/www/html/lb.html
/var/www/html/lb.html on web-01
<h1 style="color:blue">web-01</h1>

Save and exit the file. Visit the page in your web browser using the Droplet's IP address to be sure it's loading as expected.

Screencap of web-01 Test Page
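
If you prefer the command line, you can fetch the same page with curl from your local machine, substituting your Droplet's IP address for the droplet_ip placeholder:

  • curl http://droplet_ip/lb.html
Output
<h1 style="color:blue">web-01</h1>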

Repeat on web-02

  • sudo nano /var/www/html/lb.html
/var/www/html/lb.html on web-02
<h1 style="color: orange">web-02</h1>

Again, visit the page at the Droplet's IP address to be sure it loads as expected:

Screencap of web-02 Test Page

Repeat on web-03

  • sudo nano /var/www/html/lb.html
/var/www/html/lb.html on web-03
<h1 style="color: green">web-03</h1>

Screencap of web-03 Test Page

Once we've added a page to each server, we'll create the Load Balancer.

Step 2 — Creating the Load Balancer

We’ll navigate to the Load Balancers page by selecting "Networking" from the top navigation, then clicking "Load Balancers".

navigate

This will take us to the main page, where we can create our first DigitalOcean Load Balancer. As we'll see in a moment, once we have at least one Load Balancer, this page will serve as the overview of all the Load Balancers we have created.

create-lb-button1

Next, we'll click the "Create Load Balancer" button, which takes us to the creation page. Every choice we need to make for this tutorial is directly visible on the creation page.

create-lb-empty-screen

We're going to:

  • Name the Load Balancer
  • Add our Droplets
  • Review the remaining defaults
  • Create the Load Balancer

Note that the "Create Load Balancer" button will not be available until we have filled in the "Name" field and selected the “Region”.

Let's take a look at each of these steps:

Choose a Name

We'll start by giving our Load Balancer an identifying name. As with Droplets, a name is required and may only contain alphanumeric characters, dashes, and periods.

We'll call ours test-balance. This name can be changed on the Load Balancer's detail page later, once it's been created.

name-lb

Add Droplets

Next, we'll add the first two of our Droplets and save the third to illustrate how to add additional Droplets to an existing Load Balancer. We'll start typing the beginning of our hostnames, web, which will bring up a list of tags and Droplets that contain those letters. We're going to add our Droplets by name.

We'll start by selecting web-01.

add-droplets

Now that we've added the first Droplet:

  • the region is automatically filled with that Droplet's region
  • the select list no longer includes tags
  • the choices will be narrowed to the Droplets in the same data center, and
  • our Load Balancer will be created in that region

To add web-02 we'll type the beginning letters again, then select it. We’ll save web-03 for later to learn how to add Droplets to existing Load Balancers:

choose-droplets

Note: It's not necessary to add any tags or Droplets at this time. We can create an empty Load Balancer, which can be useful when it comes to setting up DNS or automating infrastructure setup.

If we don't choose any Droplets or Tags, however, we will need to select a Region. The Region determines where the Load Balancer will reside and which Droplets will be available to it.

Review the Remaining Defaults

We're going to leave the rest of the settings as-is when we create our Load Balancer, but before we do, let's take a brief look at what they mean.

Default Forwarding Rules

Forwarding rules define what kind of incoming traffic is directed where. The default rule takes incoming HTTP requests on port 80 of test-balance and forwards them as HTTP requests to port 80 on the Droplets we've added. Additional rules can be added later. Any requests sent to the Load Balancer on ports that aren't covered by a forwarding rule will be denied.

Default Advanced Settings

  • Algorithm: Round Robin
    This determines how traffic is routed. Round robin means that each incoming request is forwarded to the next backend on the list, in contrast to “Least Connections”, which forwards requests to the backend with the fewest active connections between the Load Balancer and the backend.

  • Sticky Sessions: Off
    Sticky sessions use cookies to create an affinity between a user and a particular load-balanced server. When the Load Balancer receives a request, it checks for the cookie and, if it finds one, sends the request to the server specified in the cookie. Sticky sessions can be useful if the backends cannot be completely stateless.

  • Health Checks: http://0.0.0.0:80/
    Health Checks ensure that Droplets are available. By default, the Load Balancer tests each endpoint every 10 seconds. The health check for our forwarding rule will send an HTTP request to each Droplet’s web server on port 80, and if a server fails to respond three times in a row, it will be removed from rotation. The Load Balancer will continue checking the server, and once it has received five consecutive successful responses, the server will be returned to the pool. You can approximate this check by hand, as shown after this list.

  • SSL: No Redirect
    In this tutorial we'll be working with HTTP requests only. Once SSL has been configured in the "Advanced settings", we can choose to have our Load Balancer redirect all incoming HTTP requests to HTTPS.
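
To get a rough sense of what the health check sees, you can request the same path directly on one of the Droplets. This is only an approximation of the Load Balancer's own check, and it assumes Nginx is listening on port 80 as configured earlier:

  • curl -I http://localhost/

A response beginning with HTTP/1.1 200 OK indicates the Droplet would pass the check.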

Create the Load Balancer

Now that the Name and Region have been filled in, along with our selected Droplets, we will click "Create Load Balancer".

lb-final-screen

It takes a couple of minutes for the Load Balancer to be created.

Step 3 — Testing the Balancing

Once the Load Balancer is created, its IP address will appear automatically on the "Load Balancers" overview page:

lb-IP

The IP address will also appear on the detail page of the specific Load Balancer. As with Droplets, we can change the name of the Load Balancer on its detail page by clicking it and typing a new one.

lb-IP-detialpage

Now that we have the IP address, we can watch the load balancer in action.

We'll start by visiting that IP address in a web browser:

http://203.0.113.25/lb.html

One of the two backends should load.

web-01-via-load-balancer

When we refresh the page, we should see the load switch to the next backend.

web-02-via-load-balancer
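
You can also watch the rotation from a terminal. A short loop of curl requests against the Load Balancer (substitute your Load Balancer's IP address for the load_balancer_ip placeholder below) should return the two test pages in turn:

  • for i in 1 2 3 4; do curl -s http://load_balancer_ip/lb.html; done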

This alternation of content confirms that we're balancing traffic between our backend servers. Next, we'll take a look at what happens when one of the backend servers fails.

Step 4 — Testing Failover

To test failover, we're going to stop Nginx on web-01. This will cause the health check to fail, web-01 will be automatically removed from the pool, and when we reload the page, we should see web-02 serving our request.

We'll start by stopping the web server:

  • sudo systemctl stop nginx

Since systemctl doesn't provide output, let's make sure the server isn't running:

  • sudo systemctl status nginx

If we see something like the following at the end of the status output, we know it's down:

Output of systemctl status nginx
Feb 02 20:44:40 web-01 systemd[1]: Stopping A high performance web server and a reverse proxy server...
Feb 02 20:44:40 web-01 systemd[1]: Stopped A high performance web server and a reverse proxy server.

Once Nginx stops, the health checks will fail. By default, they run every 10 seconds. After three failures in a row, the Load Balancer overview page will register an issue:

lb-with-issue

Note: The frequency and number of Health Checks can be customized in the Advanced Settings.

The Load Balancer's detail page will provide more detail, showing that web-01 is "Down":

droplet-1-down

This leaves us with just web-02, so when we reload the test page, we should receive web-02's content:

web-02-via-load-balancer

Finally, we'll bring web-01 back up by starting Nginx again. After five successful health checks, it will be added back into the backend pool.
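
On web-01, start the web server:

  • sudo systemctl start nginx

Given the default 10-second check interval, it should take roughly a minute for the five successful checks to accumulate and for web-01 to rejoin the rotation.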

Step 5 — Adding More Droplets

To add more Droplets to an existing Load Balancer, we click the "Add Droplets" button. The pop-up dialogue will focus our choice on the Droplets available in the Load Balancer's data center. We'll select our third Droplet, web-03.

add-droplet-3

Again, the Droplet will not be added to the backend pool until it passes five successful health checks.

droplet-3-down

Once it has been verified as healthy, it will be added to the backend pool and all three servers will be in rotation. We can reload the site in our browser. After a couple of refreshes, we should see our last Droplet, web-03, serving its test content.

web-03-via-load-balancer

Conclusion

In this tutorial, we learned how to use the web-based interface to create a DigitalOcean Load Balancer using the default settings to balance unencrypted web traffic.

Once your Load Balancer is configured, you can follow the guide on setting up a domain name with DigitalOcean to create an A record for your domain or consult the documentation of your DNS service. When creating your record, be sure to use the Load Balancer IP address, and not the IP of one of the specific Droplets.
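
Once the A record is in place and has propagated, you can confirm that your domain resolves to the Load Balancer rather than an individual Droplet. In the example below, your_domain.com is a placeholder for your own domain:

  • dig +short your_domain.com

The output should be the Load Balancer's IP address from the overview page, not the IP of web-01, web-02, or web-03.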
