August 27, 2012

Beginner

How To Set Up Nginx Load Balancing

About Load Balancing


Load balancing is a useful mechanism for distributing incoming traffic across several capable virtual private servers. By apportioning the processing across several machines, redundancy is provided to the application -- ensuring fault tolerance and heightened stability. The Round Robin algorithm for load balancing sends visitors to one of a set of IPs. At its most basic level, Round Robin, which is fairly easy to implement, distributes server load without considering more nuanced factors like server response time or the visitors' geographic region.
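At its simplest, the rotation can be sketched in a few lines of Python (the backend names here are purely illustrative):

```python
from itertools import cycle

# Illustrative backend pool; substitute your own droplets' hostnames.
servers = ["backend1.example.com", "backend2.example.com", "backend3.example.com"]

# Round Robin: hand out servers in a fixed rotation, one per request,
# with no regard for load, latency, or geography.
rotation = cycle(servers)

first_six = [next(rotation) for _ in range(6)]
# Each server receives every third request.
```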

Setup


The steps in this tutorial require root privileges on your VPS. You can see how to set that up in the Users Tutorial.

Prior to setting up nginx load balancing, you should have nginx installed on your VPS. You can install it quickly with apt-get:
sudo apt-get install nginx

Upstream Module


In order to set up a round robin load balancer, we will need to use the nginx upstream module. We will incorporate the configuration into the nginx settings.

Go ahead and open up your website’s configuration (in my examples I will just work off of the generic default virtual host):
nano /etc/nginx/sites-available/default

We need to add the load balancing configuration to the file.

First we need to include the upstream module, which looks like this (the upstream block belongs in the http context, so place it outside of any server block):
upstream backend  {
  server backend1.example.com;
  server backend2.example.com;
  server backend3.example.com;
}

We should then reference the module further on in the configuration:
server {
  location / {
    proxy_pass http://backend;
  }
}
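Nginx will happily proxy with just the proxy_pass line, but the backends will then see the proxy's address rather than the visitor's. If your application needs the original host and client IP, a common (optional) refinement is to forward them explicitly with proxy_set_header; a sketch using the standard nginx variables:

```
server {
  location / {
    proxy_pass http://backend;
    # Forward the original host and client address to the backends.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```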

Check the configuration for syntax errors, then restart nginx:
sudo nginx -t
sudo service nginx restart

As long as all of the virtual private servers are in place, you should now find that the load balancer distributes visitors across the linked servers equally.

Directives


The previous section covered how to equally distribute load across several virtual servers. However, there are many reasons why this may not be the most efficient way to work with data. There are several directives that we can use to direct site visitors more effectively.

Weight


One way to allocate traffic to servers with more precision is to assign a specific weight to certain machines. Nginx allows us to attach a number to each server specifying the proportion of traffic that should be directed to it.

A load balanced setup that included server weight could look like this:
upstream backend  {
  server backend1.example.com weight=1;
  server backend2.example.com weight=2;
  server backend3.example.com weight=4;
}
The default weight is 1. With a weight of 2, backend2.example.com will be sent twice as much traffic as backend1; backend3, with a weight of 4, will handle twice as much traffic as backend2 and four times as much as backend1.
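The effect of the weights can be sketched in Python. Note this is a simplification: nginx interleaves weighted requests smoothly rather than sending them in runs, but the proportions over time are the same.

```python
from itertools import cycle

# Weights matching the nginx example above (illustrative hostnames).
weights = {
    "backend1.example.com": 1,
    "backend2.example.com": 2,
    "backend3.example.com": 4,
}

# One full rotation contains each server `weight` times: 7 slots in total.
rotation_list = [srv for srv, w in weights.items() for _ in range(w)]
rotation = cycle(rotation_list)

# Over any 7 consecutive requests, backend3 handles 4, backend2 handles 2,
# and backend1 handles 1.
sample = [next(rotation) for _ in range(7)]
```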

Hash


IP hash allows servers to respond to clients according to their IP address, sending visitors back to the same VPS each time they visit (unless that server is down). If a server is known to be inactive, it should be marked as down. All IPs that were supposed to be routed to the down server are then directed to an alternate one.

The configuration below provides an example:
upstream backend {
  ip_hash;
  server   backend1.example.com;
  server   backend2.example.com;
  server   backend3.example.com  down;
}
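The idea behind ip_hash can be sketched in Python. This is a simplified stand-in -- nginx's real algorithm differs in detail (for IPv4 it hashes only the first three octets of the address) -- but it shows why a given visitor keeps landing on the same server:

```python
import hashlib

# Pool with backend3 marked down, matching the configuration above.
servers = ["backend1.example.com", "backend2.example.com"]

def pick_server(client_ip, pool):
    """Hash the client address onto a fixed index in the pool, so the
    same visitor is routed to the same server while the pool is stable."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

# Repeat visits from one address always hit the same backend.
chosen = pick_server("203.0.113.7", servers)
```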

Max Fails


According to the default round robin settings, nginx will continue to send data to the virtual private servers even if they are not responding. Max fails can automatically prevent this by marking unresponsive servers as inoperative for a set amount of time. There are two factors associated with max fails: max_fails and fail_timeout.

max_fails sets the maximum number of failed attempts to connect to a server that may occur before it is considered inactive.

fail_timeout specifies the length of time that the server is considered inoperative. Once the timeout expires, new attempts to reach the server will start up again. The default value is 10 seconds.

A sample configuration might look like this:
upstream backend  {
  server backend1.example.com max_fails=3 fail_timeout=15s;
  server backend2.example.com weight=2;
  server backend3.example.com weight=4;
}
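The bookkeeping behind max_fails and fail_timeout can be sketched in Python (this illustrates the idea, not nginx's internal implementation):

```python
import time

class BackendHealth:
    """After max_fails failures, skip the server for fail_timeout seconds."""

    def __init__(self, max_fails=3, fail_timeout=15.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.fails += 1
        if self.fails >= self.max_fails:
            # Too many failures: take the server out of rotation.
            self.down_until = now + self.fail_timeout

    def record_success(self):
        self.fails = 0

    def is_available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until

backend = BackendHealth(max_fails=3, fail_timeout=15.0)
for _ in range(3):
    backend.record_failure(now=0.0)
# The server is skipped until the timeout expires, then retried.
```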

See More


This has been a short overview of simple Round Robin load balancing. Beyond load balancing, there are other ways to speed up and optimize a server.





By Etel Sverdlov

22 Comments

  • marwan 12 months

    Do I have to copy the same website files to the different backend servers? If yes, do i have to ask all the apache servers read from the same mysql server?

  • Peter 12 months

    it seems to only cover http:// proxy_pass http://backend; how about dealing with https?

  • Peter 12 months

    @marwan, this article is for nginx and not apache - to answer your questions: yes, you do have to keep the same set of files on each of the servers, and yes, a database-only server sitting behind them

  • Peter 12 months

    Self answer To deal with https that should probably look something like: proxy_pass $scheme://backend; Not tested.

  • [email protected] 10 months

    How do I manage user sessions across all servers?

  • contato 10 months

    anil.virtuali: memcache sessions.

  • asep.gelo 10 months

    contato: redis as a session keeper to get more persistent.

  • asep.gelo 10 months

    marwan: you can use NFS to keep them sync.

  • Justin Klemm 7 months

    So simple, so useful. Thank you! For those asking about sessions, storing them in a central location is ideal: mysql, mongodb, memcache, etc. If (for whatever reason) you need to store them locally, I suspect the "ip_hash" directive mentioned above should keep them working (as it keeps your visitors tied to one machine).

  • Andrei Soare 5 months

    How do we make sure the load balancer never fails? For example in case of an outage on DigitalOcean, or any problem with the droplet nginx runs on. For me the main use case of a load balancer is to avoid downtime when a droplet with my web server fails. More droplets, more redundancy. But now nginx will be the single point of failure, am I wrong?

  • Kamal Nasser 5 months

    @Andrei: Since we do not support floating IPs, that is correct. However you can make use of the round robin DNS feature so that if one load balancer fails, visitors will get redirected to the one that is still up.

  • kev 4 months

    @Kamal: I think if one of the servers went down RR DNS would still route traffic to it, so half of the http/s requests would still fail.

  • John Conway 3 months

    Given that the load balancer will generally be a smaller droplet than the application servers, will it inherit their transfer allowance?

  • hebert.luke 3 months

    proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Queue-Start "t=${msec}000"; These should be added so that all of the data is sent forward for proxy_pass

  • Kamal Nasser 3 months

    @John: Unfortunately, no. We do not support bandwidth pooling.

  • John Conway 3 months

    @Kamal, too bad. I know you know this already, but this is a reminder that DigitalOcean really needs a load balancer droplet type.

  • Kamal Nasser 3 months

    @John: Load balancers are on the roadmap :) You can follow the progress on that here: http://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/2670745-load-balancer There's no ETA currently, though.

  • ollie 3 months

    This tutorial was far easier than I thought! Best feeling in the world when I refreshed and I saw which droplet was serving my page. Thanks!

  • diegodacm 3 months

    I have followed every step of this tutorial but hit the error "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete. This problem can sometimes be caused by disabling or refusing to accept cookies." In my server block, under the location section: location / { try_files $uri $uri/ /index.php?q=$uri&$args; }. Should I remove this line? I have tried both, but the result is the same. My server configuration is nginx, php5-fpm, mysql, wordpress and varnish. I also tried stopping varnish, but no change.

  • Kamal Nasser 3 months

    @diegodacm: That line shouldn't cause that. Do you have any redirect/rewrite rules? Can you pastebin all of your virtualhosts?

  • CharlesVien 2 months

    Excellent tutorial.

  • caherrerapa 2 months

    how big has to be this machine? or how to size it?
