Nginx as load balancer in front of ... nginx?

Posted June 8, 2016 13.6k views
Nginx · Load Balancing

Hi people,
I’ve been studying nginx a lot, having heard a lot about its speed. Now I’d like to test it as a load balancer, but does it make sense to have an nginx server as a load balancer in front of another nginx acting as a web server? I can’t find anything about this practice, so I imagine it may not be a good idea… anyway, if it is possible, and in case it’s a good idea, what would the configuration look like to first check itself and then fall through to another nginx location?


5 answers


NGINX can definitely be used as a load balancer – in fact, one of its primary roles is functioning as a reverse caching proxy / load balancer. I would, however, recommend using more than two servers: a load-balanced configuration isn’t going to be very effective if the load balancer simply points to a single web server.

If you’re simply looking to test the waters, I’d recommend a total of 4 Droplets and 2 Floating IPs. You can use 512MB Droplets and set up 2x Load Balancers and 2x Web Servers. I’d recommend starting with the web servers first since you’ll need their hostnames to properly configure your load balancers.

Sidenote: This is how I’d set up a test configuration. You can simply deploy 2x Droplets and omit the Floating IPs entirely. I’m just providing a way to set up a test environment that could also serve as a small-scale production environment.

To get started, deploy a single Droplet with the OS of your choice (I’d recommend Ubuntu 16.x 64-bit – many of the tutorials use Ubuntu so this will make it easier to follow along).

While in the DigitalOcean Control Panel:

  • Click on Networking (located in the top navigation bar)
  • The first option you’ll see on this page is the option to assign a Floating IP. From the drop-down menu, select the newly created Droplet and assign it a Floating IP.

Now, to set up NGINX as the web server on this Droplet, I’d recommend following the guides below, written by @jellingwood.

Once you’ve set up a working NGINX web server on your first Droplet, head back to the DigitalOcean Control Panel and:

  • Click on Droplets, and then click on your newly created Droplet.
  • Click on Snapshot
  • Directly under “Take Snapshot” there’s a message that allows you to quickly power off the Droplet (a requirement to take a full snapshot).
  • Once the Droplet is shut down, click the button labeled “Take Snapshot”. This may take 2-3 minutes or more depending on the size of the Droplet you just created.

We will now use this snapshot to create the second Web Server Droplet. To create our second Web Server Droplet:

  • Click on the green button labeled “Create Droplet” in the top right hand of the webpage.
  • Under “Choose an image”, click on the “Snapshots” tab.
  • Click on the box with the snapshot you just created and select your configuration options as you normally would.

As soon as the Droplet is created, you’ll have two Droplets with identical configurations, resulting in two web servers. You’ll then need to set up proper DNS entries for the two Droplets by creating an A record for your domain that points to the Floating IP you assigned to the first Droplet (not the Droplet IPs – I’ll tell you why below). This will allow you to access your Droplets by hostname as well as IP – and ideally you want to configure the Load Balancers to use hostnames, not IPs.


To create your Load Balancers, you’ll simply follow the guide below, written by @etel, as well as the same steps listed above that allowed you to create two mirrored Web Server Droplets.

For the purpose of utility, I’d recommend using ip_hash from the guide above. So when you see a block that looks like the following (the hostnames here are placeholders, not real values):

upstream backend {
  ip_hash;
  server web1.example.com;
  server web2.example.com;
  server web3.example.com down;
}

… that’s the one you want to use. Simply swap out the first placeholder with the hostname of your first Droplet and the second with the hostname of your second Droplet. You can delete the third server line as you won’t need it.

Once you’ve completed the guide, your Domain Name should be configured to point to the Floating IP assigned to the first Load Balancer as the LB will now serve as the entry point instead of the Web Server Droplet. Once the DNS has resolved, you should be able to access your domain and the Load Balancer will pick up the request and direct it to the first available web server listed in the upstream backend block.
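For reference, here’s a minimal sketch of what the Load Balancer’s configuration could look like once the upstream block is in place. The domain and hostnames below are placeholders, not your actual values:

```nginx
# Placeholder upstream; use the hostnames of your own Web Server Droplets.
upstream backend {
    ip_hash;
    server web1.example.com;
    server web2.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Forward requests to the first available backend,
        # preserving the original Host header and client IP.
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```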

Now, the reason we’re using Floating IPs: things happen, and that sometimes results in downtime. With Floating IPs, if one of your Load Balancers goes down, you can redirect the traffic behind the Floating IP to the second one from the Networking page. Ideally, you’d use a programming language of your choice and DigitalOcean’s API to automate this, though for testing purposes, changing it by hand will give you a feel for what happens and a more realistic example of how you’d go about setting up a production deployment.

In reality, using DigitalOcean’s API, you could automate this entire process with a bash script, or with PHP, NodeJS, Python, etc.
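As a rough sketch of what that automation could look like in Python, here’s a helper that builds the request for DigitalOcean’s v2 Floating IP actions endpoint. The Floating IP, Droplet ID, and token variable below are made up for illustration – swap in your own:

```python
import json

API_BASE = "https://api.digitalocean.com/v2"

def reassign_floating_ip(floating_ip, droplet_id):
    """Build the API request that points a Floating IP at another Droplet."""
    url = f"{API_BASE}/floating_ips/{floating_ip}/actions"
    payload = json.dumps({"type": "assign", "droplet_id": droplet_id})
    return url, payload

# Hypothetical failover: move the Floating IP to the standby Load Balancer.
url, body = reassign_floating_ip("203.0.113.10", 12345)

# Send it with your API token, e.g.:
#   curl -X POST "$url" -d "$body" \
#        -H "Content-Type: application/json" \
#        -H "Authorization: Bearer $DO_TOKEN"
```

You’d typically trigger this from a monitoring check that notices the primary Load Balancer is unreachable.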


If you have any questions, I’ll be more than happy to help!

by Etel Sverdlov
This article covers how to set up a simple load balancer on a DigitalOcean droplet with nginx. The tutorial covers setting up a round robin load balancer that can then direct site visitors to one of a set of IPs.

It’s definitely possible to set up something like that. Nginx would play a different role on each of the servers it’s set up on. On the load balancer, it would distribute the traffic, but on your individual servers it would act as the web server itself.

Based on your question, it sounds like you have two servers: one with the load balancer plus nginx serving content, and then a separate server with nginx serving content. It’s possible to set it up this way, but it might make more sense to put the load balancer on a separate server altogether.

I would recommend taking a look at this tutorial to see how you can set up your infrastructure – you can imagine nginx in place of HAProxy where it’s discussed in the piece:

At the same time, this tutorial on nginx load balancing can walk you through how to set it up:

I want to look into this more for you, but wanted to share some tutorials to help you get started. I’ll add more as I find them.

by Mitchell Anicas
When deciding which server architecture to use for your environment, there are many factors to consider, such as performance, scalability, availability, reliability, cost, and ease of management. Here is a list of commonly used server setups, with a short description of each, including pros and cons. Keep in mind that all of the concepts covered here can be used in various combinations with one another, and there is no single, correct configuration.

Thanks, I appreciate it.

Thanks @jtittle! ATM I have 3 machines running Debian 8.4 with Apache2 + MariaDB 10.1 + PHP 7. I’ve been running servers on this kind of architecture for a few years. I switched one of them to Nginx a few weeks ago in order to test its speed, and I’m now studying load balancing in order to get the best environment possible out of my servers and configure them as clusters. Now I have a few doubts. If I configure a load balancer in front of two machines and I need to move the content of one of them to a third one, does the load balancer switch to the working server even if the obsolete one returns a 404? E.g. one domain is the running site; I move its DNS to the new machine while pointing the current DNS at the load balancer, and I also add the new machine as another backend on the load balancer.
Now, due to the DNS propagation lag, at some point one of the two will return a 404 or a default web page. Does the load balancer switch to the other server, or does it consider the 404 valid and return the 404?

Then, after switching to nginx, I’m evaluating some possible environments:

  • Stay with Apache2 and put an Nginx load balancer in front of it.
  • Move to Nginx and put a load balancer in front of it (Nginx or HAProxy?)

Which one would you prefer?

Then, I actually can’t understand the difference between Layer 1 and Layer 7 (redis, right?).

Thanks :)

  • @SGr33n

    When configuring NGINX to function as a load balancer, for example, the primary goal and the overall utility come from routing requests to the first available server. “Available” is defined as a server that has been deemed responsive at the configured hostname or IP address. In such a case, if you have two servers configured on the load balancer and the first one is live and capable of responding, even if that response is a 404, it’ll receive the request.

    A load balancer can be thought of as a piece of software that simply intercepts a request and ensures that it gets routed to a live server for processing. Even if the result is a 404, that’s still considered to be a “valid” response as the hostname is deemed up and accessible using all normal means of attempting such access (i.e. it’ll respond to a ping, trace, direct request, etc).
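    To illustrate, NGINX’s passive health checks only mark a backend as unavailable after connection-level failures (errors or timeouts), not after a 404 response. A sketch, with placeholder hostnames:

```nginx
upstream backend {
    # After 3 failed connections within 30 seconds, nginx marks the
    # server as unavailable for 30 seconds and tries the next one.
    # A 404 response is NOT counted as a failure by default.
    server web1.example.com max_fails=3 fail_timeout=30s;
    server web2.example.com max_fails=3 fail_timeout=30s;
}
```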

    You can use proxy_intercept_errors and set it to on (by default it’s off), which will allow the Load Balancer to handle the error instead of the target web server. The FOSS version of NGINX (i.e. the freely available version), however, does not provide active health checks that target specific error codes, so you’d ideally implement some sort of monitoring that will alert you in such a scenario.
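    As a sketch of how that might look on the Load Balancer (the error page path here is just illustrative, and the upstream group is assumed to be defined elsewhere):

```nginx
server {
    listen 80;

    # Serve the load balancer's own error page instead of relaying
    # the backend's error body to the client.
    proxy_intercept_errors on;
    error_page 404 /404.html;

    location / {
        proxy_pass http://backend;
    }

    location = /404.html {
        root /usr/share/nginx/html;
        internal;
    }
}
```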


    When it comes to DNS, ideally you want to plan ahead and stick with the hostname that you choose from the start, if for no other reason than to reduce reconfiguration as, unless you create a script to ensure that configuration is replicated across your Load Balancers, you’re going to have to manually edit each of them every time you change the hostname.

    You can create numerous A records for your domain, each of which can point to the same IP, so there’s no issue there. From a configuration standpoint, though, it’s best to think and plan ahead by choosing hostnames that you intend to maintain. As a general rule of thumb, it’s best to use location-specific hostnames. For example, if Droplet A is the Load Balancer and it’s located in the NY3 datacenter, then a hostname that combines the datacenter, a role tag, and an index allows for easy identification: it tells you what datacenter the Droplet is in, what it is (lb = load balancer), and that it’s the first one.

    Apache & NGINX

    As for Apache vs NGINX, or NGINX in front of Apache: in my honest opinion, NGINX is more than capable of standing on its own, and that’s proven. It doesn’t need Apache, and honestly, the more complexity you introduce, the more complexity you’re going to have to deal with. Unless you’re stuck with Apache for one reason or another, drop it and make the switch to NGINX, which provides both roles in a single package, as well as caching.

    You’d still benefit from Redis, of course, and there’s even a module that allows integration with Redis (this requires compiling NGINX from source rather than installing a package from the standard repositories).

    OSI Models

    Layer 1 would be the Physical layer – think electrical and mechanical: wiring and cables, cards and connections, etc. Layer 7 is the Application layer, which is where HTTP, e-mail, FTP and various other application-level protocols and services fall into place.

    When it comes to load balancing, the majority of solutions deal with Layer 4 and Layer 7, with Layer 4 being the Transport layer. That being said, although it’s referred to as Layer 7, it uses Layers 5, 6 and 7 (the Session, Presentation, and Application layers), while “Layer 4” uses Layers 3 and 4 (the Network and Transport layers).

    Ultimately, it’s broad terminology and a rather complicated description method of what goes on when a request is received, processed and then responded to – as well as what happens between each stage (starting with bits and bytes all the way up to the web page you’re viewing, the images displayed, and the overall rendering of data).

Really clear, thank you so much. So, just another question: I’ve already taken a look at the Redis vs Memcached benchmarks. Is it good practice to install Redis on the same server where nginx is running? Then, if I run PHP as php-fpm, does FastCGI caching use FPM? Should I also enable OPcache and APCu, or do they conflict?