Question

Nginx TCP load balancer performs worse than single web server

Posted December 5, 2021
Tags: Nginx, Load Balancing, Initial Server Setup, DigitalOcean Droplets

I have a simple nginx TCP load balancer that passes connections straight through to the web servers. The requests per second are lower going through the load balancer with 2 servers than hitting one of those servers directly, and the load balancer's requests per second don't increase when I add additional web servers… This doesn't seem right. Any ideas what could be causing this?

# loads stream module
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

# sets user to default user for web server
user www-data;

# spawns one worker process per cpu core
worker_processes auto;

# customizes how to handle connections
events {

    # sets maximum number of simultaneous connections per worker process
    worker_connections 1024;

    # uses efficient connection processing method
    use epoll;

    # lets each worker accept all new connections at once
    multi_accept on;

}

# customizes how to handle incoming tcp connections
stream {

    # distributes insecure connections between specified web servers
    upstream web_servers_insecure {

        # sets ip address and port of web servers
        server 147.182.205.7:80;
        #server 128.199.3.255:80;

    }

    # customizes how to handle insecure connections
    server {

        # sets port
        listen 80;

        # sends tcp connections to upstream directive
        proxy_pass web_servers_insecure;

    }

    # distributes secure connections between specified web servers
    upstream web_servers_secure {

        # sets ip address and port of web servers
        server 147.182.205.7:443;
        #server 128.199.3.255:443;

    }

    # customizes how to handle secure connections
    server {

        # sets port
        listen 443;

        # sends connections to upstream directive
        proxy_pass web_servers_secure;

    }

}

1 answer

Hi @davidlittlefieldCoral,

I might be wrong, but I believe this is to be expected. When a request reaches your load balancer Droplet, it has to accept the connection and then forward it to the appropriate backend Droplet. That extra hop adds some latency, even if it's only a millisecond or so. Again, I might be wrong, but that's my take on it.

I’m interested to see other answers if there is a real solution to it.
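
One thing worth double-checking: in the config posted above, the second server line in each upstream block is commented out, so as written the balancer only ever forwards to a single backend. A minimal sketch of the insecure upstream with both backends active (IP addresses taken from the question; least_conn is just one possible balancing method, not something in the original config) would look like this:

    # distributes insecure connections between both web servers
    upstream web_servers_insecure {

        # picks the backend with the fewest active connections
        least_conn;

        # both backends enabled
        server 147.182.205.7:80;
        server 128.199.3.255:80;

    }

With only one backend active, the extra proxy hop can only add overhead, which would be consistent with the numbers you're seeing.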

  • Hi @KFSys,

    Sure, but shouldn't the requests per second increase with each additional web server behind the load balancer?

    • Hi @davidlittlefieldCoral,

      Yes, once there are a lot of requests you will feel the difference between a load balancer and a single Droplet.

      In this case, as I see it, you don't have a lot of connections at the moment, correct?

      • Thanks for following up, KFSys.

        No, I was working with a template site for testing and using wrk to measure how many concurrent requests could be handled (a typical invocation is sketched below).

        I've also noticed that when I set a rate limit, the requests per second improve for an individual web server, but not for the group of web servers as a whole.
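
        For reference, a typical wrk invocation for this kind of test looks something like the following; the thread count, connection count, duration, and target address here are placeholders rather than the exact values used:

            wrk -t4 -c100 -d30s http://<load-balancer-ip>/

        Pointing the same command at a single web server's IP and then at the load balancer's IP gives a direct comparison between the two setups.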