Can a load balancer be set up to direct traffic to one server unless a failure is detected?

Posted April 6, 2017 7.1k views
UbuntuLoad BalancingHigh Availability

The application I’m working with does a lot of file caching, so a traditional load balancing setup could be a bit tricky. Can a load balancer be set up to direct all traffic to one server and, if that server goes down, redirect traffic to a second server that is replicated from the first using rsync? I am currently achieving this with the DO API and a third server that checks availability.


1 answer


Using the API with a Floating IP would be the best method, as load balancing isn’t really ideal if you simply want to direct all traffic to a single server until it goes down. You’d at least keep the same public IP (if the Droplets are in the same datacenter), but that’s just a suggestion (unless that’s what you’re already doing).
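As a rough illustration of that approach, here’s a minimal sketch of a health-check/failover loop that reassigns a Floating IP through the DigitalOcean API. The token, IP address, URLs, and droplet IDs are placeholders, and real deployments would want retries and alerting; treat this as a starting point, not a finished tool.

```python
# Hedged sketch: reassign a DigitalOcean Floating IP to a standby
# droplet when the primary fails a health check. All IDs, IPs, and
# the token below are placeholders for illustration.
import json
import urllib.request

DO_TOKEN = "your_api_token"              # placeholder
FLOATING_IP = "203.0.113.10"             # placeholder
PRIMARY_ID, STANDBY_ID = 111111, 222222  # placeholder droplet IDs

def is_healthy(url, timeout=5):
    """Return True if the primary answers with an HTTP status < 500."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except Exception:
        return False

def failover_target(primary_ok):
    """Decide which droplet should hold the Floating IP."""
    return PRIMARY_ID if primary_ok else STANDBY_ID

def assign_floating_ip(droplet_id):
    """POST an 'assign' action to the Floating IP actions endpoint."""
    req = urllib.request.Request(
        f"https://api.digitalocean.com/v2/floating_ips/{FLOATING_IP}/actions",
        data=json.dumps({"type": "assign", "droplet_id": droplet_id}).encode(),
        headers={"Authorization": f"Bearer {DO_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    target = failover_target(is_healthy("http://203.0.113.11/"))
    if target != PRIMARY_ID:
        assign_floating_ip(target)
```

You’d run something like this on a cron interval (or as a small daemon) from the third “watcher” server you mentioned, so the check itself doesn’t depend on either app server.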

The options available for using NGINX as a load balancer are Round Robin, IP Hash, and Weighted. Round Robin distributes requests across the servers in the order listed, skipping any server that is down; IP Hash attempts to send a user back to the same server they first hit; and Weighted distributes requests according to the weights you assign to each server – but it still distributes the requests.
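That said, NGINX’s `upstream` block also supports a `backup` parameter, which comes close to the active/passive behaviour you described: all traffic goes to the primary, and the backup is only used when the primary is unavailable. A minimal sketch (the IPs are placeholders):

```nginx
upstream app_backend {
    server 10.0.0.1;         # primary: receives all traffic while it's up
    server 10.0.0.2 backup;  # only used when the primary is unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```

Note that failover here relies on NGINX’s passive health checks (failed connections/timeouts), so a brief error window is possible before traffic shifts.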

That said, depending on the type of caching, if distributing requests is an issue, I’d look into using a cache that doesn’t depend on a single server – if possible. Something like Redis or Memcached. That way you’re not limited to caching on one app server: you could deploy one or more dedicated caching servers, with both app servers reading from and writing to the same cache.
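To make that concrete, here’s a small sketch of the cache-aside pattern both app servers would use against a shared cache. `FakeCache` is an in-memory stand-in with the same `get`/`set` shape as a redis-py client, so the example is self-contained; the key names and loader are purely illustrative.

```python
# Hedged sketch of cache-aside against a shared cache: whichever app
# server fills the cache first, the other sees the cached value too.

class FakeCache:
    """In-memory stand-in for a redis.Redis client (get/set subset)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value

def cached_fetch(cache, key, loader):
    """Return the cached value for key, filling the cache on a miss."""
    value = cache.get(key)
    if value is None:
        value = loader(key)     # e.g. a database query or file render
        cache.set(key, value)   # now visible to every app server
    return value

shared = FakeCache()  # with Redis: redis.Redis(host="cache.internal")
loads = []
def loader(key):
    loads.append(key)           # track how often we hit the backend
    return f"rendered:{key}"

first = cached_fetch(shared, "page:/home", loader)   # miss -> loads
second = cached_fetch(shared, "page:/home", loader)  # hit -> cached
```

The second fetch never touches the backend, which is the point: the cache, not any one app server’s filesystem, holds the hot data.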


LB => Server01 | Server02 => Redis/Memcached

Ideally, I’d shoot for Redis if possible, though you can tinker with both and see which you like best and which best suits the needs of your application. It does add another server, or multiple servers, to the mix, but as you scale, that’s the direction you’ll be heading.

  • Thank you very much for the thoughtful and detailed response. I’m thinking the current setup will work fine, and if I need more performance, quite a few gains can be made by breaking the database out onto a separate server, which is easily supported.