Question

Can a load balancer be set up to direct traffic to one server unless a failure is detected?

The application I’m working with does a lot of file caching, so a traditional load balancing setup could be a bit tricky. Can a load balancer be set up to direct all traffic to one server, and only if that server goes down, send traffic to a second server that is replicated from server 1 using rsync? I am currently achieving this with the DigitalOcean API and a third server that checks availability.



Accepted Answer

@shad

Using the API with a Floating IP would be the best method, as a Load Balancer isn’t really ideal if you simply want to direct all traffic to a single server until it goes down. You’d at least keep the same IP (as long as the Droplets are in the same DC), but that’s just a suggestion (unless that’s what you’re already doing).
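As a rough sketch of that approach (the health URL, Floating IP, Droplet ID, and token below are all placeholders you’d fill in, and the endpoint assumed here is the DigitalOcean v2 Floating IP actions API), the checker on the third server could look something like this:

```python
import json
import urllib.request

# Placeholders: substitute your own values.
HEALTH_URL = "http://203.0.113.10/health"   # primary server's health endpoint
FLOATING_IP = "203.0.113.100"               # Floating IP shared by both Droplets
PRIMARY_DROPLET_ID = 11111111
BACKUP_DROPLET_ID = 22222222
DO_TOKEN = "your-api-token"

def primary_is_up(url, timeout=3):
    """Return True if the primary answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def next_target(primary_up, primary_id, backup_id):
    """Pure decision step: stick with the primary while it is healthy."""
    return primary_id if primary_up else backup_id

def reassign_floating_ip(ip, droplet_id, token):
    """Point the Floating IP at the given Droplet via the DO v2 API."""
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/floating_ips/%s/actions" % ip,
        data=json.dumps({"type": "assign", "droplet_id": droplet_id}).encode(),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

In practice you’d run the check on a loop (cron or a daemon), require a few consecutive failures before failing over to avoid flapping, and only call `reassign_floating_ip` when the target actually changes.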

The options available for using NGINX as a Load Balancer are Round Robin, IP Hash, and Weighted. Round Robin simply connects to the servers in the order listed, skipping any that are down; IP Hash attempts to send a user back to the same server they first hit; and Weighted distributes requests according to the weights you assign to each server, but it still distributes them.
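For reference, a minimal `upstream` sketch of the weighted and IP Hash options (hostnames and weights are placeholders):

```nginx
# Weighted: server01 receives roughly twice the requests of server02.
upstream backend {
    server server01.example.com weight=2;
    server server02.example.com weight=1;
}

# Alternatively, ip_hash pins each client IP to the same upstream server:
# upstream backend {
#     ip_hash;
#     server server01.example.com;
#     server server02.example.com;
# }

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```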

That said, depending on the type of caching, if distributing requests is an issue I’d look into using a cache that isn’t tied to a single server, if possible. Something like Redis or Memcached. That way you’re not limited to caching on one server; you could deploy one or more dedicated caching servers, and both app servers would read from and write to the same cache.

i.e.

LB => Server01 | Server02 => Redis/Memcached
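To illustrate the idea, here’s a minimal cache-aside sketch in Python. The dict stands in for the shared cache; in a real setup you’d swap it for a client such as redis-py pointed at the cache server, so Server01 and Server02 see the same entries:

```python
# Stand-in for a shared Redis/Memcached instance that both
# app servers connect to (replace with a real client in production).
shared_cache = {}

def render_page(path):
    """Pretend this is the expensive work the file cache was avoiding."""
    return "rendered:" + path

def get_page(path):
    """Cache-aside: check the shared cache first; compute and store on a miss."""
    cached = shared_cache.get(path)
    if cached is not None:
        return cached
    result = render_page(path)
    shared_cache[path] = result
    return result
```

Because the cache lives outside the app servers, it no longer matters which server the load balancer picks: a page cached by one request is a cache hit for the next, regardless of which server handles it.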

Ideally, I’d shoot for Redis if possible, though you can tinker with both and see which you like best and which best suits the needs of your application. It does add another server (or multiple servers) to the mix, but as you scale, that’s the direction you’ll be heading.