Question

Can a load balancer be set up to direct traffic to one server unless a failure is detected?

The application I’m working with does a lot of file caching, so a traditional load-balancing setup could be tricky. Can a load balancer be set up to direct all traffic to one server, and only fail over to a second server (replicated from server 1 using rsync) if the first goes down? I am currently achieving this with the DigitalOcean API and a third server that checks availability.
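The replication is a periodic rsync job along these lines (the hostname and paths are placeholders):

```sh
# Mirror the primary’s web root (including file caches) to the standby.
# --delete keeps the standby an exact copy; run from cron every few minutes.
rsync -az --delete /var/www/ deploy@standby.example.com:/var/www/
```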



Accepted Answer

@shad

Using the API with a Floating IP would be the best method, as load balancing isn’t really ideal if you simply want to direct all traffic to a single server until it’s down. With a Floating IP you also keep the same public IP (provided the Droplets are in the same datacenter), but that’s just a suggestion (unless that’s what you’re already doing).
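For what it’s worth, the failover itself is a single API call: assigning the Floating IP to the standby Droplet. Here’s a minimal sketch of a check-and-reassign script for the third Droplet, assuming Python with `requests`; the IPs, Droplet ID, and health-check URL are placeholders, and it’s worth confirming the endpoint against the current API docs:

```python
import os
import requests

# Placeholder values; substitute your own IPs and Droplet ID.
DO_TOKEN = os.environ["DO_TOKEN"]          # API token, kept out of the source
FLOATING_IP = "203.0.113.10"               # the Floating IP to move on failure
PRIMARY_URL = "http://198.51.100.5/"       # the primary Droplet's direct IP
STANDBY_DROPLET_ID = 123456                # the standby Droplet's ID

def primary_is_up() -> bool:
    """Return True if the primary answers HTTP within a short timeout."""
    try:
        return requests.get(PRIMARY_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def fail_over_to_standby() -> None:
    """Point the Floating IP at the standby Droplet via the DO API."""
    resp = requests.post(
        f"https://api.digitalocean.com/v2/floating_ips/{FLOATING_IP}/actions",
        headers={"Authorization": f"Bearer {DO_TOKEN}"},
        json={"type": "assign", "droplet_id": STANDBY_DROPLET_ID},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Run this from cron (or a loop) on the monitoring Droplet.
    if not primary_is_up():
        fail_over_to_standby()
```

Run every minute or so, this is essentially the availability check you described, with the Floating IP reassignment taking the place of a DNS or application config change.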

The load-balancing methods NGINX offers are Round Robin, IP Hash, and Weighted. Round Robin simply connects to the servers in the order listed, skipping any that are down; IP Hash attempts to send a user back to the same server they first hit; and Weighted distributes requests according to the weights you assign each server. In every case, though, requests still get distributed across servers.
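As a rough sketch, with placeholder hostnames, the three methods look like this in NGINX `upstream` blocks:

```nginx
# Round Robin (the default): requests rotate through the listed servers.
upstream backend_rr {
    server server01.example.com;
    server server02.example.com;
}

# IP Hash: a given client is consistently routed to the same server.
upstream backend_iphash {
    ip_hash;
    server server01.example.com;
    server server02.example.com;
}

# Weighted: server01 receives roughly 3 of every 4 requests.
upstream backend_weighted {
    server server01.example.com weight=3;
    server server02.example.com weight=1;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_rr;
    }
}
```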

That said, depending on the type of caching, if distributing requests is an issue I’d look into a cache that isn’t tied to a single server, if possible, such as Redis or Memcached. That way you’re not limited to caching on one machine: you could deploy one or more dedicated caching servers, with both web servers reading from and writing to them.

i.e.

LB => Server01 | Server02 => Redis/Memcached

Ideally, I’d shoot for Redis if possible, though you can tinker with both and see which you like best and which best suits the needs of your application. It does add another server (or several) to the mix, but as you scale, that’s the direction you’ll be heading. A sketch of the Redis side follows.
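As a minimal sketch of the shared-cache idea, assuming the `redis-py` client and a placeholder Redis hostname, both web servers would read and write the same Redis instance instead of caching files locally:

```python
import json
import redis

# Both web servers point at the same Redis host, so cached entries
# are visible no matter which server the load balancer picks.
cache = redis.Redis(host="redis.internal.example.com", port=6379)

def get_page(page_id: str) -> dict:
    """Fetch a page from the shared cache, recomputing on a miss."""
    key = f"page:{page_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    page = render_page(page_id)               # hypothetical expensive render
    cache.set(key, json.dumps(page), ex=300)  # cache for 5 minutes
    return page

def render_page(page_id: str) -> dict:
    # Stand-in for whatever the application currently writes to disk.
    return {"id": page_id, "body": f"Rendered content for {page_id}"}
```

The same pattern works with Memcached through a client such as `pymemcache`; Redis mainly adds persistence and richer data types.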
