How Do You Manage Scaling Your Backend Servers Up and Down?

I have an application with 2 backend servers (512 MB droplets) that are normally idle except from 4 AM to 4 PM UTC. The load is getting higher each day as users continue to increase.

I am thinking of automatically booting additional droplets from an image when a certain threshold is met, then killing those droplets (leaving X droplets by default) once traffic returns to normal, to save $$.

I’m using nginx as a load balancer. As in the example below, the upstream backend IPs are hardcoded in my load balancer config. I am planning to update this config after each new droplet is booted and ready, then restart nginx.

upstream backend {
  server web-sms01:80;
  server web-sms02:80;
}

Do you guys have a better way of doing this? My method is a bit funny, I think. :)

  • Resizing a droplet is not an option because 1 minute of downtime is too long for my users.
  • AWS ELB is not an option for now… :)

Unless you want to start bringing in a larger service orchestration framework, that sounds like a fine way to do it. If you’re spinning up the new droplet using the API you can get the IP address and update the load balancer configuration automatically using a script.
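That script can be quite small. Below is a minimal sketch: it regenerates the upstream block from a list of backend addresses and shows where the DigitalOcean API call and nginx reload would fit. The API endpoint shown in the comments and the `web-sms` names are assumptions based on the question; adapt them to your setup.

```shell
#!/bin/sh
# Sketch: rebuild the nginx upstream block from a list of backend
# addresses, then (in production) validate and reload nginx.
#
# The backend IPs would normally come from the DigitalOcean API, e.g.:
#   curl -s -H "Authorization: Bearer $DO_TOKEN" \
#     "https://api.digitalocean.com/v2/droplets" | ...
# (token handling and response parsing are left out here)

render_upstream() {
    # $@ = backend addresses, e.g. web-sms01:80 web-sms02:80
    echo "upstream backend {"
    for s in "$@"; do
        echo "  server $s;"
    done
    echo "}"
}

# Write the generated block to a file that nginx includes:
render_upstream web-sms01:80 web-sms02:80 > upstream.conf
cat upstream.conf

# After updating the config you would validate and reload without
# dropping connections:
#   nginx -t && nginx -s reload
```

Using `nginx -s reload` instead of a full restart lets existing connections finish while new workers pick up the updated upstream list.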

If you want to dive into a more robust alternative, one solution might be Serf. From our tutorial:

Serf is a decentralized service orchestration and service discovery tool. It is extremely fault tolerant and decentralized, with no single point of failure like other similar tools. Serf can be used to trigger any event across a cluster of systems as well as perform monitoring duties. It’s built on top of the Gossip protocol, which is designed for decentralized communication. In order for a node to join a Serf cluster, the node only needs to initially know the address of one other node in the cluster. Once the node joins, all membership information is propagated throughout the cluster. The Gossip protocol makes Serf extremely easy to set up and configure.
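To make the join process concrete, here is a hypothetical command sketch (it assumes the `serf` binary is installed on each droplet; node names and the address are placeholders, not from the question):

```shell
# On the newly booted droplet, start an agent:
serf agent -node=web-sms03 &

# Join the cluster via the address of any existing member:
serf join 10.0.0.1

# Membership is then gossiped to every node:
serf members
```

A membership-change event handler on the load balancer node could then regenerate the nginx upstream list automatically whenever a droplet joins or leaves.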