How Do You Manage Scaling Your Backend Servers Up and Down?

September 14, 2014 2k views

I have an application with 2 backend servers (512 MB droplets) which are normally idle except from 4 AM to 4 PM UTC. The load is getting higher each day as the number of users continues to increase.

I am thinking of automatically booting additional droplets from an image when a certain threshold is met, then destroying them and leaving X droplets by default once traffic is back to normal, to save $$.

I'm using nginx as the load balancer. As in the example below, the upstream backend addresses are hardcoded in my load balancer config. I am planning to update this config after each droplet is booted up and ready, then restart nginx (there's a rough sketch of that step right after the config).

upstream backend {
  server web-sms01:80;
  server web-sms02:80;
}
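
Roughly, I'm picturing a small script like the sketch below (the config path and hostnames are just placeholders) that rewrites the upstream block and then tells nginx to pick it up. Since `nginx -s reload` re-reads the configuration without dropping existing connections, a hard restart shouldn't even be needed:

# Rough sketch: regenerate the upstream block from a list of backend
# hosts and ask nginx to reload. Path and hostnames are placeholders.
import subprocess

UPSTREAM_CONF = "/etc/nginx/conf.d/backend-upstream.conf"  # hypothetical include file

def write_upstream(backends):
    # Render the upstream block for the current set of backend servers.
    lines = ["upstream backend {"]
    lines += ["  server {}:80;".format(b) for b in backends]
    lines.append("}")
    with open(UPSTREAM_CONF, "w") as f:
        f.write("\n".join(lines) + "\n")

def reload_nginx():
    # "nginx -s reload" re-reads the config without dropping
    # existing connections, so a full restart is not required.
    subprocess.check_call(["nginx", "-s", "reload"])

if __name__ == "__main__":
    write_upstream(["web-sms01", "web-sms02", "web-sms03"])
    reload_nginx()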

Do you guys have a better way of doing this? My method seems a bit funny, I think. :)

  • Resizing a droplet is not an option because even 1 minute of downtime is very long for my users.
  • AWS ELB is not an option for now. :)
1 Answer

Unless you want to start bringing in a larger service orchestration framework, that sounds like a fine way to do it. If you're spinning up the new droplet using the API you can get the IP address and update the load balancer configuration automatically using a script.
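
As a rough sketch of what that script could look like (this assumes the v2 API and a personal access token, and the region, size, and image values are placeholders), creating a droplet from a snapshot and waiting for its public IP might go something like:

# Sketch: create a droplet via the DigitalOcean API (v2 assumed),
# poll until it is active, then print its public IPv4 address.
import time
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

def create_droplet(name):
    body = {
        "name": name,
        "region": "nyc3",   # placeholder region
        "size": "512mb",    # placeholder size slug
        "image": 123456,    # placeholder snapshot/image ID
    }
    r = requests.post(API + "/droplets", json=body, headers=HEADERS)
    r.raise_for_status()
    return r.json()["droplet"]["id"]

def wait_for_ip(droplet_id):
    while True:
        r = requests.get("{}/droplets/{}".format(API, droplet_id), headers=HEADERS)
        r.raise_for_status()
        droplet = r.json()["droplet"]
        if droplet["status"] == "active":
            for net in droplet["networks"]["v4"]:
                if net["type"] == "public":
                    return net["ip_address"]
        time.sleep(10)

if __name__ == "__main__":
    new_id = create_droplet("web-sms03")
    print(wait_for_ip(new_id))

The address it prints could then be dropped into your upstream block and nginx reloaded, as in the sketch in your question.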

If you want to dive into a more robust alternative, one solution might be Serf. From our tutorial:

Serf is a decentralized service orchestration and service discovery tool. It is extremely fault tolerant and decentralized, with no single point of failure like other similar tools. Serf can be used to trigger any event across a cluster of systems as well as perform monitoring duties. It's built on top of the Gossip protocol which is designed for decentralized communication. In order for a node to join a Serf cluster, the node only needs to initially know the address of one other node in the cluster. Once the node joins, all membership information is propagated throughout the cluster. The Gossip protocol makes Serf extremely easy to setup and configure.
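
To give a feel for how that could tie into your nginx setup: a Serf event handler is just an executable that Serf invokes, with the event name in the SERF_EVENT environment variable and the affected members piped to stdin, one per line. A minimal, illustrative skeleton (the actual config regeneration is left as a comment) might look like:

# Illustrative skeleton of a Serf member event handler.
import os
import subprocess
import sys

event = os.environ.get("SERF_EVENT", "")

if event in ("member-join", "member-leave", "member-failed"):
    # Each stdin line describes one affected member; the first field is its name.
    members = [line.split()[0] for line in sys.stdin if line.strip()]
    # Here you would regenerate the nginx upstream block from the current
    # cluster membership (for example via "serf members") and then reload
    # nginx so traffic starts or stops flowing to the affected backends.
    print("{}: {}".format(event, ", ".join(members)))
    subprocess.call(["nginx", "-s", "reload"])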
