Question

Does DO have plans to implement auto-scaling like AWS has?

Posted March 18, 2019 · 12.3k views
Scaling · Ubuntu 18.04

I’m an AWS user and I find their auto-scaling of instances very easy to use. I also have applications here on DO. Does DO plan to offer this capability as well, even at a price? It would be very beneficial if it were implemented with ease of use in mind, so that users don’t need to do their own scripting.


2 answers

Greetings!

Thank you for taking the time to ask this question here. This is not currently on our roadmap, but that is not a guarantee that it never will be. That said, we have a somewhat different vision for achieving this on our platform, which I will explain.

Vertical scaling can be problematic, and it tends to have a lower ceiling than horizontal scaling; under certain circumstances, two $5 servers can outperform one $10 server. Rather than raising the ceiling every time you brush against it, we recommend implementing horizontal scaling. It causes the least interruption to your application and helps you save on cost when you don’t need that extra power. We put together a tutorial for this a while back that I think you might find interesting:

https://www.digitalocean.com/community/tutorials/how-to-automate-the-scaling-of-your-web-application-on-digitalocean-1604

Jarland

by Sebastian Canevari
In this tutorial, we will demonstrate how to use the DigitalOcean API to horizontally scale your server setup. To do this, we will use **DOProxy**, a relatively simple Ruby script that, once configured, provides a command-line interface for scaling your HTTP application server tier up or down. The primary purpose of this tutorial is to teach the minimal concepts necessary to programmatically scale your DigitalOcean server architecture through the API.
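
The tutorial’s DOProxy is a Ruby script, but the underlying API call is small enough to sketch here. The following is a minimal, illustrative Python example (not DOProxy itself) of creating one more application droplet through the DigitalOcean API v2; the token, name, region, size, and image slugs are placeholders to swap for your own values.

```python
# Minimal sketch: add one more application droplet through the DigitalOcean API v2.
# Assumes a DIGITALOCEAN_TOKEN environment variable; the name, region, size, and
# image slugs below are illustrative placeholders -- substitute your own.
import os
import requests

API_BASE = "https://api.digitalocean.com/v2"
HEADERS = {
    "Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}",
    "Content-Type": "application/json",
}

def scale_out(name: str) -> dict:
    """Create one additional droplet and return the API's droplet object."""
    payload = {
        "name": name,
        "region": "nyc3",             # placeholder region slug
        "size": "s-1vcpu-2gb",        # placeholder size slug
        "image": "ubuntu-18-04-x64",  # placeholder image slug
        "tags": ["web"],              # tag it so the rest of the tier can find it
    }
    resp = requests.post(f"{API_BASE}/droplets", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["droplet"]

if __name__ == "__main__":
    droplet = scale_out("web-04")
    print(f"Created droplet {droplet['id']} ({droplet['name']})")
```

A droplet created this way still has to be bootstrapped and registered with whatever distributes traffic across the tier; automating that part is what the tutorial above walks through.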
  • Yes, I’m referring to horizontal scaling.

    For example, I have three 2GB (1 CPU) droplets. When they reach a load threshold, I want an additional droplet to be created automatically so the load is spread evenly across four droplets. How can I achieve this setup? (One rough way to approach it is sketched after these comments.)

  • What’s the latest on this? I would think that with the fairly recent DO metrics implementation, you could mimic a poor man’s CloudWatch metric alarm that triggers droplet provisioning. Add the concept of an auto-scaling group launch configuration (user data, image, etc.) and related metric alarms that let you bootstrap new droplets, and we’re in business. A rough sketch of that idea follows below.
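
There is no built-in alarm-to-provisioning hook on DO today, but as a hedged sketch of the ideas in the two comments above (grow a three-droplet tier to four when load crosses a threshold, driven by a poor man’s metric alarm), a loop like the one below can watch a CPU metric and create one more tagged droplet through the API. The metric source, threshold, interval, cap, and slugs are all assumptions, and a new droplet still needs user data or a snapshot image plus load balancer registration before it can take traffic.

```python
# Rough "poor man's metric alarm": watch a CPU metric and add a tagged droplet
# when load stays hot. The metric source (local load average), threshold,
# interval, cap, and slugs are assumptions; DigitalOcean Monitoring, Prometheus,
# or another agent could feed the same loop.
import os
import time
import requests

API_BASE = "https://api.digitalocean.com/v2"
HEADERS = {
    "Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}",
    "Content-Type": "application/json",
}

CPU_THRESHOLD = 0.80      # fraction of total CPU considered "hot" (assumption)
CONSECUTIVE_BREACHES = 3  # require three hot checks in a row before scaling
CHECK_INTERVAL = 60       # seconds between checks
MAX_DROPLETS = 4          # cap from the example above: three droplets plus one
TAG = "web"               # tag shared by the application tier (assumption)

def cpu_pressure() -> float:
    """1-minute load average divided by core count -- a crude local stand-in
    for a real per-droplet metric."""
    return os.getloadavg()[0] / os.cpu_count()

def droplet_count() -> int:
    """Count droplets carrying the application tier's tag."""
    resp = requests.get(f"{API_BASE}/droplets",
                        params={"tag_name": TAG}, headers=HEADERS)
    resp.raise_for_status()
    return len(resp.json()["droplets"])

def add_droplet(name: str) -> None:
    """Create one more droplet in the tier (slugs are placeholders)."""
    payload = {
        "name": name,
        "region": "nyc3",
        "size": "s-1vcpu-2gb",
        "image": "ubuntu-18-04-x64",
        "tags": [TAG],
    }
    requests.post(f"{API_BASE}/droplets",
                  json=payload, headers=HEADERS).raise_for_status()

def alarm_loop() -> None:
    breaches = 0
    while True:
        breaches = breaches + 1 if cpu_pressure() > CPU_THRESHOLD else 0
        if breaches >= CONSECUTIVE_BREACHES and droplet_count() < MAX_DROPLETS:
            add_droplet(f"web-{int(time.time())}")
            breaches = 0
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    alarm_loop()
```

Scaling back down when load falls would mean deregistering and then deleting a droplet, which the same API supports; the hard part is the bookkeeping, which is exactly what managed auto-scaling groups hide from you.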

I believe you can pull this off with Terraform using lifecycle rules for your droplets. See https://www.terraform.io/docs/configuration/resources.html#lifecycle-lifecycle-customizations, specifically create_before_destroy. It may require a bit of work on your end but it’s doable.

edited by MattIPv4
  • This is specific to Terraform lifecycle events and how those resources are managed during an execution plan. Unfortunately, it is not a provider-specific implementation for Droplets.
