Autoscaling solutions for Digital Ocean - are there existing solutions?

Posted April 15, 2016 11.3k views
ScalingLoad BalancingHigh Availability

Hello. Our company is a Digital Ocean customer, and we use it exclusively for all our projects. We love it =). I personally love DO, and I was the one who initiated moving our infrastructure to Digital Ocean.

My question is: does anyone know of such projects (open source or not)?
I think it is possible, since Digital Ocean has an awesome API that allows you to create/remove droplets, use snapshots, etc.

I was not able to find any (well, one small project on GitHub, actually), and I want to implement it myself if no such software exists. It seems pretty interesting.

4 answers

You can simply use the DO API.
We have some example configs here:

The idea is to check the load on the master droplet and start a new droplet if the load is too high.
Configure it, rsync the files, and inject its IP into the load balancer.
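A minimal sketch of that idea in Python, using only the standard library and the DigitalOcean v2 `POST /v2/droplets` endpoint. The load threshold, region, size, and image slugs here are made-up placeholders, and the rsync/load-balancer steps are left as comments:

```python
import json
import os
import urllib.request

API = "https://api.digitalocean.com/v2"
TOKEN = os.environ.get("DO_TOKEN", "")   # personal access token
LOAD_THRESHOLD = 4.0                     # hypothetical 1-minute load cutoff

def should_scale_up(load_1min, threshold=LOAD_THRESHOLD):
    """Decide whether the master droplet is overloaded."""
    return load_1min > threshold

def create_worker(name):
    """Create a new droplet via POST /v2/droplets; returns its ID."""
    body = json.dumps({
        "name": name,
        "region": "nyc3",             # placeholder region
        "size": "512mb",              # placeholder size slug
        "image": "ubuntu-16-04-x64",  # or a snapshot ID of a pre-configured box
    }).encode()
    req = urllib.request.Request(
        f"{API}/droplets",
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["droplet"]["id"]

if __name__ == "__main__":
    if should_scale_up(os.getloadavg()[0]):
        droplet_id = create_worker("worker-auto-1")
        # next steps: poll until the droplet is active, rsync files to it,
        # then inject the new IP into the load balancer config
```

In practice you would run this from cron or a small daemon, and add the matching scale-down path (delete the droplet and remove its IP from the balancer when load stays low).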

This is a very interesting question, and one I've spent considerable time on for a lot of my own projects as well. It really comes down to deciding which languages and tools you want to invest in, or which you are already familiar with.

There are a couple of components to any software solution that can do this: the ability to get feedback when your system is under- or over-loaded, and the logic/code for deciding when to build out or break down. The build-out and break-down part benefits from automation tools, and the feedback part can leverage automation tools or monitoring solutions.

Check out the links to articles for saltstack, chef, puppet, ansible, vagrant and monitoring or search for any other tools you may already be acquainted with as they may already provide modules or extensions that can help get this type of solution off the ground.

Many of these already support gathering information from the systems they manage, which can cover the feedback side of auto-scaling, or you can use any monitoring software that works for you and interface it with your automation tools. Some already have modules for integrating with our API, which makes things even easier.

The key thing to note here is that it will take quite a bit of an investment to work up configurations, find what works with the feedback for deciding which/when services get broken down or built up, and interface all the components in a reliable way. I typically leverage existing tools for this, but if you start something small that gets the job done, you could build it up to something beautiful and powerful.

  • Thank you, Javier. I’ll read as much as I can.

    For now, I'm just searching for a solution. I'm also trying to learn best practices from AWS.
    To build a prototype I've come up with:
    zabbix - for server monitoring (I'm not a devops ninja, and Zabbix seems simple and powerful). It looks like it can monitor all the resources I need: CPU, RAM, disk, IO.

    ansible - for provisioning and changing the load balancer configuration. But using pre-configured snapshots seems like a better idea (maybe a custom script? who knows).

    Some UI (Rails, Node, whatever) to set up autoscaling policies, e.g. free RAM below 10 (15/20)% for some time, or high CPU or IO usage.

    Did I forget something?

    The main challenges I see now:

    • To install the Zabbix agent I need access to every server. Maybe an Ansible playbook would solve that.
    • The amount of data from Zabbix may be huge, and it needs to be analysed fast.
    • Zabbix setup - I have near-zero knowledge, so I need to read some docs.
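The policy idea above ("free RAM below a threshold for some time") could be sketched as a small evaluation class that only fires after several consecutive breaches, which avoids scaling on a single noisy sample. The thresholds and window size here are made-up defaults, not recommendations:

```python
from collections import deque

class Policy:
    """Fire when a metric stays past its threshold for `window`
    consecutive samples (made-up defaults, for illustration)."""

    def __init__(self, threshold, window=3, below=True):
        self.threshold = threshold
        self.below = below  # True: breach when value < threshold
        self.samples = deque(maxlen=window)

    def feed(self, value):
        """Record one sample; return True if the policy fires."""
        breached = value < self.threshold if self.below else value > self.threshold
        self.samples.append(breached)
        return len(self.samples) == self.samples.maxlen and all(self.samples)

# e.g. scale up if free RAM stays under 15% for 3 consecutive checks
free_ram = Policy(threshold=15.0, window=3, below=True)
```

A UI would then just create, edit, and delete `Policy` objects, while a background loop feeds them metrics pulled from Zabbix (or any other monitoring source).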

Take a look at the Apache Brooklyn project. It supports many providers and can actually create/destroy VM instances based on many parameters (cluster mean load average, load balancer request rate, etc.).

We (ActOnCloud) are working on a solution for Digital Ocean; ActOnCloud for DO will be available in August. Please read: and let me know if you have feedback for us.