Automatic Snapshots via Zapier vs. DigitalOcean Weekly Backups

Just a question about reliability, efficiency, etc…

So I’ve created a droplet that I’m planning to use for our customer websites. All of these websites are currently on a traditional VPS and will be migrated over in time. Both now and after the migration, each WordPress site will have its own backup plugin that sends a backup of its files and database to Amazon S3. My old host did not provide automated server-level backups, but with DigitalOcean I have the option of the weekly paid backups, which is fine.

I feel like I should have daily backups instead, so I’ve messed with Zapier to make that happen, and it works with their zap app. OK, great! Free daily snapshots. Of course, the downside is that the droplet is shut down while the snapshot completes… only a few minutes at 2am… not a big problem.

I suppose the questions in my mind are… Is the Zapier setup actually needed? Are the normal paid weekly backups via DigitalOcean sufficient, given that each WP site also has a backup plugin sending its files and databases off-site?

Any thoughts and input would be welcome. Thanks!



When it comes to backups, as a general rule of thumb, don’t keep all your eggs in one basket. That doesn’t mean you shouldn’t use DigitalOcean; it does mean, however, that you should spread your backups out across multiple providers where and when possible.

The biggest issue, for me, is shutting down a Droplet, or any active instance. If you’re running anything that uses a database (e.g. WordPress), simply shutting down the Droplet could result in data corruption (in cases where MySQL, MariaDB, or Percona is not shut down cleanly). When the Droplet comes back online, you may need to repair tables. That’s not a huge issue on small databases, but on larger ones (say, 100MB to 500MB in size), it can be a pain and can take a while (it’s also not something you want to do through a web interface such as phpMyAdmin – it should always be done from the command line).

For database-driven websites or applications, I’d recommend a CRON job that runs a bash script which performs the database backup: first shutting down the database service, copying the database files to a secure location on the Droplet, restarting the service, and then compressing the data into a file that is sent off-site.
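As a rough illustration, a minimal sketch of such a script might look like the following. All paths, the service name (`mysql` under systemd), and the off-site upload command are assumptions – adjust for your own setup:

```shell
#!/usr/bin/env bash
# Cold-backup sketch: stop the database, copy its data directory,
# restart it, then compress the copy for off-site transfer.
# Paths, service name, and upload target are all assumptions.
set -euo pipefail

backup_db() {
  local datadir="$1"   # e.g. /var/lib/mysql
  local staging="$2"   # secure staging directory on the Droplet
  local archive="$3"   # output tarball, e.g. /backups/db-backup.tar.gz

  systemctl stop mysql                # shut the service down cleanly
  cp -a "$datadir" "$staging/mysql"   # copy the raw database files
  systemctl start mysql               # bring the service back up

  # Compress after restarting, so the database is only down for the copy.
  tar -czf "$archive" -C "$staging" mysql

  # Ship the archive off-site, e.g. (assumes a configured AWS CLI):
  # aws s3 cp "$archive" s3://example-backup-bucket/
}
```

You’d then fire this from CRON nightly, e.g. `0 2 * * * /usr/local/bin/db-backup.sh`.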

You could, of course, go for a slightly more complex setup and use MySQL replication with a small 2-Droplet cluster (Master/Slave). The master would accept both reads and writes, while the slave would only allow reads (the master would push changes to the slave as they happen).
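As a sketch, the relevant MySQL configuration for such a pair might look something like this – the server IDs, database name, and file paths are assumptions, and you’d still need to create a replication user and run `CHANGE MASTER TO` on the slave:

```ini
# Master -- /etc/mysql/my.cnf (hypothetical)
[mysqld]
server-id    = 1
log_bin      = mysql-bin
binlog_do_db = wordpress

# Slave -- /etc/mysql/my.cnf (hypothetical)
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
```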

Using the same CRON job and script, you would still back up the Master, though temporarily switch over to the IP/Hostname of the Slave so that your database(s) remain available throughout. The bash script could handle this rather easily.

i.e. after you’ve properly configured a MySQL Master/Slave setup, the bash script would:

1). Change the Database Host in each wp-config.php file to the Slave IP/Hostname, then;

2). Shutdown the Master Database Server, then;

3). Backup the MySQL Database directory, then;

4). Restart the Master Database Server, then;

5). Change the Database Host in each wp-config.php back to the Master IP/Hostname.

The end result is that once the database directory is backed up, the script switches back to the Master; thus, there’s only a brief period in which writes are restricted and, best of all, there’s no actual downtime – visitors to these sites can continue to browse as normal.
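Steps 1 and 5 above could be handled with a small helper that rewrites `DB_HOST` in every site’s wp-config.php – the sites root and the IPs in the usage comments are hypothetical:

```shell
# Hypothetical helper for steps 1 and 5: point every wp-config.php under
# a sites root at a given database host. Layout and hosts are assumptions.
switch_db_host() {
  local from="$1" to="$2" sites_root="$3"
  # In-place rewrite of define('DB_HOST', 'from') -> define('DB_HOST', 'to')
  find "$sites_root" -maxdepth 2 -name wp-config.php -exec \
    sed -i "s/'DB_HOST', *'$from'/'DB_HOST', '$to'/" {} +
}

# e.g. before the backup: switch_db_host 10.0.0.1 10.0.0.2 /var/www
#      after the restart: switch_db_host 10.0.0.2 10.0.0.1 /var/www
```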

You could even go a step further and write a conditional that checks the database host WordPress is using. If it’s localhost or the IP of your Master, no message is displayed; if it’s the IP/Hostname of the Slave, a small message is displayed at the top of each site explaining why certain functionality is temporarily restricted.