Automatic Snapshots via Zapier vs. DigitalOcean Weekly Backups

June 8, 2016 561 views
DigitalOcean Backups

Just a question about reliability, efficiency, etc...

So I've created a droplet that I'm planning to use for our customer websites. All of these websites are currently on a traditional VPS and will be migrated over in time. Both now and after the migration, each WordPress site has its own backup plugin that sends a backup of its files and database over to Amazon S3. My old host did not provide automated server-level backups, but with DigitalOcean I have the option of the weekly paid backups, which is fine.

I feel like I should have daily backups instead, so I've set up Zapier to make that happen, and it works with their zap app. Ok great! Free daily snapshots. Of course, the downside is that the droplet is shut down while the snapshot completes... only a few minutes at 2am... not a big problem.
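
For reference, the zap is essentially driving the DigitalOcean v2 API, so a cron job on another machine could do the same thing. The sketch below assumes a personal access token in $DO_TOKEN and the Droplet's ID in $DROPLET_ID, and it has to run somewhere other than the Droplet itself, since the Droplet powers off:

```bash
# Gracefully shut the Droplet down before snapshotting.
curl -s -X POST "https://api.digitalocean.com/v2/droplets/$DROPLET_ID/actions" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"shutdown"}'

# Wait for the shutdown action to finish (a real script would poll
# /v2/actions/<action_id> until its status is "completed"), then snapshot:
curl -s -X POST "https://api.digitalocean.com/v2/droplets/$DROPLET_ID/actions" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"snapshot","name":"nightly-'"$(date +%F)"'"}'

# Once the snapshot completes, power the Droplet back on:
curl -s -X POST "https://api.digitalocean.com/v2/droplets/$DROPLET_ID/actions" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"power_on"}'
```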

I suppose the questions in my mind are... Is the Zapier setup actually needed? Are the normal paid weekly backups via DigitalOcean sufficient, given that each WP site also has a backup plugin that sends its files and databases off-site?

Any thoughts and input would be welcome. Thanks!

1 Answer


When it comes to backups, as a general rule of thumb, don't put all your eggs in one basket. That doesn't mean you shouldn't use DigitalOcean; it does mean, however, that you should spread your backups out across multiple providers where and when possible.

The biggest issue, for me, is shutting down a Droplet, or any active instance. If you're running anything that uses a database (e.g. WordPress), simply shutting down the Droplet could result in data corruption (in cases where MySQL, MariaDB, or Percona is not shut down cleanly). When the Droplet comes back online, you may then need to repair tables. That's not a huge issue on small databases, but on larger ones (say 100MB to 500MB in size) it can be a pain and can take a while. It's also not something you want to do through a web interface such as phpMyAdmin -- it should always be done from the command line.
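
For what it's worth, the command-line repair is a one-liner (this assumes you can authenticate as a user with privileges on all databases):

```bash
# Check every table in every database and automatically repair any that
# are flagged as corrupt; prompts for the MySQL root password.
mysqlcheck --all-databases --auto-repair -u root -p
```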

For database-driven websites or applications, I'd recommend using a CRON job that fires off a bash script which performs the database backup: first shutting down the database service, copying the database files to a secure location on the Droplet, restarting the service, and then compressing the copy into a file that is sent offsite.
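
A minimal sketch of such a script -- the data directory, service name, backup path, and S3 bucket below are all assumptions to adapt to your own setup:

```bash
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)
SRC=/var/lib/mysql                  # default MySQL data directory
DEST=/root/backups/mysql-$STAMP

service mysql stop                  # shut down cleanly so the files are consistent
cp -a "$SRC" "$DEST"                # copy the raw data directory
service mysql start                 # bring the service right back up

# Compress the copy and remove the uncompressed version.
tar -czf "$DEST.tar.gz" -C "$(dirname "$DEST")" "$(basename "$DEST")"
rm -rf "$DEST"

# Ship it offsite -- this assumes the AWS CLI is installed and configured,
# but any offsite copy (scp, rsync, etc.) works just as well.
aws s3 cp "$DEST.tar.gz" s3://your-backup-bucket/mysql/
```

Wired up via CRON, e.g. `0 2 * * * /root/bin/db-backup.sh >> /var/log/db-backup.log 2>&1` to run it nightly at 2am.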

You could, of course, go for a slightly more complex setup and use MySQL replication by building a small two-Droplet cluster (Master/Slave). The Master would accept both reads and writes, while the Slave would only allow reads (changes on the Master are replicated to the Slave as they happen).
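
In rough terms, that means giving each server a unique ID, enabling binary logging on the Master, and pointing the Slave at it. The IPs, replication user, password, and log coordinates below are placeholders:

```bash
# my.cnf on the master:          my.cnf on the slave:
#   server-id = 1                  server-id = 2
#   log_bin   = mysql-bin          read_only = 1

# On the master, create an account the slave can replicate through:
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.2' IDENTIFIED BY 'changeme'; FLUSH PRIVILEGES;"

# On the slave, point it at the master (the log file and position come
# from running SHOW MASTER STATUS on the master) and start replicating:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_USER='repl', MASTER_PASSWORD='changeme', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=154; START SLAVE;"
```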

Using the same CRON job and script, you would still back up the Master, though temporarily switch over to the IP/Hostname of the Slave so that access to your database(s) never goes away entirely. The bash script can handle this rather easily.

i.e. After you've properly configured a MySQL Master/Slave setup, the bash script would (see the sketch after the list):

1) Change the Database Host in each wp-config.php file to the Slave IP/Hostname;

2) Shut down the Master database server;

3) Back up the MySQL database directory;

4) Restart the Master database server;

5) Change the Database Host in each wp-config.php back to the Master IP/Hostname.

The end result is that once the database directory is backed up, the script switches back to the Master. Thus there's only a brief period of time in which writes are restricted and, best of all, there's no actual downtime: visitors to these sites can continue to browse as normal.
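
Here's a sketch of those five steps as a bash script. The web root, IPs, and backup path are assumptions, and the sed expression should be tested against the exact DB_HOST line format in your wp-config.php files before relying on it:

```bash
#!/usr/bin/env bash
set -euo pipefail

WP_ROOT=/var/www                     # parent directory of the WordPress sites
MASTER=10.0.0.1
SLAVE=10.0.0.2
STAMP=$(date +%F)

# Rewrite DB_HOST in every wp-config.php from $1 to $2.
swap_db_host () {
  find "$WP_ROOT" -name wp-config.php \
    -exec sed -i "s/'DB_HOST', *'$1'/'DB_HOST', '$2'/" {} +
}

swap_db_host "$MASTER" "$SLAVE"      # 1) point the sites at the Slave (reads only)
service mysql stop                   # 2) shut the Master down cleanly
tar -czf "/root/backups/mysql-$STAMP.tar.gz" -C /var/lib mysql   # 3) back up the data directory
service mysql start                  # 4) restart the Master
swap_db_host "$SLAVE" "$MASTER"      # 5) point the sites back at the Master
```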

You could even go a step further and add a conditional that checks the Database Host WordPress is using. If it's localhost or the IP of your Master, no message is displayed; if it's the IP/Hostname of the Slave, a small message is displayed at the top of each site explaining why certain functionality is temporarily restricted.

  • Hi @jtittle,

    Thank you for your insights. It's a little over my head at the moment, but I will definitely look into it further. I already back up the individual sites plus their databases off-site to Amazon S3 individually on my other host, so that part is already taken care of.

  • I've done a slight variation on the MySQL backup that does not require downtime, by backing up one of the replication slaves (link to version 5.7 manual). This variation dispenses with the need to swap out the master and only requires stopping and restarting the slave to run the mysqldump command. Just note that changes occurring between when you back up your WP files and when you back up the database may not be 100% in sync, so you'll have to take steps (or not) to mitigate that discrepancy based on your disaster-recovery requirements.

    • @bluenotes - @gndo

      Using the method @gndo describes is definitely an option as well. Pulling a backup from the slave is also doable and, if set up correctly, resyncing shouldn't be an issue. That being said, it's always important to verify your backups and verify the master/slave syncing so that you don't end up with a backup that is long out of sync. A minimal sketch of the slave-side dump follows.
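
      The sketch assumes credentials live in ~/.my.cnf and that the commands run on the slave:

      ```bash
      # Pause the slave's SQL thread so the dump is a consistent
      # point-in-time copy, dump everything, then resume replication.
      mysql -e "STOP SLAVE SQL_THREAD;"
      mysqldump --all-databases --single-transaction > /root/backups/slave-$(date +%F).sql
      mysql -e "START SLAVE SQL_THREAD;"
      ```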
