Update the droplets in the load balancer

How do I keep all servers with the same content?

I have 3 droplets and I thought of copying the contents of the first one to the other 2. Is there any option that does this automatically?

I researched and did not find one, so I was thinking of doing it with scp, just copying the contents of the web server.
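As a sketch of that manual approach (assuming a web root of /var/www/html and the placeholder hostnames DROPLET_IP_02 and DROPLET_IP_03), rsync is usually a better fit than scp because it only transfers files that changed:

```shell
# Push the web root from this Droplet to the other two (placeholder hosts).
# --delete also removes files on the mirrors that were deleted on the source.
rsync -az --delete /var/www/html/ root@DROPLET_IP_02:/var/www/html/
rsync -az --delete /var/www/html/ root@DROPLET_IP_03:/var/www/html/
```

You'd still have to run this by hand (or from cron) after every change, which is exactly the gap lsyncd fills in the answers below.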



The configuration file for lsyncd would look something like this for multiple servers. The following is an example of what you’d use on the “master” server, or the primary.

settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

targetlist = {
    "root@DROPLET_IP_02:/path/to/data",
    "root@DROPLET_IP_03:/path/to/data"
}

for _, server in ipairs( targetlist ) do
    sync {
        default.rsync,
        source = "/path/to/data",
        target = server,
        rsync = {
            rsh = "ssh -i ~/.ssh/id_rsa"
        }
    }
end
In the above:

DROPLET_IP_02 and DROPLET_IP_03 would be the IPs or hostnames of the Droplets that you'd be mirroring data to.

/path/to/data under targetlist is the path on each remote Droplet where the mirrored content will be written.

/path/to/data under source is the local path where the data to be mirrored exists.

rsh = "ssh -i ~/.ssh/id_rsa" would need to be changed to the location of the private key that you’ll be using to login to the other servers.

If the source and destination are both the same (as would be the case in most configurations), then you’d just change /path/to/data to be the same in all three instances.

As an important note, you will need to use ssh-copy-id or manually log in to the servers from one another to complete the initial setup. If you don't, the connection will fail, as key-based authentication and host-key acceptance have to be done first.
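Concretely, that one-time key setup from the primary Droplet might look like the following (DROPLET_IP_02 and DROPLET_IP_03 are the same placeholder hostnames as in the config above):

```shell
# Generate a key pair if one doesn't exist yet (no passphrase, since lsyncd
# runs unattended), then install the public key on each mirror.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
ssh-copy-id -i ~/.ssh/id_rsa.pub root@DROPLET_IP_02
ssh-copy-id -i ~/.ssh/id_rsa.pub root@DROPLET_IP_03

# Verify that non-interactive login works from the account lsyncd runs as;
# this also records the mirrors' host keys in ~/.ssh/known_hosts.
ssh -i ~/.ssh/id_rsa root@DROPLET_IP_02 'echo ok'
ssh -i ~/.ssh/id_rsa root@DROPLET_IP_03 'echo ok'
```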

I just tested the above configuration using a DigitalOcean LB and 3x 1GB Droplets and it does work, though I've not set up multi-directional syncing just yet. To do that, essentially the same configuration would exist on each server, with lsyncd installed on each.

You would need to change the root@DROPLET_IP entries to make sure each server syncs to the correct targets, and that ~/.ssh/id_rsa is correctly set for each one as well.
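For example, the counterpart configuration on the second Droplet would be identical apart from its target list. This is only a sketch, using DROPLET_IP_01 as a placeholder for the primary's address:

```lua
-- lsyncd config on the second Droplet (placeholder hostnames):
-- same layout as on the primary, but targeting the other two servers.
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

targetlist = {
    "root@DROPLET_IP_01:/path/to/data",
    "root@DROPLET_IP_03:/path/to/data"
}

for _, server in ipairs( targetlist ) do
    sync {
        default.rsync,
        source = "/path/to/data",
        target = server,
        rsync = { rsh = "ssh -i ~/.ssh/id_rsa" }
    }
end
```

The third Droplet gets the same file again, with its targetlist pointing at the other two.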


If you want something relatively simple, I’d recommend using lsyncd.

Without going the storage-only Droplet route, where you'd deploy Droplets with Block Storage and use something such as GlusterFS, MooseFS, or XtreemFS, lsyncd is going to be the easiest.

That said, when it comes to bi-directional or multi-directional syncing, depending on how many servers are in play now and in the future, you'd need to set it up on each of the Droplets.

So something such as:

Droplet01 syncs to Droplet02 + Droplet03
Droplet02 syncs to Droplet01 + Droplet03
Droplet03 syncs to Droplet01 + Droplet02

It can be a little tricky, so take your time. The guide I’ve linked to covers setting up one server, so what you’d do is simply replicate that and change the destinations.

To avoid having to set up so much syncing across servers, you'd most likely need to set up something a little more advanced. I tried GlusterFS on my end and it's a major pain. Setting up 4x storage-only Droplets with 100GB Block Storage worked without any issues, though when it came to fail-over, GlusterFS failed badly.

It's supposed to be aware of the other servers in the cluster, though each time I brought down the primary, it dropped the mount and failed to remount one of the backup devices.

I even reached out to DO’s support team for help on this one as my experience with GFS is limited, though even they couldn’t make heads or tails of the rather cryptic error messages that resulted.