Can I share an attached volume between droplets?

August 24, 2016
Block Storage Ubuntu
By: lsmon714
I know I can share disk resources using NFS or SSHFS, but I wonder whether a volume can be attached directly to two droplets at the same time.
I know it sounds weird, but I am curious about it.

30 Answers

This is exactly my use case too!
It would be really useful - thanks!

Not currently. At this time a volume can only be mounted to one droplet at a time. We are looking into making it possible to mount a volume on more than one droplet in the future, but there is no ETA for that yet.

I noticed that DigitalOcean added load balancers recently, which is awesome, but I am wondering whether we can share storage while using them?

Hi, I need that exact use case as well.
I want to deploy a Docker Swarm and share a volume between all the nodes in the cluster. Is there an ETA for this?
Thanks!

+1.
We have been using DO for all development/testing. We have successfully deployed a scalable staging environment using Docker Swarm Mode. The only glitch is that we cannot scale the application container across multiple Swarm nodes, as these instances cannot share persistent storage (volumes) across the nodes. There are many suggestions from unofficial sources to use NFS, Flocker, GlusterFS, etc., but there is no clear direction or guideline from DigitalOcean.

Without an officially supported solution for shared storage from DO, we regretfully have to leave DO and join the AWS bandwagon.

Any updates on this one? I'm interested in trying this feature too! I hope the DO team will include it in their next update.

Hello.
I would also like to know. I intend to run a Docker cluster, and it would be useful to use Block Storage to centralize the files.

@felipo.antonoff I know it's been around 2.5 months, but if you're still looking for a solution, I would look at creating a droplet with a volume and then using NFS on that droplet to allow networked access from your other droplets. The Swarm nodes should be able to bind mount it. Depending on your desired setup, you may want to have an NFS container running, pinned to the node attached to the volume, which your other Swarm containers could mount as regular Docker volumes.
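The workaround above can be sketched roughly as follows. This is only an illustration: the mount point `/mnt/volume-nfs`, the private subnet `10.132.0.0/16`, the server IP `10.132.0.5`, and the volume name `nfs-data` are all hypothetical placeholders for your own values.

```shell
# --- On the droplet that has the volume attached (the NFS server) ---
# Assumes the volume is already mounted at /mnt/volume-nfs (hypothetical path).
sudo apt-get install -y nfs-kernel-server

# Export the volume's mount point to the private network
# (10.132.0.0/16 is a placeholder; restrict it to your own range).
echo "/mnt/volume-nfs 10.132.0.0/16(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# --- On each Swarm node (the NFS clients) ---
# 10.132.0.5 stands in for the NFS droplet's private IP.
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs 10.132.0.5:/mnt/volume-nfs /mnt/shared

# Alternatively, let Docker manage the NFS mount as a named volume
# that Swarm services can use like any other volume:
docker volume create --driver local \
  --opt type=nfs --opt o=addr=10.132.0.5,rw \
  --opt device=:/mnt/volume-nfs nfs-data
```

Note that this makes the NFS droplet a single point of failure, so it trades true shared block storage for a simpler setup.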

Hello,

I am interested in this requirement as well.
We use the file system as storage in our product. We are growing, and we need to be able to share block storage across droplets.

Thanks!

Hi, we are also interested in this feature.

Thanks!

This is an essential use case. I need to move to a CDN just because this feature is missing! It's disappointing that there has been no response despite all the comments here.

Any news on this?! This is a blocking issue for my company, and I'm trying to move us to DigitalOcean from AWS. We need to be able to continue serving content while rebooting/maintaining a server. Shared volume mounting would solve this for us.

Any news? It's been a few years. I am looking for this also.

I really enjoy using DO, thanks a lot for such amazing products!

But I also face this problem and can't scale my Swarm cluster because of this restriction. The ability to attach block storage to multiple hosts would be so helpful!


I'm interested in this feature too! Hope to see it as soon as possible.

+1 Would be glad if that use case were available.

+1 Would be a welcome addition to your feature set!

+1 too. If this feature were available, I would happily pay double the price for block storage :))

I see two possible solutions:

1) Use DigitalOcean Spaces and mount it using s3fs. Spaces has been experiencing some odd timeouts lately, though, so keep a backup at hand or be sure to put a CDN in front of Spaces; or
2) If you want more control, spin up a small droplet, attach a volume, install a self-hosted S3/Spaces-compatible server such as Minio, and mount it using s3fs or use it directly (if your system allows that).
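Option 1 can be sketched like this. The Space name `my-space`, the `nyc3` region endpoint, and the key placeholders are assumptions for illustration; substitute your own.

```shell
# Install the s3fs FUSE client (Ubuntu package name).
sudo apt-get install -y s3fs

# Store the Spaces access key pair (placeholders shown) in the
# format s3fs expects: ACCESS_KEY:SECRET_KEY
echo "SPACES_ACCESS_KEY:SPACES_SECRET_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the Space "my-space" (hypothetical name) from the nyc3 region.
sudo mkdir -p /mnt/space
sudo s3fs my-space /mnt/space \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o url=https://nyc3.digitaloceanspaces.com \
  -o use_path_request_style
```

The same mount command works against a self-hosted Minio endpoint (option 2) by pointing `-o url=` at your droplet instead. Keep in mind that an S3-backed FUSE mount behaves like object storage, not a real POSIX filesystem, so it suits read-heavy content better than databases or lock-dependent workloads.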

As far as I understand, mounting a single volume to multiple droplets isn't desirable because simultaneous writes to the same file could corrupt data.

+1

This feature is essential for clusters.
