Question

Can I share an attached volume between droplets?

Posted August 24, 2016 · 19.3k views
Ubuntu · Block Storage

I know I can share disk resources using NFS or SSHFS but I wonder if a volume can be shared directly between 2 droplets.
I know it sounds weird but I am curious about it.

3 comments
  • Please, this service is seriously needed… DigitalOcean should do something.

  • I just hit this show stopper… cannot share volumes!?
    You cannot seriously operate a cloud computing environment without shared storage.
    Without it, you cannot set up a high availability test environment.
    What is currently being done by DigitalOcean on shared storage? Is the only option to set up our own NFS server and share it among droplets?



54 answers

This function is still unavailable in 2019! I really need this, please.

I noticed that DigitalOcean added load balancers recently, which is awesome, but I am wondering if we can share storage while using them?

Exact use case too!
Would be really useful - thanks!

Hi, I need that exact use case as well.
I want to deploy a Docker Swarm and share a volume between all the nodes in the cluster. Is there an ETA for this?
Thanks!

+1 also need this feature

+1 this is a great feature that we need!
please add it!

+1 we really need this feature.

+1 for this feature.
As said, due to technical limitations, implementing this feature is impractical or even impossible, which I totally understand.
But what I don’t understand is DO’s inability to provide some alternative, like a managed NFS cluster or similar.
It’s not that hard, and it would still be useful for many.

  • Yes, I would like to be able to access files between droplets. I could try SSHFS.

    Doing this would be like trying to connect an IDE, SATA, or USB drive to two computers at once. Even if someone made some sort of weird connector cable, just because the cable would physically connect and fit into the respective ports, you would not get the drive working on both systems because of the protocols…

    For me, I use DigitalOcean for self-hosting, so using external NFS, database servers, etc. defeats the purpose. But I see there are developers (DO’s primary target market) who can’t be bothered with SysAdmin work, whereas I am a SysAdmin honin’ those skills (though I do code). A client of mine used managed hosting; I don’t see why, if they are competent enough to know how to code, they can’t configure their own servers. I guess they don’t have time and would rather spend it developing.

Not currently. At this time it is only possible to mount a volume to one droplet at a time. We are looking into making it possible to mount a volume on more than one droplet in the future but there is no ETA for that at this time.

Hi, we are also interested in this feature.

Thanks!

This is an essential use case. I need to move to a CDN just because I don’t have this feature! It’s disappointing that there is no response despite the comments here.

+1.
We have been using DO for all development/testing. We have successfully deployed a scalable staging environment using Docker Swarm Mode. The only glitch is that we cannot scale the application container across multiple Swarm nodes, as these instances cannot share persistent storage (volumes) across the nodes. There are many suggestions from unofficial sources to use NFS, Flocker, GlusterFS, etc., but there is no clear direction or guideline from DigitalOcean.

Without an officially supported solution for shared storage from DO, we have to regretfully leave DO and join the AWS bandwagon.

Any news on this?! This is a breaking issue for my company, and I’m trying to move us to Digital Ocean from AWS. We need to be able to continue serving content while rebooting/maintaining a server. Shared volume mounting would solve this for us.

Any news? It’s been a few years. I am looking for this also.

+1 Also need this feature.

Any updates on this one? I’m interested in trying this feature too! Hope the DO team will have this in their next update.

+1 too, if this function were available, I would pay double the price for block storage :))

+1, i’d love this too!

Any news? I’m so looking forward to it. :)

+1 Also need this feature.

+1

I would love to see that kind of feature too. Of course I agree with @jarland about the fact that you cannot attach a DO block storage volume to multiple droplets because of concurrent writes.

Maybe a managed NFS service would be more interesting (why managed, when NFS is easy to install? Because of bandwidth, replication, availability, etc.), like EFS in the AWS world. Makes sense?

Any chance DO will make it happen?

+1 also need this feature, it is currently a blocker for me

Wow! This is a serious deficiency. Not implementing this shouldn’t even be an option. It is REQUIRED by a lot of architectures. You can have each of your customers redesign their systems to work with this limitation, or you can fix it so it works like it should. I would recommend the second option. I’m a little perturbed I spent so much time bringing up a new site just to be blocked by this.

I could program around it, but I won’t. It’s not my job to work around your defects.

I have the same issue. I am trying to set up Mailu on my cluster; it is split up into multiple containers, one per service, yet they are configured to use the same PVC in ReadWriteMany mode. I can work around it by creating individual volumes, but then I reach the hard limit. Either way, DigitalOcean had better revise their Kubernetes offering, because ReadWriteMany is very common and prohibiting it makes zero sense. This should be available.

https://mailu.io/master/kubernetes/mailu/index.html#prequisites

https://raw.githubusercontent.com/Mailu/Mailu/master/docs/kubernetes/mailu/pvc.yaml

Give us ReadWriteMany, FFS, or stop offering Kubernetes on DigitalOcean, please. Thank you!

Well, today I hit the same brick wall :) “At the moment a volume can only be attached to a single Droplet.”

Btw if you have not voted for a feature like this you can do so here.
https://ideas.digitalocean.com/ideas/DO-I-739

Wow, this was extremely disappointing. I have been using DO, including block storage, for many years but never needed to scale out horizontally, until now. I just assumed that you could attach block storage to several Droplets. I thought that was one of the two primary use cases (the other being that you need more storage).

Deploying code to several servers every time is not only a lot more complex but also more error-prone, and it makes debugging a lot harder if you are not 100% sure that all the code has been distributed to all servers. Zero-downtime deploys would be almost impossible; even if you deploy in parallel, you would have different code on different servers for at least a few seconds on each deploy.

I’m seriously considering other options now…

It’s incredible that this feature still doesn’t exist. It kind of defeats a lot of the purpose of Kubernetes or other scaling solutions.

Hello.
I would also like to know. I intend to run a Docker cluster, and it would be interesting to use Block Storage to centralize the files.

  • @felipo.antonoff I know it’s been around 2.5 months, but if you’re still looking for a solution I would look at creating a droplet with a volume, and then using NFS on that droplet to allow networked access from your other droplets. The Swarm nodes should be able to bind mount it. Depending on your desired setup you may want to have an NFS container running, pinned to the node attached to the volume, which your other Swarm containers could mount as regular Docker volumes.
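
A rough sketch of the NFS-backed Docker volume approach described in the comment above, assuming the export lives at /mnt/my_volume on a droplet with the private IP 10.132.0.2 (both placeholders, as is the nginx service used for illustration):

```yaml
# docker-stack.yml — sketch only; the IP, export path, and service are placeholders
version: "3.7"

services:
  web:
    image: nginx:alpine
    volumes:
      - shared-data:/usr/share/nginx/html   # every replica sees the same files

volumes:
  shared-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.132.0.2,rw,nfsvers=4"
      device: ":/mnt/my_volume"
```

With a definition like this, each Swarm node mounts the NFS export itself whenever a task using shared-data is scheduled on it.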

Hello

I am interested in this requirement as well.
We use the file system as storage in our product. We are growing and we need to be able to share block storage across droplets.

thanks

I really enjoy using DO, thanks a lot for such amazing products!

But I also faced this problem and I can’t scale my Swarm cluster because of this restriction. The ability to attach block storage to multiple hosts would be so helpful!


I’m interested in this feature too! Hope to see it as soon as possible.

+1 Would be glad if that use case were available

+1 Would be a welcome addition to your feature set!

+1 Need this too.

I see two possible solutions:

1) Use DigitalOcean Spaces and mount it using S3FS, but Spaces has been experiencing some weird timeouts lately, so you must have a backup at hand or be sure to use a CDN in front of Spaces, or
2) If you want more control, you can spin up a small droplet, attach a volume, install an AWS S3 / DO Spaces clone such as Minio, and mount it using S3FS or use it directly (if your system allows that).
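
A minimal sketch of the s3fs mount for option 1, assuming a bucket named my-shared-bucket in the nyc3 region and credentials kept in ~/.passwd-s3fs (all placeholders; for option 2 you would point the URL at your own Minio droplet instead):

```bash
# Store the Spaces (or Minio) access key and secret for s3fs — placeholder credentials
echo "ACCESS_KEY:SECRET_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket onto /mnt/shared on each droplet that needs it
sudo mkdir -p /mnt/shared
sudo s3fs my-shared-bucket /mnt/shared \
  -o url=https://nyc3.digitaloceanspaces.com \
  -o passwd_file=$HOME/.passwd-s3fs \
  -o allow_other
```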

As far as I understand, it’s not desirable to mount a single volume to multiple droplets due to possible data corruption that might happen during simultaneous writes to the same file.

+1

This feature is essential for clusters

I’d like to see this as well, to share a backup volume between several droplets.

Hey friends!

Providing an update on this. While mounting a volume to more than one droplet sounds great in theory, this does not work well with the design of volumes / block storage. The design is that the droplet will see the storage as a locally attached drive, exactly the same as a physical hard drive placed into the computer. For the same reasons you cannot place a drive in one computer and also attach it to the computer next to it, block storage cannot be treated that way either. The operating systems are simply not built to function that way; the data would be corrupted.

This isn’t to say that a future product iteration would not function differently or that a method of working around this will not exist later. However, it is to say that right now this is not the best way to approach the concept.

Now there may be a way to do something like this with object storage. There are definitely ways to add another layer on top of the droplet which has the volume attached, that can be used to mount it elsewhere. SSHFS was one thing mentioned here, NFS being another.
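
For instance, the SSHFS route is a single command on each droplet that needs access, assuming the volume is mounted at /mnt/my_volume on a droplet reachable at 10.132.0.2 (both placeholders):

```bash
# Mount the volume-owning droplet's /mnt/my_volume onto /mnt/shared over SSH
sudo apt-get install -y sshfs
sudo mkdir -p /mnt/shared
sudo sshfs -o allow_other,reconnect \
  root@10.132.0.2:/mnt/my_volume /mnt/shared
```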

Hope that helps provide some clarity on this :)

Kind Regards,
Jarland

  • This does provide more clarity in the sense that nothing will change in the near future. It does not provide any solutions that were not yet stated or are not painful to implement. DigitalOcean misses so much opportunity because scaling services that need shared storage is a pain. I think if there was at least an officially supported direction on how to do shared volumes (for Docker Swarm) then we’d be much better off.

  • Hey Jarland,

    I do understand the complexity of the issue and that one couldn’t just attach the same volume to multiple machines. Though, I think what’s being asked for is a way of achieving more flexible, shared block storage, not necessarily using the already existing volumes.

    “Now there may be a way to do something like this with object storage. There are definitely ways to add another layer on top of the droplet which has the volume attached, that can be used to mount it elsewhere. SSHFS was one thing mentioned here, NFS being another.”

    This is how we’re actually managing it: attaching a volume to a droplet and sharing it through NFS. There are some issues with this, such as a single point of failure on the droplet holding the volume (yes, I know there are such things as NFS clusters). Maybe one way to get around this would be for you to offer a similar solution, but fully managed? I believe that one of the reasons why people are asking for a shared block storage solution is because it is not easy to set up and manage one on their own.

  • It’d be great if it could just be treated like a NAS (network attached storage) volume.

    For my scenario I really only need to be able to read from the droplets so corruption really wouldn’t be an issue. A (hack) workaround would be to mount it on one droplet then share it to the others via NFS, but if the droplet that “owns” the volume goes down then everything goes down. This is not acceptable.

    I’d love a volume that wasn’t “owned” by any droplet, but could be available like a NAS volume on multiple droplets.

Any news on this? This would solve so many problems that I’m facing! :D

+1 On this. This is huge. I’m a massive advocate of DO and only today realised you can’t map a volume to two droplets. I can’t even begin to describe how much I assumed that you could do this. I had some problems solved in my head based on being able to do this.

DO - I implore you to make this work by whatever means you can.

DO, wake up, people are waiting for it!

I wanted to start using a Kubernetes cluster, but having volumes without “ReadWriteMany” access just makes it pointless in terms of scaling. From what I understand so far, you are limited to a single node with your volume.

  • I ran into this same issue today! Hahaha, I was certain from what I’ve read about Kubernetes that it would be the solution to our scaling problems. What’s the point if you can’t share a file system across many pods :D I created 6 pods as replicas and only half of them work; the other half don’t, since they’re on the other node.

    I’m looking into

    1. either Azure or Amazon’s elastic file thingy (not fully sure if they support multi read/write yet)
    2. I read somewhere that Portworx volumes support multi read/write.
    3. GlusterFS (ugh) or possibly some other NFS thing.
    • I have the same issue. Also, there is a soft limit on the number of volumes (10) and a hard limit on volumes attached to one Droplet (7). Shared volumes and these limits are really a pain in the a** when setting up a Kubernetes cluster.

+1 from me, it’s not possible to mount an external volume on 1 device and then share it with NFS to other devices

+1 Is there a succinct summary of options absent this feature, DO friends?

OK, look. Since this has been requested and requested and +1’d so many times I think it’s worth dispelling a few myths and providing an idea of how you could actually achieve what you’re after.

First: a block device is not just a “magic storage thing”. It’s like a physical hard drive and, as one commenter said, “you can’t plug one hard drive into two computers”. Another way to think of it: while one machine has the block device attached, it effectively blocks every other machine from using it.

Second: you can have your cake and eat it, depending on how technical you want to get.

Your easiest bet is to use a droplet with an attached block device (which has redundancy, if you want it) and then share that with your other droplets using NFS. Out of the box, NFS is fairly straightforward to configure, but its performance can vary depending on what you’re doing with it. Running lots of “slave” web servers all reading off an NFS share for their content is fine IMHO, but lots of nodes writing to it = less good.
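
As a rough sketch of that setup, assuming the volume is mounted at /mnt/my_volume on the droplet that owns it (private IP 10.132.0.2) and the other droplets share the 10.132.0.0/16 private network (all placeholders; Ubuntu packages assumed):

```bash
# On the droplet with the volume attached (the NFS "server"):
sudo apt-get install -y nfs-kernel-server
echo "/mnt/my_volume 10.132.0.0/16(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On every droplet that should see the same data (the NFS "clients"):
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs 10.132.0.2:/mnt/my_volume /mnt/shared
```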

DRBD [Distributed Replicated Block Device] is phenomenal, but probably not something I’d consider inside a VM environment. I used this when I ran a digital agency a while back: it allows you to create partitions on separate machines (100GB on machine A and the same on B) and it’ll synchronise between both machines and provide failover support without a central storage area (which is why we used it). It was [at the time] rock solid, but unless you find a decent guide on how to set it up, it can feel a bit overwhelming.
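
To give a rough idea of the shape of a DRBD setup, a resource file looks something like this (hostnames, backing disks, and addresses below are placeholders, not a tested configuration):

```
# /etc/drbd.d/r0.res — sketch only; adjust hostnames, disks, and IPs to your nodes
resource r0 {
  net {
    protocol C;              # fully synchronous replication
  }
  on droplet-a {
    device    /dev/drbd0;
    disk      /dev/vda2;     # backing partition on this node
    address   10.132.0.2:7789;
    meta-disk internal;
  }
  on droplet-b {
    device    /dev/drbd0;
    disk      /dev/vda2;
    address   10.132.0.3:7789;
    meta-disk internal;
  }
}
```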

There are also other options like GlusterFS (lordy!) but this (for me) is the fog of war and I haven’t touched it much.

In short: you cannot have a block device which connects to two things, by definition, but you can get something close.

by John Kwiatkoski
With the digitalocean-csi, DigitalOcean Block Storage, and the NFS protocol, you can make a ReadWriteMany (RWX) Persistent Volume for Kubernetes. In this tutorial, you will configure dynamic provisioning for NFS volumes within a DigitalOcean Kubernetes (DOKS) cluster, in which the exports are stored on DigitalOcean Block storage volumes. You will then deploy multiple instances of a demo Nginx application and test the data sharing between each instance.
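
Once an NFS provisioner like that is running in the cluster, consuming it is just an ordinary claim. A minimal sketch, assuming the tutorial’s StorageClass is named nfs (the class name and size are placeholders):

```yaml
# pvc-rwx.yaml — sketch; storageClassName must match the NFS provisioner's StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany          # pods on different nodes can all mount this claim
  storageClassName: nfs      # assumed name of the NFS-backed StorageClass
  resources:
    requests:
      storage: 5Gi
```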