Configuring nfs-server with DO Block Storage to support replication in kubernetes

The service works with 1 replica only, but when I try to scale the service I get a Multi-Attach error for the volume. I think this has something to do with DO Block Storage, which supports RWO (ReadWriteOnce) only. Is there a way to make it work?


Bobby Iliev
Site Moderator
January 9, 2021
Accepted Answer

Hi there @jthegreat,

I believe that NFS is not really needed. I’ve recently tested this and it seems to work as expected with a PersistentVolumeClaim backed by a DigitalOcean Block Storage volume.

You can follow the steps on how to do that here:

Let me know how it goes. Regards, Bobby
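As a rough sketch of the approach described above — assuming the DigitalOcean CSI driver is installed (it is by default on DigitalOcean Kubernetes) and exposes its `do-block-storage` storage class — a claim on Block Storage might look like this (names here are illustrative):

```yaml
# Hypothetical example: a PVC backed by DigitalOcean Block Storage.
# Note accessModes is ReadWriteOnce — a block storage volume can only
# be attached to one node at a time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
```

This works as long as the workload stays at one replica (or all replicas land on the same node); scaling across nodes is what triggers the Multi-Attach error from the question.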

answer from DO support:

Thank you for reaching out!

I understand your concern here about the ReadWriteMany. We won’t be able to support ReadWriteMany volumes because of some inherent limitations of our Block storage product. Our Engineering team is working towards the solution/workaround for this ReadWriteMany issue. However, I won’t be able to provide a specific ETA about the same.

Our do-block-storage product does have a few technical limitations that can cause trouble for some use cases. You can find more information on these limitations in our documentation:

You can work around these limitations by using a different protocol to share and mount the storage, for example by exporting the storage across the cluster over NFS.

There is a helm chart ‘nfs-server-provisioner’ that will create a containerized NFS server. You can then back the NFS server with do-block-storage and use NFS exports for shared-storage use cases, or to get around any mounting limitations. You can find more information and instructions on the helm chart and the helm project in the links below. nfs-server-provisioner:
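As a hedged sketch of installing that chart — assuming the legacy `stable` Helm repository hosting `nfs-server-provisioner`, and backing its persistence with a DO Block Storage volume via the `do-block-storage` storage class — the install might look like:

```shell
# Assumption: the chart is available from the archived stable repo;
# your repo/chart source may differ.
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Back the NFS server's own data with a DO Block Storage volume.
helm install nfs-server stable/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=do-block-storage \
  --set persistence.size=10Gi
```

The key idea is that only the NFS server pod itself mounts the RWO block volume; everything else mounts the storage over NFS, which has no single-node attach restriction.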


Currently, setting up NFS is the workaround for RWX with PVCs. However, the nfs-server deployment described in this tutorial is not highly available, and is therefore not recommended for production use. If the node running it fails, the storage would be down until the nfs-server pod is redeployed by Kubernetes. There are more production-ready setups out there, but this still stands as an easy and quick way to get up and running with RWX volumes during development or when testing application proofs of concept.

The helm chart above is good for testing proofs of concept or low-stakes workflows. The reason is that it is not highly available and thus not recommended for production. If your nfs-server pod goes down for any reason, all of your storage exports become stale, which can lead to issues, and it typically doesn’t recover automatically — it needs a manual pod restart to get back up and running.
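For completeness, a sketch of how a workload would then consume the NFS-backed storage — assuming the provisioner registered its default `nfs` storage class (the name may differ in your setup):

```yaml
# Hypothetical example: an RWX claim served by the nfs-server-provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany   # RWX works here because the volume is exported over NFS
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs   # class created by nfs-server-provisioner
```

Multiple replicas scheduled across different nodes can mount this claim simultaneously, which avoids the Multi-Attach error.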

There are some other setups out there; one such project is highly available and allows for RWX volumes, but its configuration and setup is a bit more involved.
