Question

Configuring nfs-server with DO Block Storage to support replication in kubernetes

Posted January 7, 2021 739 views
Block Storage · Kubernetes

The service works with 1 replica, but when I try to scale it up I get:
Multi-Attach error for volume
I think this has something to do with DO Block Storage, which supports ReadWriteOnce (RWO) access only.
Is there a way to make it work?

1 comment


2 answers

Hi there @jthegreat,

I believe that NFS is not really needed here. I’ve recently tested this and it seems to work as expected with a PersistentVolumeClaim and a DigitalOcean Block Storage volume.

You can follow the steps on how to do that here:

https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/

Let me know how it goes.
Regards,
Bobby
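For reference, a minimal PersistentVolumeClaim following the linked guide might look like the sketch below (the claim name and size are placeholders, not from the original thread):

```yaml
# Hypothetical example: a PVC backed by DigitalOcean Block Storage.
# "my-app-data" and the 5Gi size are illustrative placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce        # the only access mode DO Block Storage supports
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
```

Applying this with `kubectl apply -f` provisions a Block Storage volume that a single node can attach and mount.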

  • What we are trying to achieve is
    a single Volume shared by multiple replicas

    • Hello,

      Yes, a PersistentVolumeClaim should work in this case.

      Regards,
      Bobby

      • Hi @bobbyiliev , I’m trying to solve a similar issue and if you could provide assistance it would be of great help.

        I have a PVC with Storage Class “do-block-storage”, and I think the setting “Access Mode: ReadWriteOnce” is what causes this, right?

        The guide you linked says:
        “accessModes must be set to ReadWriteOnce. The other parameters, ReadOnlyMany and ReadWriteMany, are not supported by DigitalOcean volumes.”

        To enable multiple Pods to access the same Volume/PVC, shouldn’t this use a different access mode?

        • Hi there,

          Do you get the error only if the pods are scheduled on different nodes?

          I believe that the ReadWriteOnce indicates that the PVC would be mounted to 1 node only, but could then be used by multiple pods on that node.

          Regards,
          Bobby
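          To illustrate that point: if several replicas need to share one RWO volume, one sketch is to co-schedule them on the same node with pod affinity, so the volume only ever attaches to a single node. All names here (`web`, `web-data`, the nginx image) are hypothetical, not from this thread:

          ```yaml
          # Sketch: keep all replicas on one node so a ReadWriteOnce PVC can be shared.
          # Note: this trades away node-level redundancy.
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: web
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: web
            template:
              metadata:
                labels:
                  app: web
              spec:
                affinity:
                  podAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                      - labelSelector:
                          matchLabels:
                            app: web          # self-affinity: replicas follow each other
                        topologyKey: kubernetes.io/hostname
                containers:
                  - name: web
                    image: nginx:1.21
                    volumeMounts:
                      - name: data
                        mountPath: /usr/share/nginx/html
                volumes:
                  - name: data
                    persistentVolumeClaim:
                      claimName: web-data     # an existing ReadWriteOnce PVC
          ```

          The scheduler places the first replica freely and then pins the rest to the same node; if that node fails, all replicas and the volume go down together.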

answer from DO support:

Thank you for reaching out!

I understand your concern here about ReadWriteMany. We won’t be able to support ReadWriteMany volumes because of some inherent limitations of our Block Storage product. Our Engineering team is working towards a solution/workaround for this ReadWriteMany issue; however, I won’t be able to provide a specific ETA.

Our do-block-storage product does have a few technical limitations that can cause trouble for some use cases. You can find more information on these limitations in our documentation:

https://www.digitalocean.com/docs/kubernetes/overview/#persistent-data

You can work around these limitations for example by using a different protocol to share and mount the storage. An example is exporting the storage throughout the cluster using other means such as the NFS protocol.

There is a helm chart, ‘nfs-server-provisioner’, that will create a containerized NFS server. You can then back the NFS server with do-block-storage and use NFS exports for shared-storage use cases or to get around any mounting limitations. You can find more information and instructions on the helm chart and the helm project in the links below.
nfs-server-provisioner: https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner

helm: https://helm.sh/docs/using_helm/#quickstart
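Putting the pieces together, the workaround might look like the sketch below. It assumes helm v2 syntax (matching the quickstart linked above) and the `persistence.*` values documented in the chart’s README; the release name `nfs-server` and PVC name `shared-data` are illustrative:

```shell
# 1. Install the NFS provisioner, backing its own storage with do-block-storage
#    (values assumed from the nfs-server-provisioner chart docs; verify there):
helm install stable/nfs-server-provisioner --name nfs-server \
  --set persistence.enabled=true \
  --set persistence.storageClass=do-block-storage \
  --set persistence.size=10Gi

# 2. Claim ReadWriteMany storage through the StorageClass the chart exports
#    (named "nfs" by default):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # now possible, via NFS on top of block storage
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs
EOF
```

Pods on any node can then mount `shared-data` simultaneously, since the NFS server handles the sharing while only it attaches the underlying block volume.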

Currently, setting up NFS is the workaround for RWX using a PVC. However, the deployment of the nfs-server described here is not highly available and is therefore not recommended for use in production: if the node fails, the storage would be down until the nfs-server pod is redeployed by Kubernetes. There are more production-ready setups out there, but this still stands as an easy and quick way to get up and running during development or when testing application proofs of concept for RWX volumes.

The helm chart above is good for testing a proof of concept or for low-stakes workflows, precisely because it is not highly available and thus not recommended for production. If your nfs-server pod goes down for any reason, all of your storage exports become stale, which can lead to issues, and it typically doesn’t recover automatically but needs a manual pod restart to get back up and running.

There are some other setups out there, like the https://rook.io/ project, that are highly available and allow for RWX volumes, but their configuration and setup is a bit more in depth.