Kubernetes PVC ReadWriteMany access mode alternative

January 3, 2019 3.6k views

Hello Everyone,

According to the DigitalOcean documentation on PVCs, a PVC can only have the ReadWriteOnce access mode, which means the PVC is accessible by only one pod/node.
However, I have an app that I want to scale across several nodes: each pod on its own node, plus a load balancer for traffic. The problem is that each pod needs access to shared media files (images etc., both read and write), but with ReadWriteOnce, pods cannot access the PVC from different nodes. This problem could easily be solved with the ReadWriteMany access mode for PVCs.

Is there a way to build a volume on a DO Kubernetes cluster that could be shared between Kubernetes nodes?
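For context, the access mode is declared in the claim spec; this is a sketch of the kind of claim that would be needed (the claim name and storage size are placeholders):

```yaml
# Hypothetical claim illustrating the desired access mode;
# "media-shared" and the storage size are placeholder values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-shared
spec:
  accessModes:
    - ReadWriteMany   # not supported by the DO block-storage provisioner
  resources:
    requests:
      storage: 10Gi
```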

8 Answers

I was myself considering some sort of s3fs mount with Spaces, built into my Docker container. But the ideal situation is a ReadWriteMany mount, of course!

I also came across this today: https://github.com/CTrox/csi-s3. It is not working on Kubernetes 1.12 for now, but if that is fixed, it could also be a solution.

Hello Lennard, thanks, I'll try it out.

I am also thinking of creating a pod with an NFS server and sharing it between nodes.


I will provide feedback about my research
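A rough sketch of the in-cluster idea, assuming an NFS server container image is available (the image name, labels, and export path here are all illustrative, not tested on DO):

```yaml
# Illustrative only: the image name, labels, and export path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server-pod
  labels:
    app: nfs-server
spec:
  containers:
    - name: nfs-server
      image: k8s.gcr.io/volume-nfs:0.8   # example NFS server image
      ports:
        - name: nfs
          containerPort: 2049
      securityContext:
        privileged: true   # NFS kernel exports require privileges
      volumeMounts:
        - name: export
          mountPath: /exports
  volumes:
    - name: export
      emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    app: nfs-server
  ports:
    - port: 2049
```

Pods on other nodes would then mount the service's cluster IP as an NFS volume; whether this works depends on the nodes' kernel NFS support.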

  • Thanks @artemeb5c59b7a9928f2362137 for the link; it seems like an interesting solution. Let us know how that works for you.

    • For what it’s worth, I did try this, but could not get it to work.

      kubectl logs nfs-server-pod
       * Not starting NFS kernel daemon: no support in current kernel.
      • Yes, I also ran into this. At the time I was testing it, the kube nodes did not have NFS support pre-installed, and it looks like you need to manually install it on the cluster directly to make it work. That was not a solution for me, as we want it to work out of the box without additional cluster tuning.
        Try setting up NFS on a Docker droplet and consuming it on the cluster side; it works OK for me.

I am curious, did you get any further? I now run pods on dedicated nodes with nodeSelector and labels.

@jwdobken I have created an NFS server on a separate droplet and am using it as a shared volume for the pods that require it; it works properly.
Previously I tried to set up NFS inside the cluster, but decided to first check on a separate droplet whether NFS matches my requirements (does it introduce any locking issues, etc.). Additionally, it was easier to configure the backups that I must have.
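To consume an external NFS droplet from the cluster, the usual Kubernetes pattern is a PersistentVolume of type `nfs` plus a matching claim. A sketch, where the server IP and export path are placeholders for the droplet's actual values:

```yaml
# Placeholder server address and export path.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-media
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5        # droplet's private IP (placeholder)
    path: /exports/media    # exported directory (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to the pre-created PV, not the default class
  resources:
    requests:
      storage: 10Gi
```

Multiple pods on different nodes can then mount the same claim, since NFS supports ReadWriteMany.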

  • I’m considering the same, but I’m curious: how is bandwidth into and out of the NFS droplet charged? I know DO doesn’t charge for bandwidth inside the cluster, but I haven’t seen anything about this sort of layout.

    I would potentially have approximately 500 public-facing nodes attached to several of these NFS pods; it would be painful if that bandwidth were charged, even at pennies per gigabyte.

Take a look at the DigitalOcean section here: https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3

I am using csi-s3 on DigitalOcean Spaces with the Goofys mounter to achieve ReadWriteMany.
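For anyone trying the same, the StorageClass would look roughly like this. This is a sketch based on my reading of the csi-s3 README; the provisioner name and parameter keys may differ between driver versions, and the Spaces endpoint and credentials must be configured in the driver's secret:

```yaml
# Sketch: provisioner name and parameter keys may vary by csi-s3 version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-s3-goofys
provisioner: ch.ctrox.csi.s3-driver
parameters:
  mounter: goofys   # goofys generally gives better throughput than s3fs
```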

Going to give @lex19's approach a try and see how it works; in our case we only have a few writes.
