Question

Kubernetes PVC ReadWriteMany access mode alternative

Posted January 3, 2019 7.2k views
Kubernetes

Hello Everyone,

According to the DO documentation on PVCs, a volume can only have the ReadWriteOnce access mode, which means the PVC is accessible by only one pod/node.
However, I have an app that I want to scale across several nodes: each pod on its own node, plus a load balancer for traffic. The problem is that each pod needs access to shared media files (images etc., read and write), but with ReadWriteOnce the pods cannot access the PVC from different nodes. This problem could be easily solved with the ReadWriteMany access mode for the PVC.

Is there a way to build a volume on a DO Kubernetes cluster that can be shared between Kubernetes nodes?
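For illustration, this is the kind of claim the question is after: a hypothetical manifest requesting ReadWriteMany against DO's `do-block-storage` class. Since DO block storage only supports ReadWriteOnce, a claim like this would simply stay `Pending`, which is why an alternative backend is needed:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-media
spec:
  # ReadWriteMany is what multi-node sharing requires,
  # but do-block-storage only supports ReadWriteOnce,
  # so this claim would never bind.
  accessModes:
    - ReadWriteMany
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi
```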


7 answers

Take a look at the Digital Ocean section here: https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3

I am using s3-csi on DigitalOcean Spaces with the Goofys mounter to achieve ReadWriteMany.

I was myself considering a sort of S3FS mount with Spaces, built into my Docker container. But the ideal situation is a ReadWriteMany mount, of course!

I also came across this today: https://github.com/CTrox/csi-s3. It is not working on Kubernetes 1.12 for now, but if that is fixed, it could also be a solution.
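If csi-s3 does work on your cluster version, a StorageClass along the following lines should select the Goofys mounter mentioned above. This is a sketch based on the project's README; the provisioner name and parameters should be verified against the exact release you deploy:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
# Provisioner name as used by the ctrox/csi-s3 driver;
# check the README of your deployed version.
provisioner: ch.ctrox.csi.s3-driver
parameters:
  # Goofys is one of several supported mounters.
  mounter: goofys
```

PVCs created with `storageClassName: csi-s3` and `accessModes: [ReadWriteMany]` would then be backed by a Spaces bucket.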

Hello Lennard, thanks I’ll try it out.

I am also thinking of creating a pod with NFS and sharing it between nodes:

https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html
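Along the lines of that article, a shared NFS volume boils down to a PersistentVolume pointing at the NFS server plus a ReadWriteMany claim bound to it. The server address and export path below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-media
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Placeholder: IP of the droplet/pod running the NFS server
    server: 10.0.0.5
    # Placeholder: exported directory on that server
    path: /exports/media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-media-claim
spec:
  accessModes:
    - ReadWriteMany
  # Empty storageClassName so the claim binds to the PV above
  # instead of triggering dynamic provisioning.
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
```

Multiple pods on different nodes can then mount `nfs-media-claim` simultaneously.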

I will provide feedback about my research

I am curious, did you get any further? I now run pods on dedicated nodes with nodeSelector and labels.

@jwdobken I have created an NFS server on a separate droplet and am using it as a shared volume for the pods that require it; it works properly.
Previously I tried to set up NFS inside the cluster, but decided to first check on a separate droplet whether NFS matches my requirements (does it give any locks, etc.). Additionally, it was easier to configure the backups that I must have.

  • I’m considering the same, but I’m curious: how is bandwidth into and out of the NFS droplet charged? I know DO doesn’t charge for bandwidth inside the cluster, but I haven’t seen anything about this sort of layout.

    I would potentially have approx. 500 public-facing nodes attached to several of these NFS pods; it would be awful if that bandwidth were charged, even in pennies.

Going to give @lex19’s approach a try and see how it works; in our case we only have a few writes.
