Question

Shared storage on Digital Ocean Kubernetes

I have been experimenting with the early access of Kubernetes on DigitalOcean. In GKE, for example, it’s easy to set up a standard VM with an NFS server configured and then install the Helm chart for nfs-client-provisioner, so you can have shared storage among pods (for things like redundant Django apps running in different pods). I tried this with DO and it did not work, even when I monkeyed with the firewall settings for the k8s nodes.
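
For reference, the GKE setup was roughly the following, using the nfs-client-provisioner chart from the stable Helm repo (the server address and export path here are just placeholders for whatever the NFS VM exposes):

helm install stable/nfs-client-provisioner --name nfs-client \
  --set nfs.server=10.128.0.5 \
  --set nfs.path=/export/shared

That gives you an "nfs-client" storage class, and PVCs created against it can be mounted ReadWriteMany by multiple pods.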

I am not aware of another inexpensive shared storage option, and the standard PVCs offered by DO don’t permit ReadWriteMany. Does anyone have a better solution?
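
To make the limitation concrete, a claim like the sketch below (the name and size are arbitrary) is what a shared Django media volume would need, and it is exactly what the do-block-storage class won’t satisfy:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
EOF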



Accepted Answer

DigitalOcean support comes through again: they let me know that they have only just started deploying the NFS tools on worker nodes, and that if I deployed a cluster with k8s version 1.11.3-do.1 it should work. I tried again today and nfs-server-provisioner works now, as does rook.io with CephFS.
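
For anyone trying to reproduce this, a minimal nfs-server-provisioner setup looks roughly like the following; I’m quoting the stable chart’s values from memory, so double-check them, and the backing volume size is just an example:

helm install stable/nfs-server-provisioner --name nfs-server \
  --set persistence.enabled=true \
  --set persistence.storageClass=do-block-storage \
  --set persistence.size=100Gi

The chart exposes an "nfs" storage class backed by a DO block storage volume, and PVCs created against it can be mounted ReadWriteMany from any node.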

Quick bonnie++ test, two different clusters, same worker node configuration:

CephFS via Rook:

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
magical-ardinghe 4G   547  89 37557   4 24877   3   946  93 88786   5  3006  44
Latency               266ms    2444ms    3881ms   25122us     257ms     295ms
Version  1.97       ------Sequential Create------ --------Random Create--------
magical-ardinghelli -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   727   1 +++++ +++   481   1   794   2  1881   2   456   1
Latency              1128ms   23441us     672ms    1445ms   10984us     173ms
1.97,1.97,magical-ardinghelli-3l9v,1,1544812020,4G,,547,89,37557,4,24877,3,946,93,88786,5,3006,44,16,,,,,727,1,+++++,+++,481,1,794,2,1881,2,456,1,266ms,2444ms,3881ms,25122us,257ms,295ms,1128ms,23441us,672ms,1445ms,10984us,173ms

For NFS:

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nfs-busybox-dfdb 4G  1000  95 31489   2 135902  13  1287  93 229139  11  2707  33
Latency             19663us   36927ms     182ms     117ms   80444us   27791us
Version  1.97       ------Sequential Create------ --------Random Create--------
nfs-busybox-dfdb5c6 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1157  14 23094  15  1565  20   249   3  4797  22  1559  19
Latency              9404us   29618us   10571us     662ms    4625us   17487us
1.97,1.97,nfs-busybox-dfdb5c685-vckmj,1,1544823234,4G,,1000,95,31489,2,135902,13,1287,93,229139,11,2707,33,16,,,,,1157,14,23094,15,1565,20,249,3,4797,22,1559,19,19663us,36927ms,182ms,117ms,80444us,27791us,9404us,29618us,10571us,662ms,4625us,17487us
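
In case it helps anyone compare, both runs were done from a test pod with the shared volume mounted; the bonnie++ invocation was something along these lines (the mount path is from memory):

bonnie++ -d /mnt/shared -s 4G -u root

These are single runs, so treat the absolute numbers as rough.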

Good thinking to try nfs-server-provisioner. I suspected that the nfs tools packages were not installed on the nodes. I wish we could ssh into the nodes; that would have answered the question much more quickly.

Any other ideas for shared storage? I considered rook.io and fired up a cluster just to test that but couldn’t get it working. Also there are big scary warnings on the Ceph site saying that CephFS is not for production use.

Looks like the workers just don’t have the nfs tools needed to mount NFS. Both nfs-server-provisioner over “do-block-storage” and an external nfs server (already used on my old k8s cluster) failed to mount.
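
Since we can’t ssh in, one workaround I know of for poking at a worker is a privileged pod pinned to that node, then nsenter into the host namespaces. Something like the sketch below (node name, image, and pod name are purely illustrative) should show whether mount.nfs is present on the node:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
spec:
  nodeName: my-worker-node   # pick one from 'kubectl get nodes'
  hostPID: true
  containers:
  - name: shell
    image: debian:stretch
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
EOF
kubectl exec node-shell -- nsenter -t 1 -m -- which mount.nfs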
