I have been experimenting with the early access of DigitalOcean Kubernetes. In GKE, for example, it’s easy to set up a standard VM with an NFS server configured and then install the Helm chart for nfs-client-provisioner, so you can have shared storage among pods (for things like redundant Django apps running in different pods). I tried this with DO and it did not work, even when I monkeyed with the firewall settings for the k8s nodes.
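For reference, the provisioner install on GKE looks roughly like this (a sketch, not my exact commands; the NFS server address and export path are placeholders for whatever your NFS VM exposes):

```sh
# Dynamic NFS provisioner from the stable Helm repo (Helm 2 era).
# nfs.server / nfs.path are placeholders for the NFS VM's address and export.
helm install stable/nfs-client-provisioner \
  --set nfs.server=10.128.0.2 \
  --set nfs.path=/srv/nfs
```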
I am unaware of another inexpensive shared-storage option; the standard PVCs offered by DO don’t permit ReadWriteMany. Does anyone have a better solution?
Accepted Answer
DigitalOcean support comes through again: they let me know that they have only just started deploying the NFS tools on worker nodes, and that if I deployed a cluster with k8s version 1.11.3-do.1 it should work. I tried again today, and nfs-server-provisioner now works, as does rook.io with CephFS.
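If anyone wants to reproduce this, the nfs-server-provisioner setup is roughly the following (a sketch; the persistence values are assumptions, and the chart's default StorageClass name may differ):

```sh
# Back the NFS server with a DO block-storage volume, then expose a
# ReadWriteMany-capable StorageClass to the rest of the cluster.
helm install stable/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=do-block-storage \
  --set persistence.size=20Gi
```

A claim against the provisioner's StorageClass (named "nfs" by default, if I remember right) can then be mounted ReadWriteMany by several pods at once:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: nfs   # assumption: the chart's default class name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```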
Quick bonnie++ test, two different clusters, same worker node configuration:
CephFS via Rook:
```
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
magical-ardinghe 4G   547  89 37557   4 24877   3   946  93 88786   5  3006  44
Latency               266ms    2444ms    3881ms   25122us     257ms     295ms
Version  1.97       ------Sequential Create------ --------Random Create--------
magical-ardinghelli -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   727   1 +++++ +++   481   1   794   2  1881   2   456   1
Latency              1128ms   23441us     672ms    1445ms   10984us     173ms
1.97,1.97,magical-ardinghelli-3l9v,1,1544812020,4G,,547,89,37557,4,24877,3,946,93,88786,5,3006,44,16,,,,,727,1,+++++,+++,481,1,794,2,1881,2,456,1,266ms,2444ms,3881ms,25122us,257ms,295ms,1128ms,23441us,672ms,1445ms,10984us,173ms
```
For NFS:
```
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nfs-busybox-dfdb 4G  1000  95 31489   2 135902 13  1287  93 229139 11  2707  33
Latency             19663us   36927ms     182ms     117ms   80444us   27791us
Version  1.97       ------Sequential Create------ --------Random Create--------
nfs-busybox-dfdb5c6 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1157  14 23094  15  1565  20   249   3  4797  22  1559  19
Latency              9404us   29618us   10571us     662ms    4625us   17487us
1.97,1.97,nfs-busybox-dfdb5c685-vckmj,1,1544823234,4G,,1000,95,31489,2,135902,13,1287,93,229139,11,2707,33,16,,,,,1157,14,23094,15,1565,20,249,3,4797,22,1559,19,19663us,36927ms,182ms,117ms,80444us,27791us,9404us,29618us,10571us,662ms,4625us,17487us
```
This example can also help: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
It looks like the workers just don’t have the NFS tools needed to mount NFS shares. Both nfs-server-provisioner over “do-block-storage” and an external NFS server (already used on my old k8s cluster) failed to mount. A sketch of the static setup from that example follows below.
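The linked example boils down to a static PV/PVC pair like this (a sketch; the server address and export path are placeholders, and the mount will only succeed once the workers actually ship the NFS client tools):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2   # placeholder: your external NFS server
    path: /srv/nfs     # placeholder: the exported path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # bind to the static PV above, not a dynamic class
  resources:
    requests:
      storage: 10Gi
```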