I created a volume for my kubernetes cluster using a persistent volume claim (PVC). I destroyed the cluster when I had finished using it, but kept the volume because it might be useful in future. I now want to create a new cluster and attach the existing volume.
How can I attach that volume to a new kubernetes cluster?
Hey there! Quick update for you: the guide below walks through using an existing volume with the DigitalOcean CSI driver.
https://github.com/digitalocean/csi-digitalocean/blob/master/examples/kubernetes/pod-single-existing-volume/README.md
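In short, the approach in that README is to create a PersistentVolume that points at the existing DigitalOcean volume by its ID, plus a PersistentVolumeClaim bound to it. A rough sketch is below; the volume ID, names, and size are placeholders (not values from this thread) that you would replace with your own:

```
# Sketch only: replace names, size, and <your-volume-id> with your own values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-volume
  annotations:
    # Mark the DO CSI driver as the manager of this volume.
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  capacity:
    storage: 10Gi                         # must match the actual volume size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the DO volume if the PV is deleted
  csi:
    driver: dobs.csi.digitalocean.com
    volumeHandle: <your-volume-id>        # the DO volume's UUID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-volume-claim
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: existing-volume             # bind directly to the PV above
```

Setting volumeName on the claim skips dynamic provisioning and binds it straight to the pre-created PV.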
Has anyone tried the migration described in https://github.com/digitalocean/csi-digitalocean/blob/master/examples/kubernetes/pod-single-existing-volume/README.md by adding a claimRef that specifies only the namespace and name, but not the uid?

@phildougherty thanks for the link!! I had this problem while migrating all my workloads to a new cluster, and eventually solved it.
Before that, I tried Steps 1-3, then manually deleted the DO volume from the control panel and restored a snapshot I had of the old volume under the “new” auto-generated name. That failed: apparently the DO provisioner tracks volumes internally by GUID, so even if the name matches what’s expected, the volume won’t attach.
Needless to say, the method you linked looks much simpler so I’ll be trying that next time.
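For reference, the claimRef idea raised above would look roughly like the PersistentVolume sketch below. The namespace, claim name, and volume ID are placeholders; leaving out uid should let Kubernetes fill it in when the matching claim binds:

```
# Sketch: a PV that pre-binds to a PVC by namespace and name only (no uid).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-volume
spec:
  storageClassName: do-block-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default            # namespace of the PVC that should bind
    name: existing-volume-claim   # name of that PVC; uid deliberately omitted
  csi:
    driver: dobs.csi.digitalocean.com
    volumeHandle: <your-volume-id>
    fsType: ext4
```

With a claimRef in place, no other claim can bind this PV while the intended one is being created.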
Hello,
At this time it is not possible because of a limitation in the upstream Kubernetes CSI implementation.
Take a look at the following issues: https://github.com/digitalocean/csi-digitalocean/issues/85 https://github.com/kubernetes-csi/external-provisioner/issues/86
There may be a workaround for this that our team is currently testing. If we have success with it I will update this post.
We hit a similar issue. We snapshotted the old volume and restored it to a new volume in the new cluster. Both need to be in the same datacenter; otherwise you can copy the data with rsync by attaching the volume manually to a droplet.