How do you re-attach a persistent volume to a Kubernetes cluster?

June 16, 2019
DigitalOcean Kubernetes

I created a volume for my Kubernetes cluster using a persistent volume claim (PVC). I destroyed the cluster when I was finished with it, but kept the volume because it might be useful in the future. I now want to create a new cluster and attach the existing volume to it.

How can I attach that volume to a new Kubernetes cluster?

5 Answers

We had a similar issue; we snapshotted the old volume and restored the snapshot to a new volume in the new cluster. Note that both need to be in the same datacenter, or you can copy the data with rsync by attaching the volumes manually to a Droplet.
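
A minimal sketch of that snapshot-and-restore route using doctl, assuming doctl is authenticated; the names, IDs, and region are placeholders, and the flags (especially --snapshot on volume create) are worth verifying against your doctl version:

    # find the ID of the old volume
    doctl compute volume list
    # snapshot it
    doctl compute volume snapshot <old-volume-id> --snapshot-name pvc-data-backup
    # look up the new snapshot's ID
    doctl compute snapshot list --resource volume
    # create a volume from the snapshot in the same datacenter
    doctl compute volume create pvc-data-restored --region nyc1 --snapshot <snapshot-id>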

Hello,

At this time it is not possible because of a limitation in the upstream Kubernetes CSI implementation.

Take a look at the following issues:
https://github.com/digitalocean/csi-digitalocean/issues/85
https://github.com/kubernetes-csi/external-provisioner/issues/86

There may be a workaround for this that our team is currently testing. If we have success with it I will update this post.
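
For anyone finding this later: the approach discussed in the first issue above is static provisioning, i.e. hand-writing a PersistentVolume that points at the existing DO volume by its ID, plus a PVC pinned to that PV. Here is a sketch of what that manifest could look like; the names and size are placeholders, it assumes the DO CSI driver name dobs.csi.digitalocean.com, and it illustrates the idea from the linked issue rather than a confirmed supported path. Save it and run kubectl apply -f on it:

    # <volume-uuid> is the existing volume's ID (doctl compute volume list)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: existing-do-volume
      annotations:
        pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
    spec:
      storageClassName: do-block-storage
      capacity:
        storage: 10Gi    # must match the existing volume's size
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: dobs.csi.digitalocean.com
        volumeHandle: <volume-uuid>
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: existing-do-volume-claim
    spec:
      storageClassName: do-block-storage
      volumeName: existing-do-volume    # pins the claim to the PV above
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

Setting volumeName on the PVC binds it to that specific PV, so the provisioner doesn't create a fresh volume instead.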

@phildougherty thanks for the link!! I had this problem while migrating all my workloads to a new cluster, and solved it by:

  1. create the PVC and PV on the new cluster
  2. let DO provision the new volume
  3. delete the PV on old + new clusters, causing DO volume to detach
  4. attach both DO volumes to a separate droplet
  5. mount both volumes, manually copy everything over (sketched in commands after this list)
  6. detach both volumes
  7. re-create the PV on the new cluster to attach the newly-populated volume
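
A sketch of steps 4-6 using doctl and standard Linux tools, run against a helper Droplet in the same datacenter; the IDs, volume names, and mount points are placeholders:

    # step 4: attach both volumes to the droplet
    doctl compute volume-action attach <old-volume-id> <droplet-id>
    doctl compute volume-action attach <new-volume-id> <droplet-id>

    # step 5: on the droplet, mount both and copy everything over;
    # DO exposes attached volumes by name under /dev/disk/by-id
    mkdir -p /mnt/old /mnt/new
    mount -o ro /dev/disk/by-id/scsi-0DO_Volume_<old-volume-name> /mnt/old
    mount /dev/disk/by-id/scsi-0DO_Volume_<new-volume-name> /mnt/new
    rsync -aHAX /mnt/old/ /mnt/new/   # trailing slashes copy contents, not the dirs

    # step 6: unmount and detach both volumes
    umount /mnt/old /mnt/new
    doctl compute volume-action detach <old-volume-id> <droplet-id>
    doctl compute volume-action detach <new-volume-id> <droplet-id>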

Before that I tried steps 1-3, manually deleted the DO volume from the control panel, and then restored a snapshot I had of the old volume under the “new” auto-generated name. That failed because the DO provisioner apparently tracks volumes by GUID internally, so even if the name matches what’s expected, it won’t attach the volume.

Needless to say, the method you linked looks much simpler, so I’ll be trying that next time.

  • BTW, it would be great if the DO API / CLI offered a way to do this automatically. I migrate my entire cluster semi-regularly to pick up new Kubernetes features, so I’m sure this will become more of an issue for me as time goes on.
