How do you re-attach a persistent volume to a kubernetes cluster?

I created a volume for my kubernetes cluster using a persistent volume claim (PVC). I destroyed the cluster when I had finished using it, but kept the volume because it might be useful in future. I now want to create a new cluster and attach the existing volume.

How can I attach that volume to a new kubernetes cluster?


Hey there! Quick update for you. This solution should work for you to use an existing volume.

Has anyone tried the migration described above, by adding a claimRef that specifies only the namespace and name but not the uid? E.g.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: caluma-adfinis-minio
    namespace: caluma-adfinis
  persistentVolumeReclaimPolicy: Delete
  storageClassName: default
  volumeMode: Filesystem
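
For that claimRef to bind, the PVC on the new cluster has to match it by name and namespace. A minimal sketch of what that PVC might look like (the names, size, and storage class are taken from the manifest above; the rest is assumed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: caluma-adfinis-minio
  namespace: caluma-adfinis
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default
  volumeName: example   # bind explicitly to the pre-created PV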

@phildougherty thanks for the link!! I had this problem while migrating all my workloads to a new cluster, and solved it by:

  1. create the PVC and PV on the new cluster
  2. let DO provision the new volume
  3. delete the PV on old + new clusters, causing DO volume to detach
  4. attach both DO volumes to a separate droplet
  5. mount both volumes, manually copy everything over
  6. detach both volumes
  7. re-create the PV on the new cluster to attach the newly-populated volume
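
Roughly, those steps translate into commands like the following. This is only a sketch: the kubectl context names, volume IDs, droplet ID, and device paths are all placeholders, and it assumes the PVs use a Retain reclaim policy so that deleting a PV does not delete the underlying DO volume:

```shell
# 1-2. Apply the PVC on the new cluster and let DO provision a fresh volume
kubectl --context new-cluster apply -f pvc.yaml

# 3. Delete the PV objects so the DO volumes detach from both clusters
kubectl --context old-cluster delete pv old-pv
kubectl --context new-cluster delete pv new-pv

# 4. Attach both DO volumes to a helper droplet
doctl compute volume-action attach <old-volume-id> <droplet-id>
doctl compute volume-action attach <new-volume-id> <droplet-id>

# 5. On the droplet: mount both volumes and copy the data across
mount /dev/disk/by-id/scsi-0DO_Volume_<old-volume-name> /mnt/old
mount /dev/disk/by-id/scsi-0DO_Volume_<new-volume-name> /mnt/new
rsync -a /mnt/old/ /mnt/new/

# 6. Detach both volumes again
doctl compute volume-action detach <old-volume-id> <droplet-id>
doctl compute volume-action detach <new-volume-id> <droplet-id>

# 7. Re-create the PV on the new cluster, pointing at the now-populated volume
kubectl --context new-cluster apply -f pv.yaml
```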

Before that I tried steps 1-3, then manually deleted the DO volume from the control panel and restored a snapshot I had of the old volume under the “new” auto-generated name. That failed: the DO provisioner apparently tracks volumes by GUID internally, so even when the name matches what’s expected, it won’t attach.

Needless to say, the method you linked looks much simpler so I’ll be trying that next time.


At this time it is not possible because of a limitation in the upstream Kubernetes CSI implementation.

Take a look at the following issues:

There may be a workaround for this that our team is currently testing. If we have success with it I will update this post.


We had a similar issue. We snapshotted the old volume and restored it onto a new volume in the new cluster. Both need to be in the same datacenter, or you can rsync instead by attaching the volumes manually to a droplet.
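
The snapshot route can be sketched like this (volume names, IDs, and the region are placeholders; the exact flag names are from my reading of doctl and worth double-checking against doctl compute volume create --help). The region constraint in the comment is why both clusters must be in the same datacenter:

```shell
# Snapshot the old volume
doctl compute volume snapshot <old-volume-id> --snapshot-name my-data-snap

# Create a new volume from the snapshot
# (the volume must live in the same region as the snapshot)
doctl compute volume create my-data-new --region fra1 --snapshot <snapshot-id>
```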