Question

How do I create a PVC from a Volume Snapshot in a Kubernetes cluster? It is stuck in "pending" state

Posted February 9, 2021
Backups · Kubernetes · DigitalOcean Volumes

Hey there! Wondering if I could get some help with k8s, PVCs, and volume backup restoration.

Context
Got an app in Cluster A with Postgres, which keeps data in a volume created via a PVC resource, and we’re migrating to a new Cluster B.

Versions
Cluster A - Kubernetes 1.18.14-do.0
Cluster B - Kubernetes 1.20.2-do.0

What I want to do
Use the PVC data from Cluster A and attach it to the app in Cluster B.

What I tried
Created a backup snapshot of the volume from Cluster A (sketched below) and attempted to create a new PVC in Cluster B from it, following the tutorial at https://www.digitalocean.com/docs/kubernetes/how-to/snapshot-volumes/.
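For context, the snapshot on Cluster A was created along the lines of the tutorial's VolumeSnapshot manifest. This is a sketch rather than my exact manifest; the apiVersion may be snapshot.storage.k8s.io/v1beta1 or v1 depending on which snapshot CRDs the cluster runs:

Cluster A - snapshot.yaml (sketch)

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: pg-data-backup
  namespace: pipeline
spec:
  source:
    persistentVolumeClaimName: pg-data # the existing PVC in Cluster A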

Cluster B - pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
  namespace: pipeline
  labels:
    environment: prod
spec:
  dataSource:
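    # must reference a VolumeSnapshot object in this cluster and namespace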
    name: pg-data-backup
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # same size as that of the PVC from Cluster A 
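One thing I'm unsure about: for that dataSource to resolve, I believe a VolumeSnapshot named pg-data-backup has to exist in Cluster B's pipeline namespace, and since the snapshot was taken from Cluster A this seems to require pre-provisioning a VolumeSnapshotContent that points at the underlying DigitalOcean snapshot. A sketch of what I mean (the names are mine, dobs.csi.digitalocean.com is the DigitalOcean CSI driver, and <do-snapshot-id> is a placeholder for the snapshot's ID):

Cluster B - snapshot-content.yaml (sketch)

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: pg-data-backup-content
spec:
  deletionPolicy: Retain
  driver: dobs.csi.digitalocean.com
  source:
    snapshotHandle: <do-snapshot-id> # ID of the existing DigitalOcean snapshot
  volumeSnapshotRef:
    name: pg-data-backup
    namespace: pipeline

Cluster B - snapshot.yaml (sketch)

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: pg-data-backup
  namespace: pipeline
spec:
  source:
    volumeSnapshotContentName: pg-data-backup-content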

What I am seeing
The new PVC in Cluster B is stuck in Pending status, as shown below.

kubectl get persistentvolumeclaims -n pipeline

NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
pg-data   Pending                                      do-block-storage   39m
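The Events section from describing the claim should state why provisioning is blocked; including the command here for completeness:

kubectl describe persistentvolumeclaims pg-data -n pipeline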

Am I doing something wrong? Any ideas or guidance would be hugely appreciated.

1 answer

@careduz Would it be possible to simply upgrade Cluster A to Kubernetes 1.20.2-do.0 instead of trying to move your persistent volumes to Cluster B?

Think different and code well,

-Conrad

  • That’d certainly be an option if we were just upgrading our k8s version, but we’re moving because Cluster A was our first foray into K8s (a bit of an experiment, as it were, to better understand how we could use it), and now that we want to move more of our infrastructure over we have more demanding compute requirements, amongst other things.

    I submitted a ticket that got escalated to the next support level but am waiting to hear back.