I’ve been testing and learning Kubernetes on a DigitalOcean (DO) managed Kubernetes cluster and am running into issues with some persistent volumes.

I was testing a deployment of an MS SQL Server Express service but needed more RAM to run the container, so in the cluster settings I deleted the 1 GB/1 vCPU node pool and created a new pool with more memory.

Once I was done testing, I resized back down to a single-node 1 GB/1 vCPU pool.

A few days later I noticed two PVC-backed volumes listed in the DO control panel that I am unable to delete, attach, or detach. I deleted the PVC and PV resources using kubectl, but the volumes still remain in the panel.

If I try to detach one of them, the operation just spins and eventually does nothing.

If I try to attach one of them I get an error toast notification that says “Attachment not found”.

I ran kubectl logs -f csi-do-controller-0 csi-attacher --namespace=kube-system and the csi-attacher container keeps looping through the following errors:

I1025 00:51:35.671304       1 reflector.go:286] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: forcing resync
I1025 00:51:35.671387       1 controller.go:167] Started VA processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.671400       1 csi_handler.go:76] CSIHandler: processing VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.671406       1 csi_handler.go:127] Starting detach operation for "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.671472       1 csi_handler.go:134] Detaching "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.671485       1 csi_handler.go:335] Saving detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.680875       1 csi_handler.go:345] Saved detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.680894       1 csi_handler.go:86] Error processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f": failed to detach: node "compassionate-hertz-e8z" not found
I1025 00:51:35.680919       1 controller.go:167] Started VA processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.680924       1 csi_handler.go:76] CSIHandler: processing VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.680928       1 csi_handler.go:127] Starting detach operation for "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.680956       1 csi_handler.go:134] Detaching "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.680964       1 csi_handler.go:335] Saving detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.684097       1 csi_handler.go:345] Saved detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:35.684113       1 csi_handler.go:86] Error processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f": failed to detach: node "compassionate-hertz-e8z" not found
I1025 00:51:36.111862       1 reflector.go:286] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: forcing resync
I1025 00:51:36.111932       1 controller.go:197] Started PV processing "pvc-37f8f87ed59b11e8"
I1025 00:51:36.111941       1 csi_handler.go:350] CSIHandler: processing PV "pvc-37f8f87ed59b11e8"
I1025 00:51:36.111977       1 csi_handler.go:386] CSIHandler: processing PV "pvc-37f8f87ed59b11e8": VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f" found
I1025 00:51:37.012480       1 controller.go:197] Started PV processing "pvc-37f8f87ed59b11e8"
I1025 00:51:37.012497       1 csi_handler.go:350] CSIHandler: processing PV "pvc-37f8f87ed59b11e8"
I1025 00:51:37.012532       1 csi_handler.go:386] CSIHandler: processing PV "pvc-37f8f87ed59b11e8": VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f" found
I1025 00:51:37.391372       1 reflector.go:428] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: Watch close - *v1beta1.VolumeAttachment total 39 items received
I1025 00:51:45.671543       1 reflector.go:286] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: forcing resync
I1025 00:51:45.671632       1 controller.go:167] Started VA processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.671646       1 csi_handler.go:76] CSIHandler: processing VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.671654       1 csi_handler.go:127] Starting detach operation for "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.671715       1 csi_handler.go:134] Detaching "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.671731       1 csi_handler.go:335] Saving detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.679013       1 csi_handler.go:345] Saved detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.679038       1 csi_handler.go:86] Error processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f": failed to detach: node "compassionate-hertz-e8z" not found
I1025 00:51:45.679076       1 controller.go:167] Started VA processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.679086       1 csi_handler.go:76] CSIHandler: processing VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.679094       1 csi_handler.go:127] Starting detach operation for "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.679133       1 csi_handler.go:134] Detaching "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.679146       1 csi_handler.go:335] Saving detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.772287       1 csi_handler.go:345] Saved detach error to "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f"
I1025 00:51:45.772335       1 csi_handler.go:86] Error processing "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f": failed to detach: node "compassionate-hertz-e8z" not found
I1025 00:51:46.112052       1 reflector.go:286] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: forcing resync
I1025 00:51:46.112117       1 controller.go:197] Started PV processing "pvc-37f8f87ed59b11e8"
I1025 00:51:46.112125       1 csi_handler.go:350] CSIHandler: processing PV "pvc-37f8f87ed59b11e8"
I1025 00:51:46.112164       1 csi_handler.go:386] CSIHandler: processing PV "pvc-37f8f87ed59b11e8": VA "csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f" found
I1025 00:51:49.393802       1 reflector.go:428] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: Watch close - *v1.PersistentVolume total 37 items received

compassionate-hertz-e8z was a node in the old node pool that was deleted, which may or may not be related. Strangely, pvc-37f8f87ed59b11e8 isn’t even listed under volumes in my account, while the two volumes I did successfully delete with kubectl delete still are.
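For reference, the VolumeAttachment object the log keeps referencing can be inspected directly. This is a sketch; the object name below is copied from the log output above, so substitute your own:

```shell
# List all VolumeAttachment objects; stale ones reference nodes that no longer exist
kubectl get volumeattachments

# Inspect the attachment named in the log to see which node and PV it points at
kubectl describe volumeattachment csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f
```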

The only thing I haven’t tried yet is deleting the cluster entirely. Since I’m only using it for testing and learning, that wouldn’t be a problem if it comes to that.

1 comment
  • Having the same problem, and apparently manually patching the finalizers solves it.
    However, that’s a manual workaround and not very scalable.
    Sometimes these errors prevent a StatefulSet from launching pods, and I have to fix every case by hand.

    What is the root cause of the problem, and is there a permanent solution for it?

2 answers

For anyone having this issue:
Patch the finalizers, e.g.

kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'

After that it’s possible to delete them; the same workflow applies to stuck pods.
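A VolumeAttachment stuck on a deleted node can be cleared the same way. This is a sketch; the attachment name here is the one from the question’s log, so substitute your own:

```shell
# Remove the finalizer that blocks deletion of the stale attachment
kubectl patch volumeattachment csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f \
  -p '{"metadata":{"finalizers":null}}'

# With the finalizer gone, the delete completes and the detach loop stops
kubectl delete volumeattachment csi-4ee97f129eafa27066c9e111ea25bd0d40dfacbd126e4f20e4eaf400d91b256f
```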

I have the same issue. If I run
kubectl get pv
I see a list of PVs with the reclaim policy set to Delete and the status Released.

If I run
kubectl delete pvc <ID>
it reports the PVC as deleted, but the command never terminates.
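The hang is typically a finalizer on the resource waiting for a detach that can never succeed. A quick way to check (a sketch; <ID> is a placeholder for your own PVC/PV name):

```shell
# A non-empty finalizer list explains why the delete never returns
kubectl get pvc <ID> -o jsonpath='{.metadata.finalizers}'
kubectl get pv <ID> -o jsonpath='{.metadata.finalizers}'
```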

Thanks.
