There are some race conditions that can affect deployments whose volumes are destroyed or recycled outside of the Cloud Controller Manager. We are working to improve this, but for now you can try the following workaround.
You can use kubectl get volumeattachment to see which mounts Kubernetes believes are active, and kubectl describe volumeattachment to get more information on each one. Due to the race conditions mentioned above, the volumeattachment object that represents a mounted volume is sometimes not deleted, or gets stuck in the deletion process. Because the object still exists, Kubernetes thinks the volume must still be mounted.
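For example, the inspection steps above look like this (the attachment name is a placeholder; yours will differ):

```shell
# Show all volume attachments Kubernetes currently tracks
kubectl get volumeattachment

# Inspect a single attachment in detail; csi-0a1b2c3d is a placeholder name
kubectl describe volumeattachment csi-0a1b2c3d
```

An attachment stuck in deletion will typically show a deletion timestamp in its metadata while still listing the node it was attached to.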
To resolve this issue you need to modify the volumeattachment using:
kubectl edit volumeattachment <volume name>
Then remove the following section from the volume attachment (typically the finalizers that are blocking deletion).
This lets Kubernetes run through the rest of the cleanup process, detect the volume's state correctly, and attach it to the new node.
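If you prefer a non-interactive approach, the same cleanup can be sketched with kubectl patch, which clears the finalizers in one step (the attachment name below is a placeholder):

```shell
# Clear the finalizers so the stuck volumeattachment can finish deleting;
# csi-0a1b2c3d is a placeholder for the attachment name from kubectl get
kubectl patch volumeattachment csi-0a1b2c3d \
  --type=merge -p '{"metadata":{"finalizers":null}}'
```

Only do this once you have confirmed the volume is genuinely no longer attached to the old node, since removing finalizers skips the checks the CSI attacher would normally perform.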
Senior Developer Support Engineer