OpenEBS, volumes, and Kubernetes upgrades

I installed the OpenEBS application from the marketplace onto a Kubernetes cluster. Following the instructions from MayaData, I created three unformatted volumes of the same size and attached them to three different nodes in my cluster. I then created a StoragePool, StorageClasses, and PersistentVolumeClaims against them. I can mount the OpenEBS PersistentVolumes into my containers, giving me replicated storage on my cluster. So far, so good.
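For anyone comparing setups, a quick way to inspect a cStor arrangement like this is with a few kubectl queries. This is only a sketch; the "openebs" namespace and the resource kinds shown (BlockDevice, StoragePoolClaim, CStorPool) are what a typical marketplace install uses, so adjust if yours differs:

```shell
# Assumption: OpenEBS control plane lives in the "openebs" namespace.
ns="openebs"

if command -v kubectl >/dev/null 2>&1; then
  kubectl get blockdevices -n "$ns"     # the three attached, unformatted disks
  kubectl get spc,csp                   # StoragePoolClaim and the per-node cStor pools
  kubectl get sc,pvc --all-namespaces   # StorageClasses and the claims against them
fi
```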

My question is, what will happen to those volumes when I upgrade my cluster to a new version of Kubernetes? DigitalOcean’s software will create new nodes, but will it attach volumes to those nodes in the same way it did when I created them (e.g., one volume per node)? Will it even attach them at all? I’m a bit scared to find out.




Hi @colinconstable … Yes, currently a few manual steps need to be performed to bring the pools back online. We have also documented these steps in detail in our help center; please have a look here.

So the good news is you can fix this yourself…

Step 1. Attach the disk(s) to the new node(s).
Step 2. Get the new node name(s).
Step 3. Edit the CSP and update it with the new node name.
Step 4. Edit the cStor pool deployment (there is one per node) and update its nodeSelector with the new node name.
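Steps 3 and 4 can be sketched with kubectl patch roughly as below. All names here are hypothetical (CSP and deployment "cstor-pool-abc1", node "pool-node-new"), and the exact CSP fields can differ between OpenEBS versions, so verify against the help-center doc before running anything:

```shell
# Hypothetical names; substitute the real CSP, deployment, and node names.
CSP="cstor-pool-abc1"       # from: kubectl get csp
DEPLOY="cstor-pool-abc1"    # the per-node cStor pool deployment
NEW_NODE="pool-node-new"    # from: kubectl get nodes

# nodeSelector patch for the pool deployment (step 4).
PATCH=$(printf '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"%s"}}}}}' "$NEW_NODE")

if command -v kubectl >/dev/null 2>&1; then
  # Step 3: point the CSP at the new node's hostname label
  # (field layout is version-dependent; check your CSP with kubectl get csp -o yaml).
  kubectl patch csp "$CSP" --type merge \
    -p "{\"spec\":{\"nodeSelector\":{\"kubernetes.io/hostname\":\"$NEW_NODE\"}}}"

  # Step 4: update the deployment's nodeSelector so the pool pod lands on the new node.
  kubectl -n openebs patch deployment "$DEPLOY" --type merge -p "$PATCH"
fi
```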

This should be automated, DO!

I can tell you what happens when you click auto-update with OpenEBS installed: on a Sunday morning all hell breaks loose and the cluster is down.

Happened to me this morning … Stupid… Auto-upgrade should not break anything… If an admin can fix this manually, the system should manage it…

Hi @bkoehn. I work for MayaData. I am trying to build a similar setup and reproduce this use case. In general, when an old node is replaced with a new one, a few manual steps are needed to bring the cStor pool and volume back online. This is possible only when the unique ID (BD name) of the underlying disk has not changed. So, to understand the attached block device information, could you please share the output of udevadm info <device_path> (for example, udevadm info /dev/sdb) for all attached block devices on all worker nodes?
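For anyone gathering that output, a small loop run on each worker node collects it in one go. The /dev/sd? glob is an assumption (DigitalOcean volumes usually attach as /dev/sda, /dev/sdb, …); adjust it if your disks appear under different names:

```shell
# Dump udev identifiers for every attached disk on this node.
# Assumption: disks appear as /dev/sda, /dev/sdb, ...; adjust the glob if not.
checked=0
for dev in /dev/sd?; do
  [ -b "$dev" ] || continue      # skip when the glob matched nothing
  echo "== $dev =="
  udevadm info "$dev"            # includes ID_SERIAL / ID_WWN, which the BD name derives from
  checked=$((checked + 1))
done
echo "inspected $checked device(s)"
```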