I’ve set up a Kubernetes cluster with 2 nodes in DigitalOcean. There’s a block storage volume that I use for persistent database storage, and as such I need the pods that run my database software to be able to access it.
At first I ran into the issue that, on deployment, the database containers were scheduled onto a random node. If they landed on the node that had the block storage attached, all was good. If they landed on the other one, the pod would crash-loop and never start, and Kubernetes seemingly never tried to reschedule it onto the other node.
To get around this, I pinned the database to the node that has the block storage attached. I had to do this by hostname, as DigitalOcean does not appear to apply any label I can filter by to the node to which it attaches the block storage.
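For context, the pinning looks roughly like this in my Deployment spec; the hostname, names, and image here are placeholders, not my actual values:

```
# Illustrative sketch only – deployment name, hostname, and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      # Pin the pod to the specific node that has the block storage attached.
      # kubernetes.io/hostname is a standard node label, but the value changes
      # if the node is ever replaced – which is exactly what bit me.
      nodeSelector:
        kubernetes.io/hostname: pool-abc123-node-1
      containers:
        - name: db
          image: postgres:15
```

The fragility is in that `nodeSelector` value: it ties the pod to one specific machine rather than to the storage itself.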
This morning I woke up to a dead service and a pile of outage emails. My nodes have apparently been replaced. That in itself isn’t a problem, but of course the hostnames are now different, so Kubernetes can’t find the node it’s meant to schedule this database on. And even if it could, there’s no guarantee that node has the block storage attached.
I’m fairly new to Kubernetes so I might be missing something obvious here, but how are you meant to keep something running smoothly on this if you can’t guarantee that your database will be able to access the block storage it needs?