Removing first node crashes entire cluster

November 23, 2019
DigitalOcean Managed Kubernetes

I've noticed that if I set up a k8s cluster with 3 nodes and then power down the first node to simulate a failure, the cluster immediately stops receiving traffic: instead of routing requests to the two remaining nodes, the load balancer reports all nodes as down. The only way I can get it back up is to delete and recreate the nodes. What could be going on here, other than a bug?