Removing first node crashes entire cluster

I've noticed that if I set up a Kubernetes cluster of 3 nodes and then power down the first node to simulate a failure, the cluster immediately stops receiving traffic: instead of routing traffic to the two remaining nodes, the load balancer reports all nodes as down. The only way I can get it back up is to delete and recreate the nodes. What could be going on here, other than a bug?
