Managed Kubernetes: Load balancer says all nodes are down

All the pods seem to be running normally and can be reached through kubectl.

All of a sudden (without anyone touching anything), the load balancer stopped working and now reports all nodes as down.

It is not possible to reach the website externally. Internal DNS in Kubernetes still seems to work as usual, however (when I test by exec-ing into one of the pods).
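For anyone hitting similar symptoms, a first round of checks might look like the following. This is a sketch, not the original poster's exact steps, and `my-service` is a hypothetical Service name; substitute your own:

```shell
# Does the LoadBalancer Service still have healthy endpoints behind it?
kubectl get svc my-service -o wide
kubectl get endpoints my-service

# Are any nodes NotReady? A cloud load balancer typically health-checks
# the nodes, so node-level problems show up as "all nodes down".
kubectl get nodes
kubectl describe nodes

# Any recent warning events (evictions, failed health checks, etc.)?
kubectl get events --all-namespaces --sort-by=.lastTimestamp
```

If the Service has no endpoints, the problem is usually selectors or pod readiness; if nodes are NotReady, look at the kubelet and node conditions instead.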

Does anyone know what might have happened here?


Hi tobiasbergkvist, I would recommend using tools like Prometheus and Grafana to see what's happening within your cluster. There's a good walkthrough for getting these tools installed in your Kubernetes cluster using Helm. All the best, and have a great weekend.
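In case the walkthrough link is unavailable: one common way to install both tools together is the community-maintained kube-prometheus-stack Helm chart. A minimal sketch (release and namespace names here are just examples):

```shell
# Register the prometheus-community chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus + Grafana as one stack into a dedicated namespace
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Grafana is then reachable via port-forward on the Grafana service
kubectl --namespace monitoring get pods
```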

Think different and code well,



Seems like this was caused by memory usage being too high! I had added an ELK stack for logging a couple of weeks ago, and it had slowly been eating up all the memory.

Somehow this must have killed something critical to external DNS (without actually killing the website pods).
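For future readers, memory pressure like this is usually visible in the node conditions and resource usage. A few commands that can confirm it (these are general diagnostics, not the poster's exact steps; `kubectl top` requires metrics-server to be installed):

```shell
# Per-node CPU/memory usage (requires metrics-server)
kubectl top nodes

# Has the kubelet flagged MemoryPressure on any node?
kubectl describe nodes | grep -i memorypressure

# Pods evicted due to resource pressure show up as events
kubectl get events --all-namespaces --field-selector reason=Evicted
```

Setting memory requests and limits on heavy workloads like Elasticsearch helps the scheduler and the kubelet evict or refuse them before they starve system-critical pods.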