All pods scheduled on a single node. How do I balance pods on managed Kubernetes?

I have a managed Kubernetes cluster with 2 nodes and around 50 deployments/pods. However, all 50 pods are scheduled on a single node while the other node is completely empty.

How do I get the scheduler to schedule new pods on the node with more free resources? How do I evenly distribute the pods now?



Accepted Answer

Try using podAntiAffinity or podAffinity in your deployment; for spreading replicas across nodes, podAntiAffinity is the one you want.
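
For example, a soft (preferred) podAntiAffinity rule keyed on the node hostname asks the scheduler to spread replicas of the same app across nodes, while still allowing them to land on one node if there is no other choice. This is only a minimal sketch; the deployment name, labels, and image below are placeholders for your own:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Soft rule: the scheduler prefers to put replicas of this app on
          # different nodes, but will still co-locate them if it must.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: nginx:1.25      # placeholder image
```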

I’ve noticed that this issue is real: Kubernetes sometimes starts avoiding a certain node, or puts most of the pods on the same node. In my experience it happens when you scale down and up repeatedly, running “kubectl scale” commands one after another in short intervals. If you give it some time, the problem seems to disappear and it scales evenly.

So in the end, if you release or scale at a natural pace (not many times per minute), the scheduler is more likely to distribute pods evenly.

That’s just my observation, not necessarily a general rule. My conclusion is to stop fighting with k8s :)

I hope it helped.

We have also encountered this issue:

Hi there!

Having a two-node setup can certainly cause this behavior if, for one reason or another, the other node is unhealthy or goes down. Redeploying your app(s) should allow the scheduler to properly distribute the pods, unless there is an actual issue on your other node causing it not to accept pods. If this behavior continues, I would recommend opening a support ticket so our team can take a closer look.


John Kwiatkoski, Senior Developer Support Engineer

There is no way yet to tell k8s to rebalance existing pods. Having 3 nodes reduces this problem, as long as only one node is taken out of the scheduler at a time.

There is a descheduler project that aims to solve this problem and some others, but it is still in the incubator. Here is a nice post about this problem:
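
To give an idea of what the descheduler expects, here is a minimal policy sketch, assuming the incubator project's v1alpha1 DeschedulerPolicy format; the threshold numbers are only illustrative, not recommendations:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these values are considered underutilized.
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        # Pods are evicted from nodes above these values so the scheduler
        # can place them on the underutilized nodes.
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
```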

If a node goes down, any pods running on it would be restarted automatically on another node.

If you start specifying exactly where you want them to run, then you actually lose Kubernetes' ability to reschedule them on a different node, since you only have two nodes.

The usual practice, therefore, is to simply let Kubernetes do its thing.

If, however, you do have a valid requirement to run a pod on a specific node, for example because it needs a certain local volume type, have a read of:
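
As a rough illustration of pinning a pod to a specific node (only do this when you really need to, for the reasons above), the simplest option is a nodeSelector on the node's hostname label. The pod name, node name, and image below are placeholders; list your real node labels with `kubectl get nodes --show-labels`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-volume-pod               # placeholder name
spec:
  nodeSelector:
    # Placeholder node name; must match the node's kubernetes.io/hostname label.
    kubernetes.io/hostname: my-node-1
  containers:
  - name: app
    image: nginx:1.25                  # placeholder image
```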