All pods scheduled on a single node. How do I balance pods on managed Kubernetes?

Posted July 21, 2019 · 21.1k views

I have a managed Kubernetes cluster with 2 nodes and around 50 deployments/pods. However, all 50 pods are scheduled on a single node while the other node is completely empty.

How do I get the scheduler to schedule new pods on the node with more free resources? How do I evenly distribute the pods now?

6 answers

Try using podAntiAffinity (or podAffinity) in your Deployment spec.
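To illustrate, here is a minimal sketch of a soft podAntiAffinity rule that asks the scheduler to spread replicas across nodes. The Deployment name `web`, the `app: web` label, and the image are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical Deployment name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" (soft) rather than "required" (hard), so pods can
          # still be scheduled even when only one node is available
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: nginx:1.17    # hypothetical image
```

With a soft rule, the scheduler prefers to place replicas on different hostnames but will still co-locate them if it has no other choice, which is the safer option on a two-node cluster.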

If a node goes down, any pods running on it would be restarted automatically on another node.

If you start specifying exactly where you want them to run, then you actually lose Kubernetes' ability to reschedule them on a different node, since you have only two nodes.

The usual practice, therefore, is to simply let Kubernetes do its thing.

If, however, you do have a valid requirement to run a pod on a specific node, for example because it needs a certain local volume type, have a read of:
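For that case, a minimal sketch of pinning a pod to a labelled node with `nodeSelector` (the `disktype=ssd` label, pod name, and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-storage-pod    # hypothetical pod name
spec:
  nodeSelector:
    # Assumes the node was labelled beforehand, e.g.:
    #   kubectl label nodes <node-name> disktype=ssd
    disktype: ssd
  containers:
  - name: app
    image: nginx:1.17        # hypothetical image
```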

  • Thank you for your answer.

    My goal is not to specify exactly where the pods should run. It is to spread the pods evenly between the nodes so as to spread the CPU load evenly between the nodes.

There is no way yet to tell Kubernetes to rebalance the pods. Having 3 nodes reduces this problem, as long as only one node is taken out of the scheduler at a time.

There is a descheduler project that aims to solve this problem and some others, but it is still in the incubator.
Here are some nice posts about this problem:

  • The descheduler depends on the scheduler to place the descheduled pods on nodes that are underutilized. Unfortunately in my case, all my pods are scheduled on a single node while the other node is left empty.

Do you know how the scheduler is configured on the master node? Can it be configured when using DigitalOcean managed Kubernetes?

    • No clue, I suggest you open a support ticket at DO. I didn't look in detail to see whether the descheduler can be configured in a managed cluster, but at first glance I would say yes.
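The descheduler mentioned above is driven by a policy file. A hedged sketch of its `LowNodeUtilization` strategy in the `v1alpha1` format; the threshold percentages here are illustrative, not recommendations:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these thresholds are considered underutilized
        thresholds:
          cpu: 20
          memory: 20
          pods: 20
        # Nodes above any of these are candidates for pod eviction
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 50
```

Note that the descheduler only evicts pods; it still relies on the default scheduler to place the evicted pods on the underutilized nodes.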

Hi there!

Having a two-node setup can certainly cause this behavior if, for one reason or another, one of the nodes is unhealthy or goes down. Redeploying your app(s) should allow the scheduler to distribute the pods properly, unless there is an actual issue on the other node causing it to not accept pods. If this behavior continues, I would recommend opening a support ticket so our team can have a closer look.


John Kwiatkoski
Senior Developer Support Engineer

We have also encountered this issue:

  • In my case, I discovered that the issue is caused by the automatic Kubernetes upgrade option. DigitalOcean will terminate, upgrade, and relaunch the nodes in your cluster one by one. This means the last node will probably come up after all the pods have already been scheduled on the other nodes, leaving the last node with no pods on it.
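A PodDisruptionBudget does not rebalance pods after an upgrade, but it does limit how many replicas are evicted at once during the node-by-node drain described above. A minimal sketch, assuming the pods carry an `app: web` label (name and label are hypothetical; newer clusters use `policy/v1`):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb              # hypothetical name
spec:
  minAvailable: 1            # keep at least one pod running during a node drain
  selector:
    matchLabels:
      app: web               # assumes pods are labelled app=web
```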

I’ve noticed that this issue is real: Kubernetes either starts avoiding a certain node or places most of the pods on the same node.
It happens when a user scales down and up by running `kubectl scale` repeatedly at short intervals.
If you give it some time, the problem seems to disappear and the pods spread out evenly.

So, in the end, if you release or scale normally at natural intervals (not many times per minute), the scheduler is more likely to distribute pods evenly.

That’s my observation; I’m not sure if it is a general rule.
So my conclusion is to just stop fighting with k8s :)

I hope this helps.