All pods scheduled on a single node. How do I balance pods on managed Kubernetes?

July 21, 2019 2.7k views

I have a managed Kubernetes cluster with 2 nodes and around 50 deployments/pods. However, all 50 pods are scheduled on a single node while the other node is completely empty.

How do I get the scheduler to schedule new pods on the node with more free resources? How do I evenly distribute the pods now?

4 Answers

Try using podAntiAffinity or podAffinity in your deployments.
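For example, a *preferred* (soft) pod anti-affinity rule asks the scheduler to spread a deployment's replicas across distinct nodes, while still allowing scheduling if only one node is available. This is a minimal sketch; the `my-app` name, labels, and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" (soft) rather than "required" (hard), so pods can
          # still be scheduled when only one node can accept them
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              # spread across distinct nodes
              topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: nginx:1.17  # illustrative image
```

Note that affinity rules only affect *new* scheduling decisions; already-running pods stay where they are until they are recreated.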

If a node goes down, any pods running on it would be restarted automatically on another node.

If you start specifying exactly where you want them to run, you lose Kubernetes's ability to reschedule them on a different node, since you have only two nodes.

The usual practice, therefore, is to simply let Kubernetes do its thing.

If, however, you do have a valid requirement to run a pod on a specific node (for example, because it needs a certain local volume type), have a read of the Kubernetes documentation on assigning pods to nodes.
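For reference, pinning a pod to a particular kind of node is usually done with a `nodeSelector`. A minimal sketch, assuming the target node carries a `disktype: ssd` label (both the label and the image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector  # illustrative name
spec:
  nodeSelector:
    disktype: ssd           # assumes the node was labeled with this key/value
  containers:
  - name: app
    image: nginx:1.17       # illustrative image
```

The node label itself would be applied with `kubectl label nodes <node-name> disktype=ssd`.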

  • Thank you for your answer.

    My goal is not to specify exactly where the pods should run. It is to spread the pods evenly between the nodes so as to spread the CPU load evenly between the nodes.

There is no way yet to tell Kubernetes to rebalance existing pods. Having 3 nodes reduces this problem, as long as only one node is taken out of the scheduler at a time.

There is a descheduler project that aims to solve this problem and some others, but it is still in the incubator.
Here is a nice post about this problem:

  • The descheduler depends on the scheduler to place the descheduled pods on nodes that are underutilized. Unfortunately in my case, all my pods are scheduled on a single node while the other node is left empty.

    Do you know how the scheduler is configured on the master node? Can it be configured when using DigitalOcean managed Kubernetes?

    • No clue. I suggest you open a support ticket at DO. I didn't look in detail to see whether the descheduler can be configured in a managed cluster, but at first glance I would say yes.

Hi there!

Having a two node setup can certainly cause this behavior, if for one reason or another, the other node is unhealthy or goes down. Redeploying your app(s) should allow the scheduler to properly distribute the pods unless there is an actual issue on your other node causing it to not accept pods. If this behavior does continue, I would recommend opening a support ticket so our team can have a closer look.
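If you want to force a redistribution by hand rather than redeploying, one sketch (node and label names here are illustrative, and deleting pods briefly interrupts them) is to cordon the overloaded node, delete its pods so their ReplicaSets recreate them, and then uncordon it:

```shell
# Mark the overloaded node unschedulable (node name is illustrative)
kubectl cordon node-1

# Delete the deployment's pods; the ReplicaSet recreates them, and the
# scheduler can now only place them on the other node
kubectl delete pods -l app=my-app

# Allow scheduling on the node again
kubectl uncordon node-1
```

Repeat for each deployment you want moved; with the node cordoned, the replacement pods have to land on the empty node.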


John Kwiatkoski
Senior Developer Support Engineer
