I have a managed Kubernetes cluster with 2 nodes and around 50 deployments/pods. However, all 50 pods are scheduled on a single node while the other node is completely empty.
How do I get the scheduler to schedule new pods on the node with more free resources? How do I evenly distribute the pods now?
Try using `podAntiAffinity` or `podAffinity` in your deployment.

I've noticed that this issue is real: Kubernetes sometimes starts avoiding a certain node, or places most of the pods on the same node. It happens when you scale down and up repeatedly, running `kubectl scale` commands one after another at short intervals. If you give it some time, the problem seems to disappear and it scales evenly.
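For example, a preferred pod anti-affinity rule can spread a Deployment's replicas across nodes. This is a minimal sketch; the name `web`, the label `app: web`, and the image are placeholders, not anything from your cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" keeps pods schedulable even if there are fewer
          # nodes than replicas; use
          # requiredDuringSchedulingIgnoredDuringExecution for a hard rule.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: web
              # Spread by node hostname, i.e. across individual nodes.
              topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: nginx:1.25
```

Note that affinity rules only affect scheduling of new pods; already-running pods stay where they are until they are recreated.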
So, in the end, if you release or scale normally, with natural intervals (not many times per minute), the pods are more likely to be distributed evenly.
That's my observation; I'm not sure if it's a general rule. My conclusion is to just stop fighting with k8s :)
I hope it helped.
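As a side note, on Kubernetes 1.19+ you can also use `topologySpreadConstraints` to keep pods evenly spread across nodes without writing anti-affinity rules. A sketch of the relevant pod-template fragment, assuming your pods carry a label like `app: web` (a placeholder here):

```yaml
# Pod template snippet: spread pods labeled app=web evenly across nodes,
# allowing a difference of at most 1 pod between any two nodes.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway   # soft; use DoNotSchedule for a hard rule
    labelSelector:
      matchLabels:
        app: web
```

This, too, only applies when pods are scheduled; to rebalance existing pods you can recreate them, e.g. with `kubectl rollout restart deployment <name>` (where `<name>` is your deployment's name).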
We have also encountered this issue: https://i.imgur.com/Wo406vt.png