I have a Kubernetes cluster running several review environments. According to the graphs in the DigitalOcean console, its CPU usage is at ~10%, with occasional spikes up to 30%. Yet, when I try to deploy a Helm chart, I see my pod has the following status:
```yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-07-01T13:54:51Z"
    message: '0/4 nodes are available: 4 Insufficient cpu.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
```
What could be causing this?
I would check what CPU requests the chart you are deploying makes. Keep in mind that the full system resources are not available to pods, as each node requires overhead for system processes and containerized infrastructure.
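For example, you can render the chart locally to see what it asks for, then compare that against what each node reports as already allocated. The release and chart names below are placeholders; substitute your own:

```shell
# Render the chart locally and look for its CPU requests
# ("my-release" and "my-chart" are placeholder names).
helm template my-release my-chart | grep -B3 -A4 'resources:'

# See how much CPU is already requested on each node.
kubectl describe nodes | grep -A7 'Allocated resources'
```

These commands need a live cluster and a real chart, so treat them as a sketch of the inspection workflow rather than something to copy verbatim.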
You can see the available memory chart for our node sizes here: https://www.digitalocean.com/docs/kubernetes/#allocatable-memory
That page does not list CPU requests, but looking at the kube-system namespace in my own cluster, the default kube-system pods request almost ~700m, or 70% of all the CPU available on a 1 vCPU node.
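To make the math concrete, here is a rough back-of-the-envelope calculation using those numbers (~700m of system requests on a 1 vCPU node):

```shell
# Rough headroom calculation for a 1 vCPU (1000m) node,
# using the ~700m of kube-system requests observed above.
node_cpu_m=1000
system_requests_m=700
headroom_m=$((node_cpu_m - system_requests_m))
echo "${headroom_m}m left for application pods"
```

So a chart requesting more than ~300m of CPU per pod simply cannot be scheduled on such a node, no matter how idle the CPU graph looks; the scheduler works from requests, not actual usage.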
You have a few options here:
Use larger nodes. This step would require the least amount of effort as there’s no need to change any app configuration. It also directly addresses the underlying problem of not having enough resources in the cluster.
Remove or reduce your Helm chart's requests. This is an okay option to get past the deployment step, but is normally not a good idea, as it could overload your cluster and cause outages for other applications.
Reduce your existing workloads to make room for the new deployment. Getting rid of other resource-heavy workloads would free up space for this deployment.
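If you go with the second or third option, the commands below sketch both approaches. Note that `resources.requests.cpu` is an assumed values path; check the chart's values.yaml for the actual key it uses:

```shell
# Lower the chart's CPU request at install time
# (the values path is chart-specific; verify it in values.yaml).
helm install my-release my-chart --set resources.requests.cpu=100m

# Find which pods request the most CPU, to decide what
# could be scaled down or removed to make room.
kubectl get pods -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'
```

Both commands assume a live cluster, so treat them as a starting point rather than an exact recipe.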
I personally would just recommend the simplest path, which would be to use larger nodes. Hope this helps!
John Kwiatkoski Senior Developer Support Engineer - Kubernetes