k8s too much for 1GB/1CPU single node pool

I have a single node (1GB/1vCPU) cluster that I run some very low-traffic stuff on.

I loved DO because I could keep my hosting costs (on k8s) to about $25 ($12 for ingress, $12 for the node, and whatever for storage) – amazing.

Recently I had to upgrade to 1.23.10-do.0 – the upgrade was forced, but no big deal. However, when the upgrade finished the CPU requests exceeded the available CPU.

According to `kubectl describe nodes`, I get (edited for simplicity):

    Allocatable:
      cpu:                                              900m

      Name                                              CPU Requests
      ----                                              ------------
      ingress-nginx-controller-7868b8f99b-7g8lp         100m
      cilium-ng6ww                                      310m
      cilium-operator-c9bf9575f-dtmfc                   100m
      coredns-d4c49d69-dwvx9                            100m
      coredns-d4c49d69-zlj79                            100m
      cpc-bridge-proxy-d22g9                            100m
      do-node-agent-z6p2v                               102m

So, the base deployment alone requests 912m CPU out of an available 900m. In its infinite wisdom, Kubernetes decided the ingress controller was the best pod to keep down, so everything was effectively down.

To fix it, I manually lowered the CPU request for ingress to 50m, but I am now concerned the base offering will not work after upgrades going forward. I expect I'll have to make a manual correction after every future upgrade.
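For reference, the manual fix can be applied as a strategic merge patch (a sketch; the deployment, namespace, and container names are the ingress-nginx defaults – adjust to match your cluster):

```yaml
# cpu-request-patch.yaml – lower the ingress controller's CPU request to 50m
spec:
  template:
    spec:
      containers:
        - name: controller     # default container name in the ingress-nginx chart
          resources:
            requests:
              cpu: 50m
```

Applied with:

    kubectl -n ingress-nginx patch deployment ingress-nginx-controller --patch-file cpu-request-patch.yaml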

Is there a "hands-off" way to fix this? Without doubling my hosting costs, I can't get a node that offers enough CPU to satisfy the reservation requests of the base offering.



Bobby Iliev
Site Moderator
December 2, 2022

Hi there,

Thank you for reporting this! Indeed, I was able to replicate the same behavior on a fresh cluster with 1vCPU and the NGINX Ingress Controller 1-Click from the Marketplace.

I will forward this information internally to the Marketplace team.

However, the 1-Click installation is based on the following official repo:

The CPU requests are defined in that values file, so it is possible that the team might not be able to influence the change internally.
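If the 1-Click install tracks the upstream ingress-nginx Helm chart, one workaround that persists across reinstalls could be to override the request via Helm values (a sketch; the `controller.resources` key path follows the upstream chart's values.yaml, and the release/namespace names are assumptions):

```yaml
# values-override.yaml – request less CPU than the chart's 100m default
controller:
  resources:
    requests:
      cpu: 50m
      memory: 90Mi
```

Applied with something like:

    helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f values-override.yaml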

Another option is to open an issue on the Kubernetes Ingress repo directly and describe the problem, as this will affect all clusters with 1vCPU.
