
Why is cilium given 30% of a node's CPU on a Kubernetes cluster?

Posted on June 3, 2019

I noticed that the cilium pods are generously provisioned on a DO k8s cluster. It seems a little excessive to provision 30% of the CPU with no upper limit. Given it’s a Go project, and Go projects are usually quite performant, is this reasonable?

Namespace                  Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
kube-system                cilium-crswr                         300m (30%)    0 (0%)      0 (0%)           0 (0%)         10d



Hi there,

With cilium, usage like this can be expected. Cilium provides the software-defined network for our DOKS clusters. The reason we don’t put an upper limit on cilium is that if the cilium pod goes down, all workloads running on that node lose network connectivity. Capping the cilium pod’s resources at a limit would cause Kubernetes to kill the cilium pod if it tried to take more than its limit, causing an outage anyway.

It’s for this reason that cilium does not have a cap: it’s a crucial infrastructure component, and it is in our customers’ best interest to give it the resources it needs to maintain a stable cluster.
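In pod-spec terms, this means the cilium DaemonSet sets a CPU `request` (which only reserves scheduling capacity) but no `limit` (which would enforce a hard cap). A simplified sketch of such a resources block, not the exact cilium manifest, looks like this:

```yaml
# Hypothetical excerpt from a DaemonSet pod spec, illustrating
# requests-without-limits as described above.
containers:
  - name: cilium-agent
    image: cilium/cilium   # placeholder image reference
    resources:
      requests:
        cpu: 300m          # reserves 30% of one core for scheduling
      # no "limits" section: the container may burst above 300m
      # and will never be throttled or killed for CPU overuse
```

The 300m request guarantees cilium is scheduled onto a node with spare capacity, while the absent limit means idle CPU stays available to other workloads and cilium can burst when it needs to.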

Regards,

John Kwiatkoski Senior Developer Support Engineer
