Should I use the cheapest nodes for k8s?

I want to know, for example: if I set up k8s with the cheapest nodes ($10 each) and make it autoscale from 1 to 10 nodes, will it be okay if I get a lot of traffic in the future?


Hello @devtoolsSquid,

Yes, you can use smaller nodes in your DOKS cluster and enable autoscaling on them. However, the small node sizes are not intended for production workloads; they are there for experimentation and educational purposes. After taking into account the resource usage of the containerized infrastructure, there is not a lot of resources left for production workloads to run in a stable or sustainable manner.

Using many small nodes in your cluster can be an inefficient use of compute resources. Customers actually get more resources per dollar by using moderately sized nodes, as each new node carries the overhead of components such as Cilium, DO node agents, and CSI containers. We recommend at least 3 moderately sized nodes rather than, say, 6 very small ones. This way the infrastructure overhead takes less of your resources and leaves more for you to use.
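To make the per-node overhead concrete, here is a rough back-of-the-envelope sketch. The overhead figure and node sizes are illustrative assumptions, not measured DOKS numbers:

```python
# Hypothetical numbers for illustration only: assume each node loses a
# fixed ~0.6 GiB of memory to system components (kubelet, Cilium,
# DO node agent, CSI containers), regardless of node size.
OVERHEAD_GIB = 0.6

def usable_memory(node_gib: float, node_count: int) -> float:
    """Total memory left for workloads across a pool of identical nodes."""
    return node_count * (node_gib - OVERHEAD_GIB)

# Six small 1 GiB nodes vs. three moderate 2 GiB nodes: 6 GiB paid for either way.
print(f"6 x 1 GiB nodes: {usable_memory(1.0, 6):.1f} GiB for workloads")
print(f"3 x 2 GiB nodes: {usable_memory(2.0, 3):.1f} GiB for workloads")
```

With these assumed figures, the small-node pool leaves only 2.4 GiB for workloads versus 4.2 GiB for the moderate pool, even though both cost the same amount of raw memory.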

Regarding Cluster Autoscaling (CA), it manages the number of nodes in a cluster. It monitors for unschedulable pods sitting in the Pending state and uses that information to determine the appropriate cluster size.
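On DOKS, cluster autoscaling is configured on the node pool itself. As a sketch (cluster and pool IDs are placeholders, and the exact `doctl` flags may vary by version, so check `doctl kubernetes cluster node-pool update --help`):

```shell
# Enable autoscaling on an existing node pool, between 1 and 10 nodes.
doctl kubernetes cluster node-pool update <cluster-id> <pool-id> \
  --auto-scale \
  --min-nodes 1 \
  --max-nodes 10
```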

Horizontal Pod Autoscaling (HPA) adds more pods and replicas based on events like sustained CPU spikes. HPA uses the spare capacity of the existing nodes and does not change the cluster’s size.
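A minimal HPA manifest, assuming a hypothetical Deployment named `web` and that metrics-server is running in the cluster, might look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

This scales the `web` Deployment between 2 and 10 replicas, targeting 70% average CPU utilization across its pods.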

You can utilize both CA and HPA. CA and HPA can work in conjunction: if the HPA attempts to schedule more pods than the current cluster size can support, then the CA responds by increasing the cluster size to add capacity. These tools can take the guesswork out of estimating the needed capacity for workloads while controlling costs and managing cluster performance.

For more details, please refer to the links below:

Hope that this helps!

Best Regards, Chandan Sagar Pradhan

@devtoolsSquid It really depends on the CPU and memory requirements of your individual services. Thus, I recommend doing some capacity planning by doing the following:

  • determine the requirements (i.e., CPU and memory) of each service so you can choose the specific configuration of a node

  • perform load testing on 1, 3, 5, 7, and 10 nodes to determine how much traffic you can support as well as find and resolve any bottlenecks within your system architecture
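Once you have per-replica requirements from load testing, the node count follows from simple arithmetic. The figures below are hypothetical placeholders; substitute your own measurements:

```python
import math

# Hypothetical numbers: suppose load testing shows each replica of your
# service needs 250m CPU and 256 MiB of memory, and each node offers
# roughly 900m CPU / 1433 MiB after system overhead.
POD_CPU_M, POD_MEM_MIB = 250, 256
NODE_CPU_M, NODE_MEM_MIB = 900, 1433

def nodes_needed(replicas: int) -> int:
    """Nodes required for a replica count, whichever resource binds first."""
    by_cpu = math.ceil(replicas * POD_CPU_M / NODE_CPU_M)
    by_mem = math.ceil(replicas * POD_MEM_MIB / NODE_MEM_MIB)
    return max(by_cpu, by_mem)

for replicas in (3, 9, 30):
    print(f"{replicas} replicas -> {nodes_needed(replicas)} node(s)")
```

Note that with these assumed numbers CPU is the binding resource, which is why measuring both dimensions matters before picking a node size.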

In general, what works with 1 node may not work when you have to use 10 nodes. Thus, you need to measure the performance of your overall system architecture. It may or may not scale linearly going from 1 to 10 nodes. Well, good luck with everything and I wish you all the best.

Think different and code well,