Question

pod didn't trigger scale-up (it wouldn't fit if a new node is added)

Autoscaling worked fine (Kubernetes automatically added a new 1vCPU/2GB RAM droplet/node to the cluster) when I started a Kubernetes Job with requested memory: "800Mi" and parallelism: 2, but when I tried to start a Job with requested memory: "2000Mi" and parallelism: 2, the Job's event log shows this error:

pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 Insufficient memory

kubectl apply -f job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: scaling-test
spec:
  parallelism: 2
  template:
    metadata:
      name: scaling-test
    spec:
      containers:
        - name: debian
          image: debian
          command: ["/bin/sh","-c"]
          args: ["sleep 300"]
          resources:
            requests:
              cpu: "100m"
              memory: "1900Mi"
      restartPolicy: Never

My assumption was that Kubernetes, while autoscaling, would add a new node sized for the Job's requested resources (e.g., a 1vCPU/3GB RAM node). Any idea how to make Kubernetes autoscale and provision a node based on the Job's requested resources?
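For reference, the full message appears as an event on the pending pods; assuming the cluster autoscaler's standard NotTriggerScaleUp event reason, it can be listed with:

kubectl get events --field-selector reason=NotTriggerScaleUp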



Accepted Answer

The autoscaler adds nodes to the pool that needs additional compute/memory resources. The node type is determined by the node pool's settings, not by the autoscaling rules.

From this, it follows that the node size configured for the pool must be large enough to handle your largest pod.
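For example, on DigitalOcean you could add an autoscaling node pool that uses a larger droplet size, so each new node has enough allocatable memory for a 1900Mi request. A minimal sketch with doctl; the cluster ID, pool name, and node counts below are placeholders:

doctl kubernetes cluster node-pool create <cluster-id> \
    --name bigger-pool \
    --size s-2vcpu-4gb \
    --count 1 \
    --auto-scale \
    --min-nodes 1 \
    --max-nodes 3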

John Kwiatkoski
DigitalOcean Employee
March 23, 2020

Hi there!

As the two answers above mention, there is overhead from both the containerized infrastructure and the system that limits the amount of RAM left for your autoscaled workload. Can you verify that this error is resolved when using a larger node size?

You can find an estimate of the allocatable memory for each node size in our documentation here: https://www.digitalocean.com/docs/kubernetes/#allocatable-memory
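If it helps, you can also compare total capacity against allocatable memory on your current nodes directly; this is a standard kubectl query, nothing DigitalOcean-specific:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.memory,ALLOCATABLE:.status.allocatable.memory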

Let me know if you have any further questions.

Regards,

John Kwiatkoski
Senior Developer Support Engineer - Kubernetes
