Question

pod didn't trigger scale-up (it wouldn't fit if a new node is added)

Posted March 20, 2020 10.5k views
Kubernetes

Autoscaling worked fine (Kubernetes automatically added a new 1cpu/2gbRAM droplet/node to the cluster) when I started a Kubernetes Job with
requested memory: "800Mi", parallelism: 2. But when I tried to start a job with requested memory: "2000Mi", parallelism: 2, the job's event log shows this error:

pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 Insufficient memory

kubectl apply -f job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: scaling-test
spec:
  parallelism: 2
  template:
    metadata:
      name: scaling-test
    spec:
      containers:
        - name: debian
          image: debian
          command: ["/bin/sh","-c"]
          args: ["sleep 300"]
          resources:
            requests:
              cpu: "100m"
              memory: "1900Mi"
      restartPolicy: Never

My assumption was that Kubernetes, while autoscaling, would add a new node sized to the job's requested resources (e.g. add a 1cpu/3gbRAM node). Any idea how to make Kubernetes autoscale and provision a node based on the job's requested resources?


2 answers

The autoscaler adds nodes to the pool that requires additional compute/memory resources. The node type is determined by the node pool's settings, not by the autoscaling rules.

From this, it follows that you need to ensure your configured node size is large enough to fit your largest pod.
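If the existing pool's nodes are too small, one fix is to add a second node pool built from larger droplets. A rough sketch using doctl (the cluster name, pool name, and size slug below are placeholders, adjust them to your own cluster):

```shell
# Add a larger node pool (4 vCPU / 8 GB) with autoscaling enabled.
# "my-cluster", "big-pool", and the size slug are placeholders.
doctl kubernetes cluster node-pool create my-cluster \
  --name big-pool \
  --size s-4vcpu-8gb \
  --count 1 \
  --auto-scale \
  --min-nodes 1 \
  --max-nodes 10
```

The autoscaler can then scale this pool up when a pod's request fits a larger node but not the original ones.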

  • Scenario: I configured a k8s cluster with one 1cpu/2gbRAM node, and after that I created a job.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: scaling-test
    spec:
      parallelism: 1
      template:
        metadata:
          name: scaling-test
        spec:
          containers:
            - name: debian
              image: debian
              command: ["/bin/sh","-c"]
              args: ["sleep 300"]
              resources:
                requests:
                  cpu: "100m"
                  memory: "6000Mi"
          restartPolicy: Never
    

    I had already enabled autoscaling (minimum 1, max 10 nodes) from the DigitalOcean website.
    After
    kubectl apply -f job.yaml
    the job stayed in the pending state for an hour, and the job's event log shows the following error:
    pod didn't trigger scale-up (it wouldn't fit if a new node is added)
    A new node that could satisfy the 6000Mi requested memory should have been added automatically, but it did not happen. Or am I doing it wrong? (I'm new to Kubernetes and English.)

    • Your assumption that Kubernetes will choose a different node size is incorrect. Because this pod requests more RAM than any node in the pool has, autoscaling will never help here. You have two options.

      1. Request less memory than your nodes have (2 GB) for the pod.

      2. Create a node pool that uses a node type with 8 GB of RAM.
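A sketch of option 1: keep the memory request under what a 1cpu/2gbRAM node can actually allocate. The 800Mi value comes from the original question, where that request scaled up successfully (exact allocatable memory varies by node size, so treat the number as an estimate):

```yaml
# Job with a memory request small enough to fit the existing 2 GB nodes.
# 800Mi is taken from the question's working example; adjust to your node's
# allocatable memory if needed.
apiVersion: batch/v1
kind: Job
metadata:
  name: scaling-test
spec:
  parallelism: 2
  template:
    metadata:
      name: scaling-test
    spec:
      containers:
        - name: debian
          image: debian
          command: ["/bin/sh", "-c"]
          args: ["sleep 300"]
          resources:
            requests:
              cpu: "100m"
              memory: "800Mi"
      restartPolicy: Never
```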

Hi there!

As the two answers above mention, there is overhead from both the containerized infrastructure and the system that limits the amount of RAM left for your autoscaled workload. Can you verify that this error is resolved when using a larger node size?

You can find an estimate of the allocateable memory for node sizes in our documentation here: https://www.digitalocean.com/docs/kubernetes/#allocatable-memory
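You can also query a node directly to compare its total memory capacity with what is actually allocatable to pods (capacity minus system and kubelet reservations). A quick sketch, where the node name is a placeholder:

```shell
# List nodes, then compare one node's capacity vs. allocatable memory.
kubectl get nodes
kubectl get node <node-name> -o jsonpath='{.status.capacity.memory}{"\n"}{.status.allocatable.memory}{"\n"}'
```

The second value is the ceiling your pod's memory request must fit under for the pod to schedule on that node.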

Let me know if you have any further questions.

Regards,

John Kwiatkoski
Senior Developer Support Engineer - Kubernetes
