Autoscaling worked fine (Kubernetes automatically added a new 1 CPU / 2 GB RAM droplet/node to the cluster) when I started a Kubernetes Job with `memory: "800Mi"` and `parallelism: 2`. But when I tried to start a Job requesting `memory: "2000Mi"` with `parallelism: 2`, the Job's event log shows this error:

```
pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 Insufficient memory
```
I apply the Job with:

```
kubectl apply -f job.yaml
```

job.yaml:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: scaling-test
spec:
  parallelism: 2
  template:
    metadata:
      name: scaling-test
    spec:
      containers:
        - name: debian
          image: debian
          command: ["/bin/sh", "-c"]
          args: ["sleep 300"]
          resources:
            requests:
              cpu: "100m"
              memory: "1900Mi"
      restartPolicy: Never
```
My assumption was that Kubernetes, while autoscaling, would add a new node sized to the Job's requested resources (e.g. add a 1 CPU / 3 GB RAM node). Any idea how to make Kubernetes autoscale and provision a node based on the Job's requested resources?
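For context, here is a sketch of the memory arithmetic involved (this assumes a "2GB" droplet exposes roughly 2Gi of raw capacity; the exact kubelet reservations vary by provider and are an assumption here, not something from the error message):

```python
# Why a 1900Mi request may not fit on a "2GB" node:
# the scheduler compares the request against the node's *allocatable*
# memory (capacity minus kube-reserved/system-reserved/eviction
# threshold), which is lower than the raw capacity.
MI = 1024 * 1024  # one mebibyte in bytes

request_bytes = 1900 * MI    # the Job's request: memory: "1900Mi"
capacity_bytes = 2048 * MI   # raw capacity of a 2Gi node (assumed)

# The request fits within raw capacity...
print(request_bytes <= capacity_bytes)
# ...but once the kubelet reserves memory for itself and the system,
# allocatable drops below the request, so the pod cannot be scheduled
# and the autoscaler reports "it wouldn't fit if a new node is added".
```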