Question

Can we use doctl as a cli to create and destroy kubernetes clusters?

I’d like to use the CLI to create and destroy clusters - is that possible?


doctl has supported Kubernetes since version 1.12.0; you can find the most recent release on the doctl GitHub releases page.

$ doctl kubernetes cluster create --help
create a cluster

Usage:
  doctl kubernetes cluster create <name> [flags]

Aliases:
  create, c

Flags:
      --auto-upgrade                whether to enable auto-upgrade for the cluster
      --count int                   number of nodes in the default node pool (incompatible with --node-pool) (default 3)
  -h, --help                        help for create
      --maintenance-window string   maintenance window to be set to the cluster. Syntax is in the format: 'day=HH:MM', where time is in UTC time zone. Day can be one of: ['any', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday'] (default "any=00:00")
      --node-pool strings           cluster node pools, can be repeated to create multiple node pools at once (incompatible with --size and --count)
                                    format is in the form "name=your-name;size=size_slug;count=5;tag=tag1;tag=tag2" where:
                                    	- name:   name of the node pool, must be unique in the cluster
                                    	- size:   size for the nodes in the node pool, possible values: see "doctl k8s options sizes".
                                    	- count:  number of nodes in the node pool.
                                    	- tag:    tags to apply to the node pool, repeat to add multiple tags at once.
      --region string               cluster region, possible values: see "doctl k8s options regions" (required) (default "nyc1")
      --set-current-context         whether to set the current kubectl context to that of the new cluster (default true)
      --size string                 size of nodes in the default node pool (incompatible with --node-pool), possible values: see "doctl k8s options sizes". (default "s-1vcpu-2gb")
      --tag strings                 tags to apply to the cluster, repeat to add multiple tags at once
      --update-kubeconfig           whether to add the created cluster to your kubeconfig (default true)
      --version string              cluster version, possible values: see "doctl k8s options versions" (default "latest")
      --wait                        whether to wait for the created cluster to become running (default true)

Global Flags:
  -t, --access-token string   API V2 Access Token
  -u, --api-url string        Override default API V2 endpoint
  -c, --config string         config file (default is $HOME/.config/doctl/config.yaml)
      --context string        authentication context name
  -o, --output string         output format [text|json] (default "text")
      --trace                 trace api access
  -v, --verbose               verbose output

Creating a cluster using the default options is as easy as running:

$ doctl k8s cluster create example-cluster-01
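
Once that returns, you can double-check the result with the list subcommand:

$ doctl k8s cluster list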

Here’s a more complete example of creating a cluster while specifying non-default options:

$ doctl k8s cluster create \
   --region lon1 \
   --version 1.14.2-do.0 \
   --tag demo \
   --size s-2vcpu-4gb \
   --count 5 \
   --maintenance-window="tuesday=20:00" \
   --auto-upgrade \
   example-cluster-02

Notice: cluster is provisioning, waiting for cluster to be running
....................................................
Notice: cluster created, fetching credentials
Notice: adding cluster credentials to kubeconfig file found in "/home/asb/.kube/config"
Notice: setting current-context to do-lon1-example-cluster-02
ID                                      Name                  Region    Version        Auto Upgrade    Status     Node Pools
a5b355ab-0406-4e36-9ec8-3d9c70b4525e    example-cluster-02    lon1      1.14.2-do.0    true            running    example-cluster-02-default-pool
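
If you need more than one node pool, the --node-pool flag described in the help output above can be repeated instead of using --size and --count. A sketch, where the pool names, sizes, and counts are purely illustrative:

$ doctl k8s cluster create example-cluster-03 \
   --region lon1 \
   --node-pool "name=pool-apps;size=s-2vcpu-4gb;count=3" \
   --node-pool "name=pool-jobs;size=s-4vcpu-8gb;count=2"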

By default, the cluster’s kubeconfig will be saved locally when you create it. You can grab the kubeconfig for an existing cluster using:

$ doctl k8s cluster kubeconfig save <cluster-id|cluster-name>
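
Once the kubeconfig is in place, you can talk to the cluster with kubectl right away; for the cluster created above, the context doctl set was do-lon1-example-cluster-02:

$ kubectl --context do-lon1-example-cluster-02 get nodes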

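As for the destroy part of your question: clusters can be removed with the delete subcommand, which asks for confirmation before tearing the cluster down (the name below is the cluster created above):

$ doctl k8s cluster delete example-cluster-02
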
If you have any feedback, please feel free to open an issue on GitHub.
