Kubernetes Overview

Plans and Pricing

DigitalOcean Kubernetes (DOKS) clusters are priced by the number and capacity of the worker nodes. There is no additional charge for Kubernetes masters, which are fully managed by DigitalOcean.

Worker nodes are built on Droplets, but unlike Droplets, worker nodes in Kubernetes clusters do not have a monthly price cap. If your DOKS worker node runs for more than 672 hours in a month, you will continue to be charged the hourly rate for that Droplet until either it is destroyed or a new month begins.

DOKS Kubernetes clusters support integrations with DigitalOcean Load Balancers and block storage volumes, which are billed at their normal rate and appear in the Kubernetes section of your invoice. Kubernetes clusters are billed at the standard rate for bandwidth usage.

Regional Availability

At least one datacenter in every region supports Kubernetes. Our regional availability matrix has more detail about our datacenter regions and product availability. Kubernetes will not be offered in NYC2, NYC3, AMS2, or SFO1.

Features

DigitalOcean Kubernetes allows you to deploy scalable and secure Kubernetes clusters. Development teams can create a cluster with the simplicity of DigitalOcean and retain full access to the cluster with existing toolchains. We offer the latest version of Kubernetes as well as earlier patch levels of the latest minor version for special use cases.

Managed Clusters

DigitalOcean Kubernetes is a managed offering. We handle the complexities of the control plane and containerized infrastructure. On both the master nodes and the worker nodes, we maintain system updates, security patches, the operating system configuration, and installed packages. For more detail, see The Managed Elements of DigitalOcean Kubernetes.

The content of the cluster, however, belongs to you. You have cluster-level administrative rights and can create and delete any Kubernetes API objects through the Kubernetes API or kubectl. There are no restrictions on the API objects you can create as long as the underlying Kubernetes version supports them. You can also install popular tools like Helm, metrics-server, and Istio.
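
For example, here is a minimal sketch of defining your own API objects and applying them with kubectl apply -f; the namespace and deployment names are hypothetical, and any image or object kind your cluster's Kubernetes version supports would work the same way:

    # example.yaml -- apply with: kubectl apply -f example.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo                 # hypothetical namespace name
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                  # hypothetical deployment name
      namespace: demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: nginx
              image: nginx:1.25  # any public container image
              ports:
                - containerPort: 80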

Private Networking

Clusters are placed in a VPC, which provides private networking: network communication between nodes stays private within the cluster. Cluster logs are rotated when they reach 10 MB in size, and the two most recent copies are retained in addition to the current active log.

Cluster overlay networking is preconfigured with Cilium, which supports Kubernetes network policies.
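
As a sketch of what a network policy might look like (the policy name and pod labels below are hypothetical), the following only allows pods labeled app: frontend to reach pods labeled app: backend on TCP port 8080:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend   # hypothetical policy name
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: backend                  # hypothetical label on the protected pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend         # hypothetical label on the allowed clients
          ports:
            - protocol: TCP
              port: 8080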

Worker Nodes and Node Pools

Worker nodes are built on Droplets, but unlike standard Droplets, worker nodes are managed with the Kubernetes command-line client kubectl and are not accessible with SSH.

Both Standard and CPU-Optimized Droplet plans are available for worker nodes. All of the worker nodes within a node pool have identical resources. You can add and remove worker nodes from node pools at any time, and you can also create additional node pools at any time.

Worker nodes are automatically deleted and respawned when needed. In addition, you can manually recycle a worker node from a cluster’s Manage tab.

Each node pool can have a different worker configuration. This allows you to have different services on different node pools, where each pool has the RAM, CPU, and attached storage resources the service requires.

You can name node pools when they are created. The nodes in the node pool will inherit the node pool’s naming scheme.
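
Because all nodes in a pool are identical and named after the pool, you can steer a workload onto a specific pool with a nodeSelector. The sketch below assumes the nodes carry a DOKS-provided label, doks.digitalocean.com/node-pool, holding the pool name; verify the exact label on your nodes with kubectl get nodes --show-labels. The workload and pool names are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpu-heavy-service            # hypothetical workload name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: cpu-heavy-service
      template:
        metadata:
          labels:
            app: cpu-heavy-service
        spec:
          nodeSelector:
            # assumed DOKS node label carrying the pool name; confirm with
            # `kubectl get nodes --show-labels`
            doks.digitalocean.com/node-pool: cpu-optimized-pool
          containers:
            - name: app
              image: nginx:1.25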

Kubernetes role-based access control (RBAC) is enabled by default. See Using RBAC Authorization for details.
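
With RBAC enabled, you can grant scoped permissions using standard Role and RoleBinding objects. A minimal sketch, with a hypothetical namespace, role name, and user:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader             # hypothetical role name
      namespace: demo              # hypothetical namespace
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: demo
    subjects:
      - kind: User
        name: jane@example.com     # hypothetical user
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io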

Tags

Clusters are automatically tagged with k8s and the specific cluster ID, like k8s:EXAMPLEc-3515-4a0c-91a3-2452eEXAMPLE. In addition, worker nodes are tagged with k8s:worker. You can add your own tags to the cluster and worker nodes in the Tags field. At creation time, the k8s prefix is reserved for system tags, so custom tags cannot begin with k8s.

Warning
Although you can currently tag individual workers from the Droplets page in the control panel, tagging individual worker nodes will not be supported in the future.

Persistent Data

You can persist data to DigitalOcean Block Storage volumes with the DigitalOcean CSI plugin. Support for resizing volumes has not yet been implemented. Block storage volumes cannot be attached to more than one Droplet at a time.
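
Here is a minimal PersistentVolumeClaim sketch that provisions a block storage volume through the CSI plugin; the claim name and size are hypothetical, and do-block-storage is the storage class the plugin is expected to provide (confirm with kubectl get storageclass):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-volume                   # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce                   # a volume attaches to a single node at a time
      resources:
        requests:
          storage: 5Gi                    # hypothetical size
      storageClassName: do-block-storage  # CSI-provided class; verify with `kubectl get storageclass`

The provisioned volume is billed at the standard block storage rate, as described in Plans and Pricing above.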

By default, you can create up to 100 volumes and up to 16 TB of total disk space per region. You can contact our support team to request an increase. You can attach a maximum of seven volumes to any one node or Droplet, and this limit cannot be changed.

You can also persist data to DigitalOcean object storage by using the Spaces API to interact with Spaces from within your application.

Load Balancing

The DigitalOcean Kubernetes Cloud Controller supports provisioning DigitalOcean Load Balancers.
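
In practice, creating a Service of type LoadBalancer causes the cloud controller to provision a DigitalOcean Load Balancer for it. A minimal sketch, with a hypothetical service name and pod selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb              # hypothetical service name
    spec:
      type: LoadBalancer        # triggers provisioning of a DigitalOcean Load Balancer
      selector:
        app: web                # hypothetical pod label
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080

Note that deleting the cluster, or the Service, does not automatically remove the provisioned load balancer; see Known Issues below.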

Limits

Resource Limits

  • Clusters are restricted to a single region.
  • Clusters have a maximum of 512 nodes.
  • Nodes have a maximum of 110 pods.
  • Nodes run on Droplets which are subject to certain resource limitations and use Block Storage Volumes that have similar constraints.
  • Network throughput is capped at 2 Gbps.
  • We recommend against using HostPath volumes, because nodes are frequently replaced and all data stored on them is lost. Instead, we recommend using a named volume backed by a PersistentVolumeClaim, as in the sketch after this list.
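
As a sketch of the recommended pattern, a pod mounts a named volume backed by a PersistentVolumeClaim rather than a HostPath; the pod and claim names here are hypothetical, and the claim could be defined as in the Persistent Data section above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-storage            # hypothetical pod name
    spec:
      containers:
        - name: app
          image: nginx:1.25
          volumeMounts:
            - name: data                # named volume mounted into the container
              mountPath: /var/lib/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-volume      # hypothetical claim, e.g. the PVC shown earlier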

Master Node Limits

  • The master configuration is managed by DigitalOcean. You cannot modify the master files, feature gates, or admission controllers. See The Managed Elements of DigitalOcean Kubernetes for more specifics.
  • DigitalOcean provisions a single master node, so the control plane is not highly available. During upgrades or maintenance, the Kubernetes API may be unavailable for a short time. This is expected and does not affect the cluster's worker nodes or running workloads.

Limitations on Kubernetes Clusters in General

Known Issues

  • Load balancers and block storage volumes created by your Kubernetes manifests are not deleted when a cluster is deleted. You will continue to be billed for them until you delete them explicitly.

  • Custom node labels do not persist across upgrades and recycles.

  • Installing webhooks that target services within the cluster can cause Kubernetes version upgrades to fail, because internal services may not be accessible during the upgrade.

  • You cannot tag load balancers or block storage volumes.

  • Support for resizing DigitalOcean Block Storage Volumes in Kubernetes has not yet been implemented.

  • In the DigitalOcean Control Panel, cluster resources (worker nodes, load balancers, and block storage volumes) are listed outside of the Kubernetes page. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.

  • The certificate authority, client certificate, and client key data in the kubeconfig.yaml file available in the control panel expire seven days after the file is downloaded. If you use this file, you need to download a new one every week. To avoid this, we strongly recommend using doctl to retrieve cluster credentials.

  • Let’s Encrypt certificates are not supported by default; you must generate a certificate yourself.

  • You cannot assign Kubernetes clusters (or the underlying Droplets in a cluster) to a Project.