Improve System Performance with Effective Load Testing using K-Bench

Published on February 1, 2024

Cristian Marius Tiutiu, Bikram Gupta, and Anish Singh Walia


Load testing is a non-functional software testing process in which the performance of a system is evaluated under a specific expected load. It determines how the system behaves while under that load. The goal of load testing is to identify performance bottlenecks and to ensure the stability and smooth functioning of the system, giving confidence in the system's reliability and performance.

K-bench is a framework for benchmarking the control plane and data plane aspects of a Kubernetes infrastructure. It provides a configurable way to prescriptively create and manipulate Kubernetes resources at scale and, after a run, reports the target infrastructure's relevant control plane and data plane performance metrics.

K-bench allows users to control the client-side concurrency, the operations performed, and whether those operations are executed sequentially or in parallel. In particular, the user can define, through a config file, a workflow of operations for the supported resources.

After a successful run, the benchmark reports metrics (e.g., number of requests, API invoke latency, throughput, etc.) for the executed operations on various resource types.

In this tutorial, you will configure K-bench. The tool needs to be installed on a droplet, preferably one with access to the target cluster for testing. You will also configure a Prometheus stack for your cluster (if one is not already present) to observe the results of a test run.

K-bench Architecture Diagram

Prerequisites


To complete this tutorial, you will need:

  1. A DOKS cluster. Refer to Kubernetes-Starter-Kit-Developers if one needs to be created.
  2. The Prometheus stack installed on the cluster. Refer to How to Install the Prometheus Monitoring Stack if it is not installed.
  3. A droplet that will serve as the K-bench master.

Creating a DO droplet for K-bench

In this section, you will create a droplet that will serve as your K-bench master. On this droplet, you will clone the K-bench repo, perform the installation, run tests, and add any new tests that fit your use case. The reason for using a droplet is that it is best to have a resource decoupled from the cluster, dedicated to a single purpose: running load tests and visualizing benchmark results.

Please follow the steps below to create a droplet and to install and configure K-bench:

  1. Navigate to your DO cloud account.
  2. From the Dashboard, click on the Create button and select the Droplets option.
  3. Choose the Ubuntu distribution, the basic plan, the Regular with SSD CPU option, and a region, and choose the SSH keys option for Authentication. If no SSH keys are present, this article explains how to create one and add it to your DO account.
  4. From the droplet dashboard, click on the Console button. You will be presented with a screen prompting you to Update Droplet Console; follow those steps to gain SSH access to the droplet.
  5. Once SSH access is available, click on the Console button again. You will be logged into the droplet as root.
  6. Clone the K-bench repository via HTTPS:
git clone <k-bench-repository-https-url>
  7. Navigate to the cloned repository directory:
cd k-bench/
  8. Run the install script to install Go and any other dependencies K-bench has.
  9. From the DOKS cluster dashboard, click on Download Config File and copy the contents of the config file. K-bench needs that information to connect to the cluster.
  10. Create a .kube folder to hold the kube config, paste the contents copied in Step 9, and save the file:
mkdir ~/.kube
vim ~/.kube/config
  11. As a validation step, run the test start command, which runs a benchmark for the default test.
  12. If the test was successful, the tool will report that it started and that it is writing logs to a folder prefixed with results_run_<date>.
  13. Open the benchmark log and observe the results:
cat results_run_29-Jun-2022-08-06-42-am/default/kbench.log

Note: Tests live under the config folder of k-bench. To change an existing test, update its config.json file. A test is selected with the -t flag when invoking the k-bench run script; for example, the cp_heavy_12client test is run by passing -t cp_heavy_12client to it.
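As a rough illustration, the sketch below shows the general shape of such a config.json, describing a test that creates a few pause-container pods and blocks until they are up. The field names here are assumptions drawn from the sample configs shipped with K-bench and may differ between versions, so treat the files under the repository's config folder as the source of truth:

```json
{
  "BlockingLevel": 1,
  "Timeout": 180000,
  "CheckingInterval": 3000,
  "Cleanup": true,
  "Operations": [
    {
      "Pods": {
        "Actions": [
          {
            "Act": "CREATE",
            "Spec": {
              "ImagePullPolicy": "IfNotPresent",
              "Image": "k8s.gcr.io/pause:3.1"
            }
          }
        ],
        "Count": 5
      }
    }
  ]
}
```

A common workflow is to copy an existing test directory under config, adjust fields such as Count or the operation list, and run it with the -t flag as described above.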

K-bench Benchmark Results Sample

Grafana Metric Visualization

K-bench tests are easily observable using Grafana, which lets you create dashboards to visualize Prometheus metrics. In this section, you will explore some useful Kubernetes metrics and some dashboards that offer insight into what is happening with the DOKS cluster under load.

Note: This section can only be completed if the Prometheus stack was created earlier in Step 2 of the Prerequisites section or is installed on the cluster.

Please follow the steps below:

  1. Connect to Grafana (using the default credentials admin/prom-operator) by port forwarding to the local machine:
kubectl --namespace monitoring port-forward svc/kube-prom-stack-grafana 3000:80
  2. Navigate to http://localhost:3000/ and log in to Grafana.
  3. Import the Kubernetes System API Server dashboard by navigating to http://localhost:3000/dashboard/import, entering the ID 15761 in the import box, and clicking Load.
  4. From this dashboard, you will be able to see the API latency, HTTP requests by code, HTTP requests by verb, and so on. You can use this dashboard to monitor the API under load.
  5. From the Grafana main page, click on the Dashboards menu and then on Node Exporter Nodes to open a node resource-oriented dashboard. You can use this dashboard to monitor the resources available on your nodes during a test.
  6. You can also use various metrics to count the number of pods created during a test. For example, from the Explore page, enter the following in the metrics browser: count(kube_pod_info{namespace="kbench-pod-namespace"}). This will show a graph of the number of pods at any given time.
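The same pod-count query can also be issued against Prometheus's HTTP API from the K-bench droplet instead of the Grafana UI. In the sketch below, the Prometheus service name is an assumption based on kube-prometheus-stack chart defaults; adjust it to match your release:

```shell
# The PromQL query used to count K-bench pods over time.
QUERY='count(kube_pod_info{namespace="kbench-pod-namespace"})'
echo "$QUERY"

# With Prometheus port-forwarded to localhost:9090 (service name assumed;
# check `kubectl --namespace monitoring get svc` for the real one):
#   kubectl --namespace monitoring port-forward svc/kube-prom-stack-kube-prome-prometheus 9090:9090 &
#   curl -s 'http://localhost:9090/api/v1/query' --data-urlencode "query=$QUERY"
```

The /api/v1/query endpoint returns a JSON document whose result vector holds the current pod count, which is handy for scripting checks around a benchmark run.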

Grafana API Server Dashboard Sample

Grafana Node Dashboard sample

Grafana Pod Count sample


Conclusion

K-bench provides a configurable way for users to create and manipulate Kubernetes resources at scale and generate relevant performance metrics for the target infrastructure. With K-bench, users can control the client-side concurrency, define operation workflows for supported resources, and obtain detailed benchmark reports for various resource types. By using K-bench, users can gain confidence in the reliability and performance of their system, ultimately leading to a better user experience.
