Reproducibility of performance tests

I need to execute performance tests that will run once per day or every few days. The tests track how the performance of the same (but continuously developed) code changes over time. This testing should catch performance regressions before the code goes to production.

I’d like to use a general-purpose droplet for this, one that is used (and paid for) only during test execution and is then returned to the DO pool, so that I don’t pay for idle time.

The crucial thing, though, is that I need to be able to compare test results between different executions - e.g. at the start of the month and at the end of it. Can I rely on getting the same hardware specs for the droplet each time? If not, the performance results might change not because of changes in the code but because of changes in the hardware itself, and in that case the results wouldn’t be comparable.

Thanks for any hints!



Hello there,

You can spin up the droplets in the same region to get the most consistent results, since you’ll be running the tests on droplets with the same specs. Spin up general-purpose droplets with the same resources (RAM and CPU) each time and then perform the tests.

Keep in mind that you’ll need to destroy the droplets if you do not want to be billed for them. Even when a droplet is powered off, it still uses resources (disk space to store your data), and you will receive an invoice for it for the respective month.
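A minimal sketch of that create → test → destroy cycle using the `doctl` CLI might look like the following. The region, size slug, image, SSH key variable, and `run-perf-test.sh` script are all assumptions here; substitute your own values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumptions: doctl is authenticated and SSH_KEY_ID points to an
# SSH key already uploaded to your DO account.
REGION="fra1"              # always the same region for comparability
SIZE="g-2vcpu-8gb"         # same general-purpose size slug every run
IMAGE="ubuntu-22-04-x64"   # same base image every run
NAME="perf-test-$(date +%Y%m%d)"

# Create the droplet, wait until it is active, and capture its IP.
IP=$(doctl compute droplet create "$NAME" \
  --region "$REGION" --size "$SIZE" --image "$IMAGE" \
  --ssh-keys "$SSH_KEY_ID" --wait \
  --format PublicIPv4 --no-header)

# Run the performance test (hypothetical script) and save the results.
ssh "root@$IP" 'bash -s' < run-perf-test.sh > "results-$(date +%F).txt"

# Destroy the droplet so you are not billed for idle time.
doctl compute droplet delete "$NAME" --force
```

Pinning the region, size slug, and image in the script is what keeps runs comparable from one execution to the next; the only thing that changes between runs is the code under test.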

Regards, Alex