I need to execute performance tests that will run once per day or once every few days. The tests measure how the performance of the same (but continuously developed) code changes over time. This testing should prevent introducing a performance regression before the code goes to production.

I’d like to use a general-purpose droplet for this purpose, one that is used / paid for only for the duration of the test run and then returned to the DO pool, so that I don’t pay for idle time.

The crucial thing, though, is that I need to be able to compare the test results between different executions, e.g. one at the start of the month and one at the end of it. Can I rely on getting the same hardware specs for the droplet? If not, the performance results might change not due to changes in the code but due to changes in the hardware itself, and in that case the results wouldn’t be comparable.

Thanks for any hints!

1 answer

Hello there,

You can spin up the droplets in the same region to get the most consistent results, i.e. to run the tests on droplets with the same specs. Spin up general-purpose droplets with the same resources (RAM and CPU) each time and then perform the tests.

Keep in mind that you’ll need to destroy the droplets if you do not want to be billed for them. Even if a droplet is powered off, it still uses resources (disk space to store your data), and you will receive an invoice for the respective month.
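For example, the lifecycle can be scripted so the droplet only exists for the duration of a test run. Below is a rough, untested sketch against the DigitalOcean API v2 using Python’s requests library; the region, size slug and image slug are placeholders, so substitute the ones you actually use:

```python
import os
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DO_API_TOKEN']}"}

def create_test_droplet(name: str) -> int:
    """Create a droplet with a pinned region and size slug so every
    test run uses the same droplet class. Returns the droplet ID."""
    resp = requests.post(
        f"{API}/droplets",
        headers=HEADERS,
        json={
            "name": name,
            "region": "fra1",             # always the same region
            "size": "g-2vcpu-8gb",        # example general-purpose slug
            "image": "ubuntu-20-04-x64",  # example image slug
        },
    )
    resp.raise_for_status()
    return resp.json()["droplet"]["id"]

def destroy_test_droplet(droplet_id: int) -> None:
    """Destroy (not just power off) the droplet so it stops being billed."""
    resp = requests.delete(f"{API}/droplets/{droplet_id}", headers=HEADERS)
    resp.raise_for_status()
```

In the test job you would create the droplet, wait for it to become active, run the test suite over SSH, and call destroy_test_droplet() in a finally block so a failed run does not leave a billable droplet behind.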

Regards,
Alex

  • Thanks for the answer and the recommendation to use the same region.

    Let me summarize what I now know about using droplets for performance tests:

    • general-purpose droplets need to be used to get dedicated HW without interference from other tenants
    • there is no hard guarantee of getting HW with identical specs - the best chance is to spin up a droplet of the same size in the same region
    • to avoid excessive bills, droplets must be destroyed after the tests

    I found a workaround for continuously measuring the performance impact of the developed code. Every time the perf. tests are executed, they must run the current (HEAD) code version as well as the last measured code version. Then I can tell whether the HW specs changed significantly between runs. Example:

    January 2021

    • first perf. test of application v. 1.0 > thrpt 1000 reqs/sec

    February 2021

    • repeated perf. test of application v. 1.0 > thrpt 1200 reqs/sec
    • first perf. test of application v. 1.1 > thrpt 1300 reqs/sec

    Now I know that for the second run I got roughly 1.2x faster HW from DO. But the new version of the application performed even better, so there is approximately a 1.3 / 1.2 = 1.083, i.e. ~8.3% performance improvement.

    So the key is to run the tests of both application versions on the same HW so that I can rely on the results.
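
    A small illustration of that normalization in Python (the function and variable names are mine, purely for illustration); it reproduces the numbers above:

    ```python
    def normalized_improvement(baseline_prev: float,
                               baseline_now: float,
                               head_now: float) -> float:
        """Cancel out HW differences between two test runs.

        baseline_prev: throughput of v1.0 in the previous run  (1000 reqs/sec)
        baseline_now:  throughput of v1.0 on today's droplet   (1200 reqs/sec)
        head_now:      throughput of v1.1 on today's droplet   (1300 reqs/sec)
        """
        hw_factor = baseline_now / baseline_prev    # 1200 / 1000 = 1.2
        raw_factor = head_now / baseline_prev       # 1300 / 1000 = 1.3
        return raw_factor / hw_factor - 1.0         # 1.3 / 1.2 - 1 ≈ 0.083

    print(f"{normalized_improvement(1000, 1200, 1300):.1%}")  # -> 8.3%
    ```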