Why does my 1GB 1CPU Droplet running NGINX show better loadtest results than my 4GB 2CPU droplet?

I am testing out using a DO droplet for hosting a single static site. I am running Ubuntu 18.10 on the $5 droplet and on the $40 droplet, both standard on NY 1. I installed NGINX on both. The small droplet has nothing else installed, and was not updated at all. The second, larger droplet was updated, has ufw (which I disabled with no performance difference) and Node, but there is no node app running. I ran loadtest against both servers from my laptop running off my home wifi:

loadtest http://ipaddress/static-file -t 10 -c 20 --rps 50

The result is strange. The small droplet completes over 400 requests, while the larger droplet completes only around 20. Latency is also an order of magnitude higher on the larger droplet. I ran this several times with similar results. I am not sure if the issue is a hardware/software difference between the droplets, a package that was updated in Ubuntu, or something else. If the results were not so drastically different I would not have bothered to post this, but the stark difference is alarming. I would expect the larger droplet to perform at least as well as the smaller one.

Any ideas would be helpful.


Have you tried running the tests against localhost instead of over the network, to rule out network-related factors?
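One way to do that (a sketch; the URL path is a placeholder for your own static file) is to SSH into each droplet and time a request against NGINX on localhost with curl, so the request never leaves the machine:

```shell
# Run this on each droplet. /static-file is a placeholder path;
# substitute the file you were load testing.
curl -o /dev/null -s \
  -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
  http://localhost/static-file
```

If the localhost numbers are comparable on both droplets, the bottleneck is likely the network path rather than the droplets themselves.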

The first thing is to find out what underlying hardware each droplet uses, and then test CPU, memory, and disk I/O performance: droplets can vary in performance due to the underlying hardware, even at the same price point. Also check CPU utilisation metrics, i.e. CPU steal time, to ensure your droplet isn't on a VPS host node with busy neighbours.
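As a quick check for steal time, you can read the aggregate CPU counters from /proc/stat (a minimal sketch; top, vmstat, and sar report the same %st figure):

```shell
#!/bin/sh
# First line of /proc/stat looks like:
#   cpu  user nice system idle iowait irq softirq steal guest guest_nice
# "steal" is the time the hypervisor gave to other guests on the host.
read -r _ user nice system idle iowait irq softirq steal _ < /proc/stat
total=$((user + nice + system + idle + iowait + irq + softirq + steal))
# Integer percentage of CPU time stolen since boot; a consistently
# non-zero value suggests noisy neighbours on the host node.
echo "steal: $((steal * 100 / total))%"
```

Note this is a since-boot average; to catch bursts of steal, watch the %st column in `top` or `vmstat 1` while the load test is running.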

I did benchmarks for two $15/month droplets that came with different hardware (a Xeon E5-2630L v2 Ivy Bridge CPU versus a Xeon Gold 6140 Scalable Skylake CPU), and the difference was huge.

The newer Xeon Gold 6140 Skylake CPU was faster for CPU- and memory-bound tasks but slower in disk I/O tests than the Xeon E5-2630L v2. I've since run the same tests dozens of times while building out my Centmin Mod LEMP stack's DigitalOcean 1-click Marketplace app image: the Xeon Gold 6140/Xeon Platinum 8168 Skylakes all have faster CPU and memory but slower disk I/O than Xeon E5-2650 v4 Broadwell or Xeon E5-2697A v4 Broadwell droplets.
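To see which CPU generation a given droplet landed on, you can check /proc/cpuinfo on each one (a minimal sketch; `lscpu` from util-linux gives a fuller summary if it is installed):

```shell
# Print the CPU model string once; compare this between the two droplets.
grep -m1 'model name' /proc/cpuinfo
```

If the two droplets report different CPU families, that alone can explain a gap in single-request latency, though not usually one this large.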


It’s difficult to say, to be honest. It could be configuration that needs tweaking, but if it’s a 1:1 comparison in all other respects, it could also be a difference in hardware. Not all of our hardware is the same; each new deployment pushes out our latest configuration. All of it should be sufficient, and anything unable to perform adequately would obviously be removed, but it is not identical. For that reason alone, it may not be reasonable to expect two droplets to perform identically down to the smallest detail at all times.

It could also be network traffic, as two droplets will have two different paths inside of the datacenter. Different ports, different neighbors, etc.

Ensuring a 1:1 comparison on the software level by deploying two new servers could be a decent test as well.