Block Storage Volume performance

We found that XFS Block Storage Volume performance is quite low compared to the regular ext4 SSD disk bundled with Droplets.

This is the hdparm result on a XFS Block Storage Volume:

root@db1:~# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:       16970 MB in  1.99 seconds = 8514.97 MB/sec
 Timing buffered disk reads:  532 MB in  3.00 seconds =  177.28 MB/sec

This is instead the hdparm result of the bundled SSD:

root@db1:~# hdparm -Tt /dev/vda1

/dev/vda1:
 Timing cached reads:       16458 MB in  1.99 seconds = 8257.90 MB/sec
 Timing buffered disk reads: 2798 MB in  3.00 seconds =  932.35 MB/sec

We ran those tests after noticing that some stress tests on our APIs/DBs were performing poorly compared to other cloud providers.

Since we want to run a medium-sized MongoDB database (~25 GB), XFS is a requirement, but putting MongoDB data on Block Storage causes low performance compared to the Droplet's bundled SSD disk.
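For context, we prepared the volume roughly like this (the device name /dev/sda and mount point /mnt/mongodb are illustrative; check yours with lsblk):

```shell
# Format the Block Storage volume with XFS (destroys any existing data).
mkfs.xfs /dev/sda

# Mount it with noatime, which MongoDB's production notes suggest
# to avoid extra metadata writes on every read.
mkdir -p /mnt/mongodb
mount -o defaults,noatime /dev/sda /mnt/mongodb
```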

We really love DO services but we also need to have fast SSD + XFS + MongoDB. What can you suggest to avoid that low performance?


Hey there,

You will always see somewhat worse performance on a Volume compared to local storage; Volumes are network-based storage, which adds some overhead. In our experience, it is quite possible to run MongoDB databases on Volumes with proper tuning.

We recommend testing with MongoDB itself, as real workload performance can differ from hdparm, which only performs a simple sequential read test against a single device.
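One way to get closer to MongoDB's own I/O pattern is the mongoperf utility that shipped with older MongoDB releases; it reads a small JSON config from stdin (the thread count and file size below are illustrative, and you should run it from a directory on the Volume):

```shell
# Mixed random read/write test against a ~1 GB file in the current
# directory, using 16 concurrent threads.
echo "{nThreads:16,fileSizeMB:1000,r:true,w:true}" | mongoperf
```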

With regard to these results, that speed is expected on a Standard Droplet.

We recommend Optimized Droplets for higher burst throughput, but also recommend application-level tests to see how non-burst performance works for your use case. The IOPS you get can be tuned by adjusting queue depth.

Queue depth is one of those things we always like to bring up when it comes to improving MongoDB performance on a Volume. Volumes on DO are tuned to perform at 5k IOPS, but that is at a high queue depth and large block sizes. At a lower QD (queue depth), you may see fewer IOPS because additional latency is introduced. Our engineers have run benchmarks manipulating the application's QD: at a QD of ~2 we see about 500-600 IOPS, but raising that to 32 brings it up to ~3400 IOPS. You will need to tune the application to work at a higher QD and tweak its settings to increase the number of parallel writes to the disk.
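As a sketch of how to reproduce that queue-depth effect yourself, fio can run the same random-read workload at two different iodepth values (the file path and sizes here are assumptions; point it at a file on the Volume and compare the reported IOPS):

```shell
# 4k random reads at queue depth 2 vs. 32 against the same test file.
fio --name=qd2 --filename=/mnt/volume/fio.test --size=1G --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=2 \
    --runtime=30 --time_based --group_reporting

fio --name=qd32 --filename=/mnt/volume/fio.test --size=1G --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based --group_reporting
```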

At the OS level, the file /sys/block/<sdX>/device/queue_depth stores the QD value, and you'll need to tune MongoDB's QD and the OS QD together to get more performance. MongoDB's website has a blog article detailing queue depth. It's also suggested to discuss this with your DBA, or with MongoDB support, for the best MongoDB tuning options.
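A minimal sketch of inspecting and raising that value (the device name sda is an assumption; Block Storage volumes usually appear as /dev/sdX):

```shell
# Current queue depth for the volume.
cat /sys/block/sda/device/queue_depth

# Raise it (as root). Note this resets on reboot unless persisted,
# for example via a udev rule.
echo 32 > /sys/block/sda/device/queue_depth
```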

Regards,

Ethan Fox
Developer Support Engineer II - DigitalOcean

For anyone coming here after searching for the same question/answer regarding Block Storage performance:

In the meantime, DO has upped its game and we now get the following performance:

/dev/vda1:
 Timing cached reads:       16674 MB in  1.99 seconds = 8371.55 MB/sec
 Timing buffered disk reads: 2162 MB in  3.00 seconds =  720.64 MB/sec

root@nofam-website:/var/www/nofam-web-peak/public# hdparm -Tt /dev/sda

/dev/sda: (Block Storage)
 Timing cached reads:       15484 MB in  1.99 seconds = 7772.91 MB/sec
 Timing buffered disk reads: 1628 MB in  3.00 seconds =  542.47 MB/sec

Hello @efox, this is interesting, and I wonder if you have similar pointers for optimising the configuration of a MySQL data store.

At the moment, on a basic instance, a simple query counting the rows in a table with 3M rows takes 1.8 s on local storage vs 12 s on Block Storage.

hdparm testing as above gives similar results. The instance type is fine for our usage and I haven't tried Optimized Droplets (only CPU-Optimized are available at the moment in LON1).

root@db3:~# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:       16110 MB in  1.99 seconds = 8098.96 MB/sec
 Timing buffered disk reads:  704 MB in  3.01 seconds =  234.24 MB/sec

root@db3:~# hdparm -Tt /dev/vda1

/dev/vda1:
 Timing cached reads:       16454 MB in  1.99 seconds = 8273.15 MB/sec
 Timing buffered disk reads: 1802 MB in  3.00 seconds =  600.17 MB/sec