Question

NVMe disk busy 95%-100%

I looked at atop and noticed the disk load sitting between 95% and 100%.

I started to analyze. It all began when I shut down every running project on this dedicated server and saw the load drop to 15-20%, so I assumed the projects were to blame. But that wasn't it: the load came back and climbed to 75-85%, and in atop it was clear that whenever a kworker thread appeared, the disk load instantly jumped.

atop screenshots:

  1. https://i.stack.imgur.com/r81Wr.png

  2. https://i.stack.imgur.com/lsd8f.png

  3. https://i.stack.imgur.com/nQ86t.png

I looked at the perf log and perf top and saw:

https://i.stack.imgur.com/1VOxm.png https://i.stack.imgur.com/KdXFa.png

The drives are healthy; speed test results:

`dd` write test: 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.4319 s, 2.5 GB/s

`hdparm -t`: Timing buffered disk reads: 3878 MB in 3.00 seconds = 1292.39 MB/sec

What can I do next to localize what is loading the disks to 95-100%? The server runs Debian 10.
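For what it's worth, the busy percentage atop reports for a disk corresponds to the kernel's per-device busy-time counter (field 13, "milliseconds spent doing I/O", in `/proc/diskstats`); sampling it twice over an interval reproduces the figure. A minimal sketch, with `nvme0n1` and a 5-second interval as example values:

```shell
# util_pct BUSY_MS_START BUSY_MS_END INTERVAL_S
# Busy-time counters come from field 13 ("ms spent doing I/O")
# of /proc/diskstats, where $1=major, $2=minor, $3=device name.
util_pct() {
  # (delta of busy milliseconds) / (interval in milliseconds) * 100
  echo $(( ($2 - $1) * 100 / ($3 * 1000) ))
}

# Example run (device name nvme0n1 assumed):
# t1=$(awk '$3 == "nvme0n1" {print $13}' /proc/diskstats); sleep 5
# t2=$(awk '$3 == "nvme0n1" {print $13}' /proc/diskstats)
# util_pct "$t1" "$t2" 5    # utilisation percent over the interval
```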



Hi @thintroll,

To be perfectly honest, I haven't run into this issue myself, but it does look like something is writing heavily and creating the disk load. Could it be your MariaDB or your application?
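One way to confirm whether something like MariaDB is the writer is to watch the kernel's per-process cumulative I/O counters in `/proc/<pid>/io`; no extra tools are needed (`iotop -obPa` or `pidstat -d` present the same data more comfortably, if installed). A minimal sketch, where the `mysqld` process name is just an example suspect:

```shell
# Print the cumulative bytes a process has caused to be written to storage.
# Reading another user's /proc/<pid>/io generally requires root.
proc_writes() {
  awk '/^write_bytes:/ {print $2}' "/proc/$1/io"
}

# Example: check the MariaDB server (process name is an assumption):
# proc_writes "$(pidof mysqld)"
# Sample twice with a sleep in between; the delta is the write rate.
```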

Anyway, I'll follow this topic with interest to see the outcome.

I dug into the search results and realized I don't understand this at all. Some advise `noop`, others `mq-deadline`, others `none`, and it's not clear how safe it is to set these parameters, whether doing so is correct, or which one to choose for NVMe. Most of the advice I find is for Ubuntu, while I'm on Debian 10, and the information on this subject is all vague; everyone reports a different experience.

I looked in the sysfs files, and they show:

`cat /sys/block/nvme0n1/queue/scheduler` → `[none] mq-deadline`

`cat /sys/block/nvme1n1/queue/scheduler` → `[none] mq-deadline`
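As I understand it, the brackets around `[none]` mean that is the currently active scheduler, and `none` is the usual default for NVMe on blk-mq kernels such as Debian 10's; `noop` only exists on the legacy (non-mq) block stack, which is why advice mentioning it doesn't apply here. If a scheduler does need to be pinned persistently, this is normally done with a udev rule rather than by writing to sysfs at every boot. A sketch of such a rule; the file name is arbitrary, and `none` is shown only as an example value:

```
# /etc/udev/rules.d/60-ioscheduler.rules  (file name is my choice)
# Match NVMe namespace block devices and set the queue scheduler.
# Valid values on this kernel, per the sysfs output: "none" or "mq-deadline".
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
```

After creating the rule, it can be applied without a reboot via `udevadm control --reload` followed by `udevadm trigger`.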