By: lcer

Process jbd2/vda1-8 causing high load

May 27, 2015 3.3k views
Server Optimization Ubuntu

I am using a $10/month droplet with Ubuntu 14.04. I noticed that a process named "jbd2/vda1-8" pops up from time to time. As soon as it runs, all Apache processes become blocked and my entire server becomes unresponsive.

Some output from the "ps auxf" command:

Wed May 27 15:58:10 UTC 2015
root       199  0.0  0.0      0     0 ?        D    May19   6:52  \_ [jbd2/vda1-8]
Wed May 27 15:58:11 UTC 2015
root       199  0.0  0.0      0     0 ?        D    May19   6:52  \_ [jbd2/vda1-8]
www-data 27636  0.0  0.9 406192  9604 ?        D    15:57   0:00  \_ /usr/sbin/apache2 -k start
Wed May 27 15:58:12 UTC 2015
root       199  0.0  0.0      0     0 ?        D    May19   6:52  \_ [jbd2/vda1-8]
www-data 27362  0.1  1.1 405668 11672 ?        D    15:56   0:00  \_ /usr/sbin/apache2 -k start
www-data 27636  0.0  0.9 406192  9604 ?        D    15:57   0:00  \_ /usr/sbin/apache2 -k start
www-data 27685  0.3  1.3 408368 13812 ?        D    15:57   0:00  \_ /usr/sbin/apache2 -k start
www-data 27738  0.2  0.9 405252  9340 ?        D    15:57   0:00  \_ /usr/sbin/apache2 -k start
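
All of the stuck processes are in the "D" state (uninterruptible sleep), which as far as I understand means they are blocked waiting on disk I/O. A quick way to list everything in that state, using standard ps format specifiers:

ps -eo pid,stat,comm | awk '$2 ~ /^D/'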

The "top" command also shows "wa" above 90.

I realize it has something to do with disk journaling. Can it be disabled, or is there anything I can do to reduce the load from "jbd2" without affecting my droplet?
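
One tunable I have been looking at is the ext4 commit interval: if I understand the mount(8) documentation correctly, a larger value makes jbd2 flush the journal less often, at the cost of losing up to that many seconds of data after a crash. A sketch of what that would look like (commit=60 is only an example value, not a recommendation):

sudo mount -o remount,commit=60 /
# to make it persistent, add commit=60 to the options for / in /etc/fstab, e.g.
# /dev/vda1   /   ext4   defaults,errors=remount-ro,commit=60   0 1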

Thanks for any advice!

5 comments
  • I also see this, and it is on a $40/month droplet which also happens to be my main production DB server! This results in 1-2 minutes of really slow response every 30 minutes to an hour. Any help would be appreciated.

  • I wrote a ticket to support, and they moved my server without changing my IP address.
    No such problems after a month of use now.

  • Thanks for the info!

  • I also had some random I/O load spikes and the [jbd2/vda1-8] process behaving badly on my $80 droplet. Migration to another hypervisor via DO's support solved the issue. It has been two days and so far so good.

  • Same issue here on my droplet, which is in NY2. This thread helped a lot in identifying that the problem was out of my hands. DO support was really great, and my droplet, which had 20 GB of used disk space, took only ~5 minutes to migrate and come back online.

    The difference is night and day:

    [Graph: 3-day history, with a red line marking the migration downtime]

2 Answers

Me too. Did you find a way to solve this problem?

  • No, I wasn't able to solve the problem directly. What worked for me was minimizing disk access and using a memory cache wherever possible; a rough sketch of what I mean is below.
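
    For what it's worth, this is the kind of change I mean (the paths and sizes are just examples, not my exact configuration): noatime stops every file read from generating a metadata write that has to go through the journal, and a frequently written cache directory on tmpfs stays in RAM and never touches jbd2 at all.

    # /etc/fstab (illustrative entries only; /var/cache/myapp is a made-up path)
    /dev/vda1   /                  ext4    defaults,noatime,errors=remount-ro   0 1
    tmpfs       /var/cache/myapp   tmpfs   size=256m,mode=0755                  0 0

    Apply without rebooting:

    sudo mount -o remount,noatime /
    sudo mkdir -p /var/cache/myapp && sudo mount /var/cache/myapp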
