Best approach to checking the data from Graphs - CPU %?

Posted January 22, 2016

I run a pretty fast (barebones) forum and I hover around only 2-6% CPU, but I spike very hard up to 12% (the number is small, but compared to normal it’s double to 6x the usual CPU usage). I was wondering what tool, or what the best approach is, to see what was going on at that time?

I have a relative timestamp via that handy graph, but beyond that I don’t know how to capture what was going on at that moment. I’m also using ServerPilot…should I ask them as well?

Only been with DO barely a week, and since I have ServerPilot ingrained into the website/droplet I’m usually at a loss as to where to post lmao. I’m still trying to separate the two services in my mind.

Hope to hear from you!


5 answers

The cheap and quick way to see CPU hogs is to run the command top in a terminal. You can also batch the output to a file (using the -b option), but be sure to set an iteration limit (the -n option) and a reasonable sampling interval (the -d option), because the file can grow large if your window of interest is long. Example command to run top for 24 hours at 15-second intervals:

top -b -d 15 -n 5760 > /tmp/top.out
# after it's done
grep -n "load average" /tmp/top.out | more

That’s 24 hours * 60 min/hour * 4 samples/min = 5760 samples. The line number in the left column of the grep output corresponds to the line number in the file, which you can use to match the times when your load average reaches high levels.
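If 24 hours of samples is too much to eyeball, that grep can be extended with a small filter. This is just a sketch: the 0.5 threshold is arbitrary and the two sample lines are made up for the demo, so adjust both for your droplet (in real use, /tmp/top.out is the file produced by the top command above).

```shell
# Write two fake "top -b" header lines so the filter can be demoed;
# skip this step when you already have a real /tmp/top.out.
printf '%s\n' \
  'top - 00:15:01 up 7 days,  1 user,  load average: 0.08, 0.05, 0.01' \
  'top - 00:15:16 up 7 days,  1 user,  load average: 1.42, 0.61, 0.22' \
  > /tmp/top.out

# Keep only samples whose 1-minute load average exceeds 0.5,
# with the line number so you can jump to that spot in the file.
grep -n 'load average' /tmp/top.out \
  | awk -F'load average: ' '$2 + 0 > 0.5'
# Prints only the second sample (the one with load 1.42).
```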

Good hunting.

  • OHHHHH. So this works once you start it. It means you gotta keep it running, then after the 24 hours check the output.

    I was thinking it was grabbing for logs or something. Bahahaha. :)

    (Just closed mine inadvertently LOL!)

    I’ll restart it now and report back tomorrow night. (I fear the spammers are back on the forum…but it’s allllll gravy.)

As an aside, could the 2-6% be normal operating procedures and the jump to 12% be my own self visiting my website and thus utilizing all the services/features? Lol. Mindgasm ;)

I found this “sp-agent” process running in top (only 0.2% CPU lol).

What’s it for?


  • Don’t know what sp-agent is, but I wouldn’t worry too much about it yet, since you’re showing a very low load average and the %CPU usage for that process is a small fraction of a percent. I recommend taking more “top” samples before you start investigating, so you can get a better picture of any resource bottlenecks.

    PS - No preference on terminals.

    • Alright thanks for the heads up. I feel like a new parent with a first baby LOL. I haven’t used Putty since the 90s…but I’m right back to using it again. :D

@gndo do you have any preferences for terminals? I used PuTTY way back in the day and was wondering if people use anything else lol.

I could really use some help understanding this, but I found some culprits. I’d noticed that sometimes on my graphs it would spike to 70% or even 90%.

This is what comes up for the high CPU hogs (I added in the “user…” header line and put the acct in brackets):

user    cpu   mem   time      app
[root]  95.4  21.0  0:21.97   update-apt-+
[root]  50.8  11.8  0:07.63   update-apt-+
[root]  65.7   7.9  0:09.89   unattended-+

^ Come to think of it, that’s probably ServerPilot doing its thing huh....
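By the way, if you just want a one-shot snapshot of the current top CPU consumers without watching top interactively, here’s a quick sketch using standard procps ps on Linux (the column count of 6 is just my choice, nothing ServerPilot-specific):

```shell
# One-shot: list the five processes currently using the most CPU,
# sorted descending by the %CPU column (header line + top five).
ps aux --sort=-%cpu | head -n 6
```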

  • @GreenKi - So soon after midnight, serverpilot starts up its maintenance daemons. I’m not a SP user, so you should ask in their forum if you can modify your SP to limit the maximum CPU usage (or better yet, use the term CPU utilization so they won’t confuse it with CPU cores). Otherwise, an alternative would be to schedule those daemons at a more convenient time for your users. I’m thinking they’re using a cronjob, but you should ask in the SP forum if you don’t know how to change the time when those jobs run.
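    For example, if it does turn out to be a plain cron job, moving it is just a matter of editing the schedule fields. A hypothetical /etc/crontab entry is sketched below; the path and command name are illustrative only, since the real job name and location depend on your distro and on ServerPilot:

    ```
    # min hour dom mon dow  user  command
    # Run the daily maintenance job at 04:30 instead of just after midnight
    # (/path/to/maintenance-job is a placeholder, not a real SP path):
    30 4 * * *  root  /path/to/maintenance-job
    ```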