Memory seems to spike every morning

The Graphs beta has a memory graph that shows “the percentage of physical RAM being used”.

Is this total RAM usage, including the OS cache (buffers/cache), or is it “application memory” (for lack of a better word)? The most interesting number would be the memory applications actually use, i.e. without the filesystem cache — at least, that’s what I think.

I understand that it is hard to see from outside a VM what is what, but since the beta uses the in-VM agent, it should be possible to measure “actual” memory usage.

If I compare in-VM numbers, it does look like actual application memory, by the way.

The reason for this question is that my memory usage seems to jump up quite a bit every morning. Before I start to debug, I have to find out whether this is an actual problem: if the graph shows memory usage including cache, it could very well be that more cache is used after some process runs, and there is no issue. If the graph shows application memory, that would mean there is some sort of leak going on!

In a small droplet (512 MB), the memory usage jumps at least 15% every morning. After a reboot (this is somewhat of a test machine currently, so I can play around with it) it starts at 50–60% used, and then between 7 and 8 in the morning it jumps to 75%.
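For anyone wanting to compare in-VM numbers themselves, `free` separates cached memory from what applications are actually using (a quick sketch; column names vary slightly between procps versions):

```shell
# Show memory usage in MiB from inside the droplet.
free -m

# "buff/cache" is memory the kernel will release under pressure;
# on newer procps versions the "available" column estimates how much
# memory applications could still claim without swapping.
```

If “used” stays flat while “buff/cache” grows in the morning, the spike is most likely harmless caching.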

I’m running the Docker image from DigitalOcean and am not running any cron jobs.

Any thoughts?



Using top or htop will help you figure out which processes, run by which users, are using the most of your resources relatively easily (you’d have to install htop — it’s the colorful version of top).

Once top is running, press SHIFT + M to sort processes by memory usage and c to show the full command line for each process. You can also press e to cycle the reported memory usage through more human-readable units.

You can also run something like:

ps -A --sort -rss -o comm,pmem | head -n 11

to get a list of which processes are using the most memory right now. Sample output looks something like this:

COMMAND         %MEM
mysqld           6.9
php-fpm7.1       2.6
php-fpm7.1       2.5
php-fpm7.1       2.5
php-fpm7.1       1.0
memcached        0.8
snapd            0.5
nginx            0.3
systemd-journal  0.2
sshd             0.1

So as you can see above, MySQL happens to be using the most memory as a single process, followed by php-fpm7.1 (which is actually using more in total, but this output is broken down per process, not collectively).
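If you want the collective total per service rather than per process, you can aggregate the same `ps` output with awk (a quick sketch; it groups by the command name in the first column, so command names containing spaces would be miscounted):

```shell
# Sum %MEM per command name so multi-process services (php-fpm,
# nginx workers, etc.) are counted collectively.
ps -A --sort -rss -o comm,pmem | awk '
  NR > 1 { mem[$1] += $2 }                                # skip the header row
  END { for (c in mem) printf "%-16s %5.1f\n", c, mem[c] }
' | sort -k2 -rn | head -n 10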
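```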

@nanne not sure if you found the answer to your question, but if not: this article on DO explains that the memory shown in the memory graph is Total Memory − Free − Cached:

“Memory usage is calculated by subtracting free memory and memory used for caching from the total memory amount.”
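You can reproduce that formula by hand from /proc/meminfo (a sketch under the assumption that “memory used for caching” means the Cached line; whether Buffers is also subtracted isn’t stated in the quote):

```shell
# used% = (MemTotal - MemFree - Cached) / MemTotal * 100
awk '
  /^MemTotal:/ { total  = $2 }   # values are in kB
  /^MemFree:/  { free   = $2 }
  /^Cached:/   { cached = $2 }
  END { printf "%.1f%%\n", (total - free - cached) / total * 100 }
' /proc/meminfo
```

If this number jumps in the morning while `free` shows the growth in buff/cache, the graph is counting something beyond plain cache growth.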

I have the same issue: I have an alarm set up for when the server uses 80% of its RAM, and it gets triggered every morning at 6:25 UTC (the same time the cron.daily jobs run), but I can’t seem to reproduce it by running the crons manually. It’s frustrating because I’m pretty sure it’s a non-issue: there’s plenty of RAM available, it’s just cached.

I saw this with one of my droplets as well (strangely, only one of them) — but then someone pointed out it may be the system cron jobs in /etc/cron.daily that run every day, causing the spike in memory.
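To check whether those daily jobs line up with the spike, you can list what runs and when (paths assume a Debian/Ubuntu-based droplet; the cron.daily schedule in /etc/crontab is commonly 6:25):

```shell
# Show the system-wide daily cron jobs.
ls -l /etc/cron.daily/

# Show when cron.daily is scheduled to fire.
grep 'cron.daily' /etc/crontab
```

If the scheduled time matches the spike on the graph, one of those jobs (log rotation, package index updates, etc.) is the likely cause.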