Memory seems to spike every morning

The Graphs beta has a memory graph that shows “the percentage of physical RAM being used”.

Is this total RAM usage, including the OS cache (buffers/cache), or is it “application memory” (for lack of a better word)? The most interesting number here would be the memory applications actually use, so without the filesystem cache etc.; at least, that’s what I think.

I understand that from outside a VM it is obviously hard to tell what is what, but since the beta uses the in-VM agent, it might be just as possible to measure “actual” memory usage.

If I compare the in-VM numbers, it does look like it is actual application memory, by the way.

The reason for this question is that it looks like every morning my memory usage jumps up quite a bit. Before I start to debug, I have to find out whether this is an actual problem: if the graph shows memory usage including cache, it could very well be that more cache is used after some sort of process, and there is no issue. If the graph shows application memory, that would mean there is some sort of leak going on!

In a small Droplet (512MB), the memory usage jumps by at least 15% every morning. After a reboot (this is somewhat of a test machine currently, so I can play around with it) it starts at 50-60%, and then between 7 and 8 in the morning it jumps to 75% used.

I’m running the Docker image from DigitalOcean, and am not running any cron jobs.

Any thoughts?




Using top or htop will help you figure out which processes, run by which users, are using the most of your resources relatively easily (you’d have to install htop; it’s the colorful version of top).

Once top is running, press SHIFT + M to sort processes by memory usage and c to toggle the full command path. You can also hit e while top is running to cycle the reported memory units into a human-readable format.

You can also run something like:

ps -A --sort -rss -o comm,pmem | head -n 11

To get a list of which processes are using the most memory right now. Sample output would look something like this:

COMMAND         %MEM
mysqld           6.9
php-fpm7.1       2.6
php-fpm7.1       2.5
php-fpm7.1       2.5
php-fpm7.1       1.0
memcached        0.8
snapd            0.5
nginx            0.3
systemd-journal  0.2
sshd             0.1

So as you can see above, MySQL happens to be using the most memory, followed by php-fpm7.1 (which is actually using more collectively, but the listing is broken down per process, not per service).
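If you want the per-service total rather than one row per worker, you can aggregate the %MEM column yourself. A quick sketch using awk (assuming GNU ps, as on Ubuntu):

```shell
# Sum %MEM per command name so multi-worker services like php-fpm
# are counted collectively instead of one row per worker process.
ps -A -o comm,pmem --no-headers \
  | awk '{mem[$1] += $2} END {for (c in mem) printf "%-20s %5.1f\n", c, mem[c]}' \
  | sort -k2 -rn \
  | head -n 10
```

With the sample output above, this would report php-fpm7.1 as one line with the workers’ percentages summed.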

@nanne not sure if you found the answer to your question, but if not: this article on DO explains that the memory shown in the memory graph is Total Memory - Free - Cached:

“Memory usage is calculated by subtracting free memory and memory used for caching from the total memory amount.”
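You can reproduce that calculation yourself from /proc/meminfo (a Linux-only sketch; field names as found on standard kernels):

```shell
# used = MemTotal - MemFree - Buffers - Cached, as a percentage,
# matching the "Memory usage" definition quoted above.
awk '/^MemTotal:/ {total = $2}
     /^MemFree:/  {free  = $2}
     /^Buffers:/  {buf   = $2}
     /^Cached:/   {cache = $2}
     END {printf "used: %.1f%%\n", (total - free - buf - cache) / total * 100}' /proc/meminfo
```

If this number stays flat while the graph spikes, the spike is likely cache, not application memory.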

I have the same issue: I have an alarm set up for when the server uses 80% of the RAM, and it gets triggered every morning at 6:25 UTC (the same time the cron.daily jobs run), but I can’t seem to reproduce it by running the crons manually. It’s frustrating because I am pretty sure it’s a non-issue: there’s plenty of RAM available, it’s just that it’s cached.

I saw this with one of my droplets as well (strangely, only one of them), but then someone pointed out it may be the system cron jobs in /etc/cron.daily that run every day, causing the spike in memory.
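To see which jobs those are, you can list the directory and check the system crontab for when they fire (paths assume a stock Debian/Ubuntu layout):

```shell
# The daily jobs themselves:
ls -l /etc/cron.daily
# When cron.daily fires (often 6:25 on stock installs):
grep run-parts /etc/crontab
```

Running a suspect script by hand from that directory is a quick way to test whether it reproduces the spike.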


It is a bit generic, but it’s one method of seeing where an issue may lie. The graphs only report what the server allows them to, and are a way to pretty-print what you could decipher by running the same commands I provided above.

Beyond that, the graphs + monitoring will allow you to set up alerts that signal when, for example, CPU or RAM spikes to X%, XX%, etc. (monitoring may not be available on your account yet, though it is coming; if it is available, you can use it to set up alerts now).

Of course, there are multiple other ways to check usage and stats, as one piece of software rarely covers everything. The exception may be the do-agent, since I’m sure it’s a custom solution that they designed in-house, but I’ve not dug into the software to see what it does or how it works. You can take a look at the project, though, as it is open source.

The alternative to setting up monitoring (if it’s not enabled on your DO account) is to use a cron job that runs around the time you’re seeing the spikes and appends the output of a command, such as the one I provided, to a file.
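For example, a crontab line like this (installed with crontab -e; the log path is just a hypothetical example) would snapshot the top memory consumers every minute from 06:00 to 07:59, bracketing the spike:

```shell
# m hour dom mon dow  command
* 6-7 * * * { date; ps -A --sort -rss -o comm,pmem | head -n 11; } >> /var/log/mem-snapshot.log
```

Afterwards you can diff the snapshots from just before and just after the jump to see which process grew.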

The commands I provided may give a little more insight since they target specifics, whereas the graphs only provide a general overview of what’s going on, not the specifics of which user, where, and so on.


Memory utilization will depend on what you’re running on the Droplet. From your post, I know you’re running Docker, but what else is being run inside or outside the Docker container?

For example, if you’re running NGINX inside a Docker container, memory usage can fluctuate just as would be the case if you were running MySQL/MariaDB or similar.

Even though it may be a dev box, keep in mind that DigitalOcean IPs are recycled (not shared at the same time, but previously assigned to others), so it’s possible for repeated attempts to access an IP to spike resource usage. The best way to combat that (if that’s indeed what’s happening) is to set up your firewall and block access to all ports other than those you absolutely need open.

On Ubuntu you can use iptables, or ufw, which is a bit easier to work with since it doesn’t require as many odd/obscure arguments.
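A minimal ufw policy would look something like this (port 22 for SSH is an assumption here; adjust the allowed ports to your actual services before enabling, or you can lock yourself out):

```shell
# Deny everything inbound by default, then open only what you need.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP, if you serve web traffic
sudo ufw enable
```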

We can also look at the server and run the free command to see what current usage is in regard to RAM/memory:

free -h

You’ll see something that looks like:

              total        used        free      shared  buff/cache   available
Mem:           3.9G        523M        272M        111M        3.1G        3.0G
Swap:            0B          0B          0B

In the above, my Droplet has roughly 4GB of RAM and I’ve used 523MB, but there’s only 272MB free. Why? Because the rest of it is being used for cache.

If I actually needed the 3GB that’s available, the used column would begin to increase while the cache would begin to decrease. That would mean I’m getting closer to using my physical memory allocation and that less and less is being cached.
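The kernel exposes this distinction directly: on Linux 3.14+, MemAvailable in /proc/meminfo is its estimate of how much memory could be claimed without swapping, which is what free reports in the available column.

```shell
# Compare raw free memory with the kernel's "available" estimate,
# which accounts for reclaimable cache.
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
```

For alerting, watching available rather than free avoids false alarms caused purely by cache growth.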