By: nanne

Memory seems to spike every morning

February 9, 2017 · 601 views
Control Panels · Monitoring · Ubuntu 16.04

The Graphs beta has a memory graph that shows "the percentage of physical RAM being used".

Is this total RAM, i.e. including the OS cache (buffers/cache), or is it "application memory" (for lack of a better word)? The most interesting number here would be the memory actually used by applications, i.e. without the filesystem cache etc. -- at least, that's what I think.

I understand that it is obviously hard to see from outside a VM what is what, but since the beta uses the in-VM agent, it should be possible to measure "actual" memory usage.

If I compare the in-VM numbers, it does look like it is actual application memory, by the way.

The reason for this question is that it looks like my memory usage jumps up quite a bit every morning. Before I start to debug, I have to find out whether this is an actual problem: if the graph shows memory usage including cache, it could very well be that more cache is used after some process runs, and there is no issue. If the graph shows application memory, that would mean there is some sort of leak going on!

On a small Droplet (512 MB), the memory usage jumps at least 15% every morning. After a reboot (this is somewhat of a test machine currently, so I can play around with it) it starts at 50-60%, and then between 7 and 8 in the morning it jumps to 75% used.

I'm running the Docker image from DigitalOcean, and am not running any cron jobs.

Any thoughts?

4 Answers

@nanne

Memory utilization will depend on what you're running on the Droplet. From your post, I know you're running Docker, but what else is being run inside or outside the Docker container?

For example, if you're running NGINX inside a Docker container, memory usage can fluctuate just as would be the case if you were running MySQL/MariaDB or similar.

Even though it may be a dev box, keep in mind that DigitalOcean IPs are reused -- not at the same time, but they have been assigned to others before -- so it's possible for repeated attempts to access your IP to spike resource usage. The best way to combat that (if that's indeed what's happening) is to set up your firewall and block access to all ports other than those you absolutely need open.

On Ubuntu you can use iptables, or ufw, which is a bit easier to work with since it doesn't require as many obscure arguments.
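
For example, assuming the only things you actually need reachable are SSH and a web server (adjust the ports to whatever your Droplet really runs), a minimal ufw setup would look something like:

sudo ufw allow 22/tcp      # SSH -- don't lock yourself out
sudo ufw allow 80/tcp      # HTTP, only if you serve web traffic
sudo ufw allow 443/tcp     # HTTPS, same
sudo ufw enable
sudo ufw status verbose    # confirm what's open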

We can also look at the server and run the free command to see the current RAM usage:

free -mh

You'll see something that looks like:

              total        used        free      shared  buff/cache   available
Mem:           3.9G        523M        272M        111M        3.1G        3.0G
Swap:            0B          0B          0B

In the above, my Droplet has roughly 4GB of RAM and 523MB is used, yet only 272MB is free. Why? Because the rest is being used for cache.

If I actually needed the 3GB that's available, the used column would begin to increase while the cache would begin to decrease. That would mean I'm getting closer to using my physical memory allocation and that less and less is being cached.
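
If you want the number that excludes cache -- closer to the "application memory" you're asking about -- keep an eye on the available column, or on MemAvailable in /proc/meminfo, which any reasonably recent kernel (3.14+, so Ubuntu 16.04 is fine) exposes:

grep -E 'MemTotal|MemAvailable' /proc/meminfo    # total RAM vs. what's usable without swapping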

  • Thank you for your excellent reply!
    I do have to admit it's a bit generic, though. The thing I notice is that the graph signals a repeating, very clear jump in memory usage.

    The first thing I need to figure out is what the graph is showing. I know about the 'free' output: that's why I asked whether the output on the beta graphs includes or excludes cache.
    This makes a huge difference in how to find out what is going on.

    For the more specific parts:
    I have blocked all unused ports with ufw, and I'm having a hard time believing that this specific spike is caused by traffic. It could obviously be, and I'll try to debug that, but I might need to check what's going on at the server pre-jump and post-jump, which is sorta on the early side for me :)

@nanne

Using top or htop will help you figure out relatively easily which processes, run by which users, are using the most of your resources (you'd have to install htop -- it's the colorful version of top).
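
If htop isn't installed yet, it's available from Ubuntu's default repositories:

sudo apt-get install htop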

Once top is running, you can sort processes by memory usage with SHIFT + M and toggle the full command line with c. You can also hit e while top is running to cycle the per-process memory figures to a more human-readable unit.

You can also run something like:

ps -A --sort -rss -o comm,pmem | head -n 11

to get a list of which processes are using the most memory right now. Sample output would look something like this:

COMMAND         %MEM
mysqld           6.9
php-fpm7.1       2.6
php-fpm7.1       2.5
php-fpm7.1       2.5
php-fpm7.1       1.0
memcached        0.8
snapd            0.5
nginx            0.3
systemd-journal  0.2
sshd             0.1

So as you can see above, MySQL happens to be the single process using the most memory, followed by php-fpm 7.1 (which collectively uses more, but the output is broken down per process, not per command).
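
If you want that collective view, one rough way (just a sketch, there are plenty of alternatives) is to sum the per-process figures by command name:

# summed %MEM per command name, highest first
ps -A -o comm,pmem --sort -rss | awk 'NR>1 {mem[$1]+=$2} END {for (c in mem) printf "%-16s %5.1f\n", c, mem[c]}' | sort -k2 -nr | head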

  • Eeuhm, again, thank you. The same thing though: it is a bit generic. I know this, and I can and will check what's going on, but right now I'm interested in whether what I'm seeing is actually an issue. Do you know what the Graphs beta shows?

    To be clear: I don't need specific help in seeing what is currently going on. I'm foremost interested in finding out whether what I see in the graph is actually an issue, and then I'll look into why it is happening.

@nanne

It is a bit generic, but it's one method of seeing where an issue may lie. The graphs only report what the server exposes to them and are a way to pretty-print what you could decipher by running the same commands I provided above.

Beyond that, the graphs + monitoring will allow you to set up alerts that signal when, for example, CPU or RAM spikes to X%, XX%, etc. (Monitoring may not be available on your account yet -- it is coming -- but if it is, you can use it to set up alerts now.)

Of course, there are multiple other ways to check usage and stats, as one piece of software rarely covers everything. The exception may be the do-agent, since I'm sure it's a custom solution designed in-house, but I've not dug into the software to see what it does or how it works. You can take a look at the project, though, as it is open source:

https://github.com/digitalocean/do-agent

The alternative to setting up monitoring (if it's not enabled on your DO account) is to use a cron job that runs around the same time you're seeing the spikes and appends the output of a command, such as the ones I provided, to a file.
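
As a rough sketch -- the log path and the 6-9 AM window are just placeholders, adjust them to when your spike actually happens -- a crontab entry like this would snapshot memory usage every five minutes:

# edit with: crontab -e   (runs every 5 minutes between 06:00 and 08:59)
*/5 6-8 * * * (date; free -m; ps -A --sort -rss -o comm,pmem | head -n 11) >> "$HOME/mem-snapshots.log" 2>&1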

The commands I provided may give a little more insight, since they target specifics whereas the graphs don't. The graphs give a general overview of what's going on, but not the specifics of which user, which process, and so on.

I saw this with one of my Droplets as well (strangely, only one of them), but then someone pointed out that it may be the system cron jobs in /etc/cron.daily, which run once a day, causing the spike in memory.
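
To see whether that lines up with the jump you're seeing, you can list those jobs and check when the daily run is scheduled (on a stock Ubuntu install it's kicked off from /etc/crontab, typically around 06:25):

ls /etc/cron.daily
grep cron.daily /etc/crontab    # shows when run-parts starts the daily jobs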
