Graphs and monitoring differ from real memory usage?

Hey, I hope you guys can help me out with this issue.

Currently I have a 1 GB RAM Droplet running Ubuntu 16.04.2.

On my Droplet dashboard, the memory graph says that I'm using about 71.8% of my RAM. See: Dashboard graphs image

But inside my Droplet, when I run a command to display the current memory usage (%), the result is this: Inside droplet

As you guys can see, MySQL is using 26.6% of my RAM, my Java web service is using 32.2%, and the other processes use almost nothing on the VM. The dashboard memory graph keeps going up and up, but inside my Droplet the usage stays the same: Java at 32.2% and MySQL at 26.6%.

Am I reading the graphs wrong? Or is it really a bug in the dashboard?



Hello, is there any update on this? At my company we often get alerts from DigitalOcean about memory running high, but when we check on the server itself the usage is much lower and within the normal range. Because of these frequent alerts, we set up a script to restart the server automatically whenever used memory is high, but unfortunately we are still getting the alerts, so there is a mismatch for sure. Please let us know if there is any update!


So I’ve had quite a long back and forth with DO support about this. The final answer I got from them was:

“Based on the follow-up, it looks like this is a result of the consideration for what is being computed. The specific example outlined:”

cat /proc/meminfo
MemTotal:        4046440 kB
MemFree:          311132 kB
MemAvailable:    1705164 kB
Buffers:          500192 kB
Cached:           612276 kB

So given the output of /proc/meminfo above:

# (total - free - cached) / total
(4046440 - 311132 - 612276) / 4046440 = ~77%

Here is what they say about how they measure memory on the docs page:

This still differs quite significantly from the value derived from MemAvailable (which, I think, is what most of us are looking at; it's also what you will see if you run free -m):

(1705164 / 4046440) = ~42% available = ~58% used.
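Both calculations can be reproduced in shell. The figures below are hard-coded from the /proc/meminfo sample above so the snippet runs anywhere; on a live Droplet you would read them from /proc/meminfo instead (e.g. with awk):

```shell
# Sample values (in kB) taken from the /proc/meminfo output above
total=4046440
free=311132
cached=612276
available=1705164

# DigitalOcean's (older) dashboard formula: (total - free - cached) / total
do_used=$(( (total - free - cached) * 100 / total ))
echo "dashboard-style used: ${do_used}%"   # 77 (integer truncation)

# What free/htop effectively report: (total - available) / total
free_used=$(( (total - available) * 100 / total ))
echo "free-style used: ${free_used}%"      # 57 (integer truncation; ~58% with rounding)
```

Note that shell arithmetic truncates, so the percentages come out one point lower than the rounded figures quoted above.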

I don’t know what accounts for this difference, nor which is the more reliable metric - though, from what I can gather, most people look at memory available rather than doing the maths themselves.

You can also interact with the DO agent via: /opt/digitalocean/bin/do-agent

e.g.: if you run: /opt/digitalocean/bin/do-agent --stdout-only on a droplet with monitoring enabled it will dump the stats in Prometheus form to stdout.

In conclusion:

I am not sure that I trust the results that I am getting from DigitalOcean. Furthermore, I’m rather frustrated because these (potentially) misleading results have led me to purchase more Droplets than I perhaps needed.

Finally, I wrote an ansible script that might be useful to get the memory available across your droplets:

- hosts: ...
  tasks:
    - name: Get memory usage
      shell: free -m
      register: result
    - debug: var=result.stdout_lines
    - debug: var=ansible_memory_mb
    - debug:
        msg: "{{ (ansible_memory_mb.nocache.used / ansible_memory_mb.real.total * 100) | round(1) }}% used"

At the end of the day, I ended up more confused than when I started. I can’t understand why there is such a significant difference between the calculation that they are doing and the results that I am getting. Nor do I know which metrics to trust. I’d greatly appreciate if someone smarter than me could explain this to me and help me determine an actionable metric!

Hi there!

I have encountered the same issue many times and I am really getting disappointed :(. We receive many false alerts every day about memory usage (I have an alert for >70% of memory used).

But when I check with htop, free -m, or free -h, server memory usage is around 20-25%.

root@edukai:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.9G        1.0G        211M        1.4M        2.6G        2.6G
Swap:          4.0G        5.8M        4.0G
root@edukai:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           3945        1065         211           1        2667        2630
Swap:          4095           5        4090

At the same moment (and for the last hour) the DigitalOcean memory dashboard shows this

Is there any update, please? This is getting very annoying for me; I am considering using collectd + Stackdriver instead of DigitalOcean’s monitoring.

Product Manager for the monitoring team here. Our team is in agreement that the memory usage graph incorrectly includes cached memory as memory used. I have logged this as a bug and will circle back once a fix is released. I apologize for the confusion.

Hey there @c0nf1ck, @dmaatoug and everyone in the comments! Our Engineering team has deployed changes to accommodate this, and memory graphs now show (MemTotal - MemAvailable) / MemTotal. This formula is also used for memory utilization alerts. This does require that your Droplet has the up-to-date agent installed (version 3.10.1+) and a Linux kernel of version 3.14+.
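For a rough sanity check of the new formula, here it is applied to the free -m figures posted earlier in this thread (MemTotal 3945 MB, MemAvailable 2630 MB):

```shell
# (MemTotal - MemAvailable) / MemTotal, using the free -m figures
# posted above (values in MB)
total=3945
available=2630
used_pct=$(( (total - available) * 100 / total ))
echo "dashboard used: ${used_pct}%"   # prints "dashboard used: 33%"
```

That ~33% is much closer to what free and htop report than the old cached-memory-included figure.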

@c0nf1ck @dmaatoug @michellep @Deividas @ialliht @krishraghuram @janmikes @toast38coza @Mohsen47 @diverboxer

Folks, I have created an idea for fixing this issue; support suggested that I create one:

You can vote for it or comment on it here:

I think this would help many users!

Please see another question I have opened with the same issue, without knowing this question was already published.

It seems like the difference is in the buffers: the standard tools count buffers as available, while the DigitalOcean monitor counts them as not available.

I am not sure if that’s correct.

I can change the buffer allocation a bit by tuning vfs_cache_pressure down. But I think I would prefer the DigitalOcean monitor to conform to the standard tooling, or to explain why the standard notion of memory available is not appropriate.
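For reference, the tuning mentioned above is done via the vm.vfs_cache_pressure sysctl (kernel default 100; lower values make the kernel keep more dentry/inode cache, higher values reclaim it more aggressively). A minimal sketch, with 50 as an illustrative value only:

```shell
# Check the current value (kernel default is 100)
sysctl vm.vfs_cache_pressure

# Lower it so the kernel retains more dentry/inode cache
# (the direction the poster above describes; requires root)
sudo sysctl -w vm.vfs_cache_pressure=50

# Persist the setting across reboots
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.conf
```

Keep in mind this only shifts how much memory sits in buff/cache; it does not change whether the monitoring agent counts that cache as "used".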

Do yourself a favor, like me: migrate everything to Vultr. Much better than DigitalOcean!

Thank me later :)

I’m still running into the fundamental "buffer/cache is included in the used memory calculation" issue even with the new metrics, which is baffling more than a year later!

I started getting email alerts that my memory usage is over 50%, but when I run free -h --wide I get:

              total        used        free      shared     buffers       cache   available
Mem:           7.8G        366M        514M        144M        1.9G        5.0G        6.9G
Swap:          1.0G        6.4M        1.0G

Is there a way to reset the monitor? At this point I’m just going to have to turn the notifications off, which isn’t my preferred solution!

We dug through both the frontend and backend code and were not able to find an incorrect memory calculation, despite a comment to the contrary I found on an older bug report. Our best guess is that the discrepancy is due to the delay between processing/displaying metrics in the dashboard and viewing the metrics via the CLI. If you are able to replicate it, please let us know. Thanks!