Question

Monitoring graphs differ from real memory usage?

Hey, I hope you can help me out with this issue.

I currently have a 1 GB RAM Droplet running Ubuntu 16.04.2.

On my Droplet dashboard, the memory graph says I'm using about 71.8% of my RAM (see the dashboard graph screenshot).

But inside the Droplet, when I run a command to display the current memory usage (%), the result looks quite different (see the screenshot from inside the Droplet).

As you can see, MySQL is using 26.6% of my RAM, my Java web service is using 32.2%, and the remaining processes use almost nothing. The dashboard graph keeps climbing, but inside the Droplet the numbers stay the same: Java at 32.2% and MySQL at 26.6%.
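For reference, a per-process listing like the one in my screenshot can be produced with something like this (top five processes by %MEM; the exact command may differ from the one I used):

ps aux --sort=-%mem | head -n 6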

Am I reading the graphs wrong, or is this really a bug in the dashboard?

Thanks.


So I’ve had quite a long back and forth with DO support about this. The final answer I got from them was:

“Based on the follow-up, it looks like this is a result of the consideration for what is being computed. The specific example outlined:”

cat /proc/meminfo
MemTotal:        4046440 kB
MemFree:          311132 kB
MemAvailable:    1705164 kB
Buffers:          500192 kB
Cached:           612276 kB
...

So given the output of /proc/meminfo above:

# (total - free - cached) / total
(4046440 - 311132 - 612276) / 4046440 = ~77%

Here is what they say about how they measure memory on the docs page: https://www.digitalocean.com/docs/monitoring/resources/glossary-of-terms/#memory

This still differs quite significantly from the value derived from MemAvailable (which, I think, is what most of us look at; it's also what you will see if you run free -m):

1705164 / 4046440 = ~42% available, i.e. ~58% used.

I don't know what accounts for this difference, nor which is the more reliable metric, though from what I can gather most people look at available memory rather than doing the maths themselves.
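To make the comparison concrete, here is a small awk sketch (assuming the standard /proc/meminfo field names) that computes both figures side by side:

# DO-style:           (MemTotal - MemFree - Cached) / MemTotal
# MemAvailable-style: (MemTotal - MemAvailable) / MemTotal
awk '
  /^MemTotal:/     { total = $2 }
  /^MemFree:/      { free = $2 }
  /^MemAvailable:/ { avail = $2 }
  /^Cached:/       { cached = $2 }
  END {
    printf "DO-style used:           %.1f%%\n", (total - free - cached) * 100 / total
    printf "MemAvailable-style used: %.1f%%\n", (total - avail) * 100 / total
  }
' /proc/meminfo

On the numbers above this prints roughly 77.2% and 57.9%, matching the two calculations.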

You can also interact with the DO agent directly via /opt/digitalocean/bin/do-agent.

For example, running /opt/digitalocean/bin/do-agent --stdout-only on a Droplet with monitoring enabled dumps its stats in Prometheus format to stdout.
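If you only care about the memory metrics, you can filter the dump; the exact metric names may vary between agent versions, so the grep pattern here is just a guess:

/opt/digitalocean/bin/do-agent --stdout-only | grep -i memory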

In conclusion:

I am not sure that I trust the results that I am getting from DigitalOcean. Furthermore, I'm rather frustrated because these (potentially) misleading results have led me to purchase more Droplets than perhaps I needed.

Finally, I wrote an Ansible playbook that might be useful for checking memory usage across your Droplets:

---
- hosts: ...
  tasks:
    - name: Get memory usage
      shell: free -m
      register: result

    - debug:
        var: result.stdout_lines

    - debug:
        var: ansible_memory_mb

    # debug's `var` expects a bare variable name, so use `msg` for an expression
    - debug:
        msg: "{{ (ansible_memory_mb.nocache.used / ansible_memory_mb.real.total * 100) | round(1) }}% used"
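To run it against your inventory (the file names here are just placeholders):

ansible-playbook -i inventory.ini memcheck.yml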

At the end of the day, I ended up more confused than when I started. I can't understand why there is such a significant difference between the calculation they are doing and the results I am getting, nor do I know which metrics to trust. I'd greatly appreciate it if someone smarter than me could explain this and help me settle on an actionable metric!

Hi there!

I have encountered the same issue many times and I am getting really disappointed :(. We receive many false alerts about memory usage every day (I have an alert set for >70% of memory used).

But when I check with htop, free -m, or free -h, server memory usage is around 20-25%.

root@edukai:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.9G        1.0G        211M        1.4M        2.6G        2.6G
Swap:          4.0G        5.8M        4.0G
root@edukai:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           3945        1065         211           1        2667        2630
Swap:          4095           5        4090
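For reference, this one-liner reproduces the "used" percentage that free reports (a quick sketch reading the total and used columns of the Mem: line):

free | awk '/^Mem:/ { printf "%.1f%%\n", $3 / $2 * 100 }'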

At the same moment (and for the last hour), the DigitalOcean memory dashboard shows a much higher value (see the dashboard screenshot).

Is there any update, please? This is getting very annoying; I am considering using collectd + Stackdriver instead of DigitalOcean's monitoring.

Product Manager for the monitoring team here. Our team is in agreement that the memory usage graph incorrectly includes cached memory as memory used. I have logged this as a bug and will circle back once a fix is released. I apologize for the confusion.