Monitoring graphs differ from real memory usage?

July 28, 2017 4.1k views
Java MySQL Monitoring Ubuntu 16.04

Hey, I hope you guys can help me out with this issue.

Currently I have a 1 GB RAM Droplet running Ubuntu 16.04.2.

On my Droplet dashboard, the memory graph says that I'm using about 71.8% of my RAM. See:
[Image: dashboard memory graph]

But inside my Droplet, when I run a command to display the current memory usage (%), the result is this:
[Image: terminal output inside the Droplet]

As you guys can see, MySQL is using 26.6% of my RAM, my Java web service is using 32.2%, and a few other processes use almost nothing.
The memory graph keeps going up and up, but inside my Droplet the usage stays the same: Java at 32.2% and MySQL at 26.6%.

Maybe I'm reading the graphs wrong? Or is it really a bug in the dashboard?


6 Answers

Product Manager for the monitoring team here. Our team is in agreement that the memory usage graph incorrectly includes cached memory as memory used. I have logged this as a bug and will circle back once a fix is released. I apologize for the confusion.

  • Any ETA on this?

  • Has this been fixed?
    Because I got this…

    (venv) user@droplet:~$ free -h --wide
                  total        used        free      shared     buffers       cache   available
    Mem:           1.9G        1.1G         94M        788K         71M        673M        716M
    Swap:            0B          0B          0B

    Notice 716M available memory.

    The DO graph says 64.5 percent usage, which implies roughly 674.5M available memory (35.5% of 1.9G). That is quite close, but not exact.
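
    For anyone who wants to repeat that sanity check on their own Droplet, here is a small sketch. The 64.5 figure is the value read off the dashboard in this thread (an assumption for your system), and the field positions assume the newer free output where available is the last column:

    ```shell
    # Compare the dashboard's "used %" with what free reports as available.
    total_kib=$(free | awk '/^Mem:/ {print $2}')    # MemTotal in KiB
    avail_kib=$(free | awk '/^Mem:/ {print $NF}')   # available (last column)
    graph_used_pct=64.5                             # value from the dashboard (assumption)
    # Available memory implied by the graph: total * (100 - used%) / 100
    implied_avail_kib=$(awk -v t="$total_kib" -v p="$graph_used_pct" \
        'BEGIN { printf "%d", t * (100 - p) / 100 }')
    echo "free says ${avail_kib} KiB available; graph implies ${implied_avail_kib} KiB"
    ```

    If the two numbers are close, the graph is likely tracking total minus available; a large gap suggests cached memory is being counted as used.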


I cannot answer your question about the dashboards, but to get the real memory usage from the command line you can use the command free -h. It will give you your total RAM along with the used, free, shared, buffered, and cached amounts. This is the output from one of my servers:

root@backup01:~# free -h
             total       used       free     shared    buffers     cached
Mem:          7.7G       7.0G       683M        89M       4.1G       891M
-/+ buffers/cache:       2.0G       5.7G
Swap:         7.9G       406M       7.5G

The values in the first line can be misleading; look at the second one instead. It says 2.0G of my 7.7G are used, which means memory usage is about 26%. What are your values?

Also note that these values differ from time to time so check the last value in your dashboard with the value you got in your terminal.

Hope this helps.
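
The "-/+ buffers/cache" arithmetic above can also be reproduced by hand, which is useful on newer systems where free no longer prints that second line. This is a rough sketch: used here is total minus free, buffers, and cache, which is an approximation (the kernel's own accounting also considers reclaimable slab and similar):

```shell
# Approximate "used excluding buffers/cache" straight from /proc/meminfo.
read_kib() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }
total=$(read_kib MemTotal)
free_kib=$(read_kib MemFree)
buffers=$(read_kib Buffers)
cached=$(read_kib Cached)
# Memory genuinely held by processes, roughly: total - free - buffers - cached
used=$((total - free_kib - buffers - cached))
awk -v u="$used" -v t="$total" \
    'BEGIN { printf "used excluding cache: %.1f%%\n", 100 * u / t }'
```

On the server shown above this would print roughly 26%, matching the "-/+ buffers/cache" line rather than the raw first line.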

I use htop to monitor too. The memory usage reported by the DigitalOcean graph is always higher than htop's, and the difference is about 25 percentage points (70% vs. 43%). It's weird.

I’m also getting strange and, I think, inaccurate numbers from the dashboard monitor. It’s currently telling me that memory used is 80%, but here is the output of free -h:

              total        used        free      shared  buff/cache   available
Mem:           2.0G        820M         95M         45M        1.1G        881M
Swap:            0B          0B          0B

That looks to me like total - available = 1.119G, or ~56% used. How is the DigitalOcean agent actually determining used memory percentages?
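
For reference, here is a sketch of that total - available arithmetic, reading /proc/meminfo directly. MemAvailable is the kernel's own estimate (available since kernel 3.14); whether the DigitalOcean agent computes its percentage this way is pure speculation:

```shell
# "Used %" as (MemTotal - MemAvailable) / MemTotal, the arithmetic used above.
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
awk -v t="$total" -v a="$avail" \
    'BEGIN { printf "used: %.1f%%\n", 100 * (t - a) / t }'
```

With the free -h output above (2.0G total, 881M available) this comes out near 56%, not the 80% the dashboard shows.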

What is the status of the bug?
We need this resolved, as we are getting memory alerts and have to log in to the console each time to run free -h.

Can we get a timescale, please?

We dug through both the frontend and backend code and were not able to find an incorrect memory calculation, despite a comment to the contrary I found on an older bug report. Our best guess is that the discrepancy is due to the delay between processing/displaying metrics in the dashboard and viewing the metrics via the CLI. If you are able to replicate it, please let us know. Thanks!

  • I do not know where the calculation is wrong, but it certainly is (and badly). We have a 4GB instance where the dashboard reports steadily increasing memory use, currently at 55%, but the machine is idle. free reports:

                total        used        free      shared  buff/cache   available
       Mem:      3.9G        206M        651M         43M        3.0G        3.3G
       Swap:      0B          0B          0B
    • This might be useful: it indeed seems to be related to cache. The graph appears to climb whenever some I/O-intensive process runs. I can tell because those runs happen infrequently and I know their time slots.

    • Thanks for the follow-up. Could you please create a support ticket with the droplet name so we can further investigate? I’ll pass your details and thoughts around cache on to our eng team.

