Why is my CentOS droplet eating all the RAM?

I started with a 2 vCPU / 2 GB CentOS 7 x64 droplet created from the Marketplace Plesk image. I have only created a WordPress website with no more than 400 visits per month, and I followed the initial server setup guide for CentOS 7 on DigitalOcean.

Then I started getting alerts from the DigitalOcean monitoring service because the droplet was using more than 70% of its memory. I receive those emails about 7 times per day, counting both the high-memory alerts and the resolved notifications.

So I thought I was running out of memory and purchased 2 GB extra; my droplet is now 2 vCPU / 4 GB. After 3 days, same as before: the alert emails for memory usage beyond 70% started hitting my inbox again.

At this particular moment the server is above that 70% memory threshold, so I ran the command I know DigitalOcean uses to calculate memory usage:

cat /proc/meminfo

The results are:

MemTotal:	3880364 kB
MemFree:          287080 kB
MemAvailable:    2561780 kB
Buffers:              32 kB
Cached:           994716 kB
SwapCached:            0 kB
Active:          1224116 kB
Inactive:         455532 kB
Active(anon):     769364 kB
Inactive(anon):    27284 kB
Active(file):     454752 kB
Inactive(file):   428248 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                16 kB
Writeback:             0 kB
AnonPages:        684948 kB
Mapped:           101816 kB
Shmem:            111748 kB
Slab:            1750432 kB
SReclaimable:    1678876 kB
SUnreclaim:        71556 kB
KernelStack:        5552 kB
PageTables:        17372 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1940180 kB
Committed_AS:    3978072 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       95708 kB
VmallocChunk:   34359537660 kB
HardwareCorrupted:     0 kB
AnonHugePages:    344064 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       98276 kB
DirectMap2M:     4096000 kB
DirectMap1G:           0 kB

The output of free -m is:

              total        used        free      shared  buff/cache   available
Mem:           3789         935         240         109        2613        2464
Swap:             0           0           0
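To cross-check those numbers I read /proc/meminfo directly (a small sketch; I am assuming DigitalOcean's alert is based on "used" memory rather than the kernel's MemAvailable figure):

```shell
# Print the percentage of RAM the kernel says is actually available.
# MemAvailable already accounts for reclaimable page cache and slab.
awk '/^MemTotal:/     {total=$2}
     /^MemAvailable:/ {avail=$2}
     END {printf "available: %.1f%% of RAM\n", avail * 100 / total}' /proc/meminfo
```

With the meminfo values above that works out to roughly 66% available, even though free reports only 240 MB as "free".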

Plesk, on the other hand, is reporting through Grafana:

- memory:memory-used:value 858 MB
- memory:memory-cached:value 971 MB
- memory:memory-slab_recl:value 1.6 GB

Any ideas? Anything that would help me understand this situation and solve it?

It doesn’t look like an overwhelming number of HTTP processes is running, but check your access_log for lots of requests to XML-RPC (xmlrpc.php). That seems to be a favorite attack point, and enough requests can easily overwhelm a droplet. I would also check your security log, /var/log/secure on CentOS, for SSH login attempts. I recently had a server overwhelmed by login attempts and had to put a more aggressive fail2ban policy in place.
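As a starting point, something like the following can surface both problems (the log paths are assumptions; Plesk also keeps per-domain Apache logs under /var/www/vhosts/):

```shell
# Count requests to xmlrpc.php in the web server access log
grep -c 'xmlrpc.php' /var/log/httpd/access_log

# Tally failed SSH logins by source IP; a long tail here suggests a
# brute-force scan that fail2ban should be blocking
grep 'Failed password' /var/log/secure \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head
```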

Hi @danielpcharrua,

I’d recommend using the top or htop command. These commands will show you the top processes by CPU and RAM usage, how long they have been running, etc.

It’s possible the problem is related to a lot of traffic, or that the websites/applications are too heavy, which causes the RAM to be eaten up. You’ll be able to confirm this with the above commands. Additionally, you can use the ps aux command to see all running processes and decide which ones you need to either kill or investigate further to find out why they are taking so much memory.
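For example, a one-liner along these lines lists the biggest memory consumers (the --sort option is provided by procps ps on CentOS 7):

```shell
# Top ten processes by resident memory usage, highest first
# (first line of output is the column header)
ps aux --sort=-%mem | head -n 11
```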

Additionally, check whether you have configured any cron jobs that spawn processes and generate traffic. Cron jobs might be issuing MySQL queries, which tend to take a lot of memory if they run for more than 60 seconds.
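A quick way to audit that is to dump every user's crontab along with the system-wide cron locations (a sketch; listing other users' crontabs requires root):

```shell
# Per-user crontabs (users without one are silently skipped)
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null
done

# System-wide cron entries
cat /etc/crontab 2>/dev/null
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly 2>/dev/null
```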

It’s a good idea to enable the slow query log for MySQL to see if there are any slow queries. If there are, I’d recommend working on optimizing them.
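On CentOS 7 this can be enabled persistently in /etc/my.cnf (the log path and the 2-second threshold below are my assumptions; restart MySQL/MariaDB afterwards, and make sure the log file is writable by the mysql user):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time     = 2
```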

Lastly, I believe it’s best to run these checks as soon as you receive an e-mail about high load.

Regards, KDSys