I have a droplet with 4 GB of memory running MySQL, Apache, and PHP.
I don’t understand why MySQL does not release memory and keeps taking more and more until it reaches the full 4 GB and the server crashes. I noticed that when the server crashed, the disk was being read very fast (~150 MB/s).
I have already tried setting innodb_buffer_pool_size to 64M, but it does not seem to help.
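For reference, the change looks roughly like this in my.cnf (the exact file path varies by distribution):

    [mysqld]
    innodb_buffer_pool_size = 64M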
Finally, I have added 4 GB of swap space to avoid crashes for the moment.
@florianbrgnr,
The output you attached doesn’t raise any flags for me, so the next thing I would recommend is checking the SQL load on the system. One thing that could be happening is that expensive queries are using memory to complete their tasks; that memory is not reflected as returned to the system, but it is still available for processes to use should it be required. The problem with the OOM killer is that the (expensive) SQL queries will keep arriving and push the server into swap, or the OOM killer will kill the process using the most memory, which will almost always be MySQL.
There’s another utility you can use to analyse the MySQL Slow Log file that will give you some insight into your expensive queries. My advice is to spend some time tuning the worst offenders so you can reduce the memory needed to handle them.
The result will contain a summary of queries ranked by load, which you can then analyse query by query with the EXPLAIN command. For example:
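One such utility is pt-query-digest from Percona Toolkit. Assuming the slow query log is enabled and written to /var/log/mysql/mysql-slow.log (adjust the path and thresholds for your setup), a run would look roughly like this:

    # Enable the slow query log first (in my.cnf, then restart MySQL):
    #   slow_query_log      = 1
    #   slow_query_log_file = /var/log/mysql/mysql-slow.log
    #   long_query_time     = 1

    # Summarise the slow log; queries come out ranked by total load:
    pt-query-digest /var/log/mysql/mysql-slow.log > slow-digest.txt

    # Then look at the plan for each of the worst offenders
    # (the query below is only a placeholder):
    mysql -e "EXPLAIN SELECT * FROM some_table WHERE some_column = 'value'\G"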
Good luck!
Hello @florianbrgnr,
I’d like to offer some further advice here if I may. It’s hard to diagnose things exactly based on the detail provided, but I hope I can add some value here.
MySQL thrives on caching data in memory, but there are also some per-thread buffers that can cause issues if your SQL is not optimal. Because your memory is shared between two critical processes (MySQL and Apache/PHP), the OOM killer can be triggered even by normal activity on the host.
You can get a better picture of what’s going on with the server by using a utility to summarise resource utilisation.
Percona’s pt-summary and pt-mysql-summary are a couple of programs you can use to check what’s going on. Are your crashes predictable? Are there any imports or long-running jobs ongoing when your instance is killed by the OS? If you can, capture the output of SHOW PROCESSLIST on the instance while the symptoms are at their worst and post it here. If you run the two utilities (they are open-source Perl scripts, so feel free to review them before running them on your server) and post the results here, I’ll check them out.
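If it helps, this is roughly how you would install and run them on a Debian/Ubuntu droplet (percona-toolkit may also be available from Percona’s own repository; the credentials below are placeholders):

    # Percona Toolkit packages pt-summary, pt-mysql-summary and pt-query-digest
    sudo apt-get install percona-toolkit

    # Host-level summary (CPU, memory, disks, swap):
    pt-summary > pt-summary.txt

    # MySQL-level summary; options after "--" are passed through to mysql:
    pt-mysql-summary -- --user=root --password=YOUR_PASSWORD > pt-mysql-summary.txt

    # Capture the process list while the problem is happening:
    mysql -e "SHOW FULL PROCESSLIST\G" > processlist.txt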
BR
Andmoo
Hi @florianbrgnr,
It’s possible that your thread cache or query cache size is actually causing the problems.
Can you try adding the following to your my.cnf file?
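The exact values depend on your workload, but as an illustration of the kind of limits I mean on a 4 GB droplet shared with Apache/PHP, something along these lines (the numbers are only a starting point, not a prescription; note the query cache was removed entirely in MySQL 8.0):

    [mysqld]
    # Keep the per-connection footprint small
    max_connections     = 100
    thread_cache_size   = 8
    sort_buffer_size    = 2M
    join_buffer_size    = 2M
    tmp_table_size      = 32M
    max_heap_table_size = 32M

    # Query cache (MySQL 5.x only; keep it small or disabled)
    query_cache_type    = 1
    query_cache_size    = 16M
    query_cache_limit   = 1M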
Once these values have been configured, go ahead and restart MySQL.
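For example, on a systemd-based droplet:

    sudo systemctl restart mysql
    # on older init systems: sudo service mysql restart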
One last thing, which MySQL version are you using?
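You can check with either of these:

    mysql --version
    mysql -e "SELECT VERSION();"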
Kind regards, KDSys