My Droplet stopped responding and Nginx started returning errors, coinciding with a massive spike in the 1/5/15-minute load average graph. I'm trying to find out what caused it.
It's a basic Droplet with 8 GB memory, 4 vCPUs, and a 160 GB SSD, running WordPress and Next.js/React - nothing crazy. It averages 1,000-1,500 visitors per day.
All other metrics during that window stayed stable (CPU below 5%, memory below 20%, and Disk I/O, disk usage, and bandwidth all normal). I checked both Google Analytics and the Nginx access logs, and there was no real traffic spike or any unusual requests.
Is there any way for me to trace what happened? What are the usual causes?
In addition to what has already been mentioned, I'd suggest checking out the following answer on how to find the processes that are consuming most of your resources:
In addition to that, I would also recommend the following script, which you could use to summarize your access logs and find any potential suspicious requests:
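The script itself isn't reproduced here, but a minimal sketch of what such an access-log summary might look like is below - the log path and field positions assume the default Nginx `combined` log format, so adjust them for your setup:

```shell
#!/bin/sh
# Summarize an Nginx access log: top client IPs, top URLs, and requests
# per minute (to spot bursts). ACCESS_LOG path is an assumption.
ACCESS_LOG="${ACCESS_LOG:-/var/log/nginx/access.log}"

if [ ! -r "$ACCESS_LOG" ]; then
  echo "Cannot read $ACCESS_LOG" >&2
else
  echo "== Top 10 client IPs =="
  awk '{print $1}' "$ACCESS_LOG" | sort | uniq -c | sort -rn | head -n 10

  echo "== Top 10 requested URLs =="
  awk '{print $7}' "$ACCESS_LOG" | sort | uniq -c | sort -rn | head -n 10

  echo "== Busiest minutes =="
  # Field 4 is "[day/month/year:hh:mm:ss"; chars 2-18 give day + hh:mm.
  awk '{print substr($4, 2, 17)}' "$ACCESS_LOG" | sort | uniq -c | sort -rn | head -n 10
fi
```

A single IP dominating the top of the first list, or one minute with far more requests than the rest, is often the smoking gun even when daily totals look normal.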
Let me know how it goes!
Well, you can check the logs in the /var/log folder for any abnormal behavior. It's possible a process got stuck and generated the load. Apart from the logs, there's not much you can do unless you catch the high load while it's happening.
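A few starting points for digging through /var/log - file names assume a Debian/Ubuntu Droplet, and the timestamps in the journalctl example are placeholders you'd replace with your incident window:

```shell
#!/bin/sh
# Did the kernel's OOM killer fire? (kern.log path assumes Debian/Ubuntu)
grep -i 'out of memory\|oom-killer' /var/log/kern.log 2>/dev/null | tail -n 5

# Any errors or segfaults logged around the incident?
grep -iE 'error|segfault' /var/log/syslog 2>/dev/null | tail -n 20

# On systemd machines, filter the journal to the time of the spike
# (the dates below are placeholders, not the actual incident time):
command -v journalctl >/dev/null &&
  journalctl --since "2024-01-05 10:00" --until "2024-01-05 11:00" -p err --no-pager
```

An OOM-killer entry in particular would explain a dead site even though the memory *graph* looked fine - the sampling interval can miss a short-lived spike.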
If you do catch it while it's happening, use tools like top or htop to see which processes are consuming the most CPU/RAM, and try to work out what spawned them (if not traffic).
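One detail worth knowing here: on Linux the load average counts not just runnable processes but also those in uninterruptible sleep (state D, usually waiting on disk or network I/O). That's how load can spike while CPU stays under 5%. If you catch it live, a quick sketch:

```shell
#!/bin/sh
# Current 1/5/15-minute load averages:
uptime

# Processes currently running (R) or stuck in uninterruptible sleep (D).
# A pile-up of D-state processes with low CPU points at stuck I/O:
ps -eo state,pid,user,comm | awk 'NR == 1 || $1 ~ /^[DR]/'

# Per-CPU utilization including %iowait, 3 samples 1s apart
# (mpstat ships with the sysstat package, which may not be installed):
command -v mpstat >/dev/null && mpstat 1 3
```

High %iowait with a cluster of D-state processes would point at the disk rather than at traffic, which would fit the pattern you're describing.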