Question

Hacking attempt fixed with Fail2Ban, but high memory usage remains

For the last few days Apache had been silently crashing on a regular basis, and we found it was likely due to a vigorous hacking attempt on the site, which we have addressed by implementing Fail2Ban.

However, we are unsure whether we've fixed the Apache crashing issue completely. Initially the droplet had 1.1GB of free memory; this is now down to 165MB (Apache had been crashing when free memory dipped to 70-84MB). We're concerned the site may crash again under these conditions.

In addition, there appear to be 7 Apache instances running simultaneously, which does not seem right.

Any guidance or suggestions would be very much appreciated.

Those are brute force attacks, and they're extremely common. Yes, you should use Fail2Ban or a similar solution, but if it still bothers you, you can take some extra precautions. SSH keys and changing the SSH port are good ones; the brute force attempts Fail2Ban had to block on my droplet each day dropped from over 50 to 0 after merely changing the port.
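As a rough sketch of that hardening (the port 2222 is only an example, and the jail may be named [ssh] rather than [sshd] on older Fail2Ban versions):

```
# /etc/ssh/sshd_config (excerpt) -- move SSH off the default port and
# allow key-based logins only
Port 2222
PermitRootLogin no
PasswordAuthentication no
```

```
# /etc/fail2ban/jail.local (excerpt) -- make the sshd jail watch the new port
[sshd]
enabled  = true
port     = 2222
maxretry = 5
bantime  = 3600
```

Restart SSH and Fail2Ban afterwards (e.g. sudo systemctl restart ssh fail2ban; the SSH unit may be called sshd on some distros), and make sure your firewall allows the new port before you close your current session.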

Regarding RAM: to prevent crashes during peak usage, implement a swap file if you haven't already. [Tutorial]
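The usual steps look roughly like this (a sketch; the 1GB size and the /swapfile path are just examples, pick what fits your droplet):

```
sudo fallocate -l 1G /swapfile    # allocate the file (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile          # restrict access to root
sudo mkswap /swapfile             # format it as swap space
sudo swapon /swapfile             # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it across reboots
```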

Linux is smart when it comes to RAM: it caches the most frequently accessed files, but it releases that memory as soon as something else needs it. I find it odd that your Apache crashes with 80MB free. I have a LAMP setup on a 512MB droplet and it hasn't crashed once, not even when stress testing the server and getting down to only 10MB of free RAM.
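You can check whether the "missing" memory is really just cache with:

```
# Look at the "available" column (or "-/+ buffers/cache" on older distros),
# not "free" -- cached file data is dropped automatically when programs need RAM
free -m
```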

Also, Apache does create a lot of processes. That's normal; it needs them to serve your site to visitors. Are you sure it isn't just the server being overloaded by a traffic spike?
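If you want to see whether the worker count and memory use are actually a problem, something along these lines helps (a sketch assuming Apache 2.4 with mpm_prefork on Ubuntu/Debian; the process name may be httpd on CentOS, and the limits below are examples to adapt to your RAM):

```
ps -C apache2 --no-headers | wc -l                 # how many Apache processes are running
ps -C apache2 -o pid,rss,cmd --sort=-rss | head    # roughly how much memory each uses (RSS, in KB)
```

```
# /etc/apache2/mods-available/mpm_prefork.conf (excerpt) -- cap the number of
# workers so Apache can't spawn more processes than your RAM can hold
<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers         5
    MaxSpareServers        10
    MaxRequestWorkers      50
    MaxConnectionsPerChild 1000
</IfModule>
```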

Unrelated: I now realise there are hundreds of Apache processes, so I guess that is not a problem.

The vigorous hacking attempt showed up in the access logs as continuous, frequent attempts to log in as root from several different IP addresses; the log had already expanded to 42GB.

What made you think there was a “vigorous hacking attempt”? What’s wrong with 7 Apache workers?