Hacking attempt fixed with Fail2Ban, but high memory usage remains

March 7, 2015 1.7k views
Apache Security Ubuntu

We have been having Apache silently crash on a regular basis for the last few days, and found that it was likely due to a vigorous hacking attempt on the site, which we have addressed by implementing Fail2Ban.
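
(For anyone who finds this later: Fail2Ban's SSH jail is the relevant piece here. The sketch below is illustrative rather than our exact configuration; on older Fail2Ban releases, e.g. the 0.8.x packaged with Ubuntu 14.04, the jail is named [ssh] instead of [sshd].)

    # /etc/fail2ban/jail.local -- illustrative values, not exact settings
    [ssh]                            # named [sshd] on newer Fail2Ban releases
    enabled  = true
    port     = ssh
    filter   = sshd
    logpath  = /var/log/auth.log
    maxretry = 5                     # ban an IP after 5 failed attempts
    bantime  = 3600                  # ban duration in seconds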

However, we are not sure whether we've fixed the Apache crashing issue completely. Initially the droplet had 1.1GB of free memory; this is now down to 165MB (Apache had been crashing when free memory dipped to 70-84MB). We're concerned the site may crash again under these conditions.

In addition, there appear to be 7 Apache processes running simultaneously, which does not seem right.

Any guidance or suggestions would be very much appreciated.

4 Answers

What made you think there was a "vigorous hacking attempt"? What's wrong with 7 apache workers?

Separately, I now realise there are hundreds of Apache processes, so I guess that in itself is not the problem.

The vigorous hacking attempt showed up in the logs as continuous, frequent attempts to log in as root from several different IP addresses; the log had already grown to 42GB.
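
For anyone who wants to quantify this sort of thing, something along these lines works against the SSH auth log (the path assumes Ubuntu/Debian, and the awk field position depends on the exact log format):

    # count failed root logins recorded by sshd
    grep -c "Failed password for root" /var/log/auth.log

    # which IPs are hammering the server the hardest
    grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head

    # a 42GB log is also a sign that logrotate may not be keeping up
    ls -lh /var/log/auth.log*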

Those are brute-force attacks and they're extremely common. Yes, you should use Fail2Ban or a similar solution, but if it still bothers you, you can take some extra precautions. SSH keys and changing the SSH port are good ones; the brute-force attempts Fail2Ban had to block on my droplet dropped from over 50 a day to 0 after merely changing the port.
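
The port change and key-only logins come down to a few lines in sshd_config; the port number below is only an example, and make sure key-based login already works before disabling passwords:

    # /etc/ssh/sshd_config -- port number is an example; allow it through your firewall first
    Port 2222
    PermitRootLogin no               # or "without-password" to allow root with keys only
    PasswordAuthentication no        # keys only

Then reload SSH (sudo service ssh restart on Ubuntu) and keep your current session open until you've confirmed you can log in on the new port.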

Regarding RAM: to prevent crashes during peak usage, add a swap file if you haven't already. [Tutorial]
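
Adding a swap file boils down to something like this (1GB is just an example size; the linked tutorial has the details):

    # create and enable a 1GB swap file
    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # make it persistent across reboots
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    # verify
    sudo swapon -s
    free -m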

Linux is smart when it comes to RAM: it caches the most-accessed files, but it will release that memory as soon as it's needed elsewhere. I find it odd that your Apache crashes with 80MB free. I have a LAMP setup on a 512MB droplet and it hasn't crashed once, not even when stress testing the server and being left with only 10MB of free RAM.
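
You can see the difference yourself (on Ubuntu 14.04 free prints a "-/+ buffers/cache" row; newer versions show an "available" column instead):

    free -m                           # the "-/+ buffers/cache" row (or "available" column) is
                                      # the memory that can actually be reclaimed for Apache
    ps aux --sort=-%mem | head -15    # the biggest memory consumers, typically apache2 and mysqld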

Also, Apache does create a lot of processes. That's normal; it needs them to serve your site to visitors. Are you sure it isn't just the server being overloaded by a traffic spike?
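
If you want to see how many workers Apache is actually running and what caps the count, something like this works (the config path assumes Apache 2.4 on Ubuntu; the values in the comments are illustrative, not recommendations):

    # how many Apache processes are running, and which MPM is in use
    ps aux | grep -c '[a]pache2'
    apache2ctl -V | grep -i mpm

    # for the prefork MPM the worker count is capped by these directives in
    # /etc/apache2/mods-available/mpm_prefork.conf (illustrative values):
    #   StartServers          5
    #   MinSpareServers       5
    #   MaxSpareServers      10
    #   MaxRequestWorkers   150      # called MaxClients on Apache 2.2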

  • It seems like Fail2Ban has stopped the attack, which is great news.

    I've followed the instructions provided to add the swap file - that sounds like an excellent idea. However, the swap file isn't currently being used; I will wait for the RAM to drop and see whether the swap file starts being used.

    I will take a look at the site metrics and see if we are seeing a traffic spike.


  • Swap only starts to be used when memory pressure builds up; how eagerly the kernel swaps is controlled by vm.swappiness, which defaults to 60. Since this is a VPS, you can safely lower it (the tutorial suggests 10) so that swap is only touched during emergencies, to avoid crashes, because even SSDs are slower than RAM. It can be changed with sysctl, as in the sketch below.
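
    A quick way to check and change it (10 is just the commonly suggested value for SSD-backed VPSes):

        cat /proc/sys/vm/swappiness                              # current value, 60 by default
        sudo sysctl vm.swappiness=10                             # swap only under real memory pressure
        echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf   # persist across reboots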
