By paandittya
I have been hosting my web API server on DigitalOcean for over 2 years now. In the last 5-6 months I have noticed that my droplet dashboard shows a persistent, periodic (generally every hour) CPU utilization spike, as shown in the linked image. At first I thought that maybe I had accidentally pushed some orphan process to the API via my git pull, and that it was running periodically, so I started to investigate my code. However, I found nothing wrong with my application.

My suspicion grew, so I started checking my authentication logs. There I saw a huge number of malicious SSH auth attempts and subsequent failure logs. Although I have UFW and Fail2Ban in place for protection and I am using passwordless SSH keys, these spam events still cause spikes in my server's CPU usage, disk I/O, and network in/out. I am wondering if this is normal (which I do not think it is, because a few months ago it was not this frequent). I have also noticed a dramatic upsurge in my web server's 4xx logs.

Is anyone else experiencing such events? If yes, what countermeasures have you taken? If not, can anyone please guide me on how such events are mitigated?
What you’re seeing is pretty common — bots regularly scan and brute-force exposed servers. Since you already have SSH keys, UFW, and Fail2Ban in place, you’re well-protected, but the repeated attempts can still cause CPU and I/O spikes.
A few things that can help:
Change the default SSH port and restrict access with AllowUsers/AllowGroups.
Use DigitalOcean Cloud Firewalls to block bad traffic before it hits your droplet.
Consider adding Cloudflare or another WAF in front of your API to cut down the 4xx bot traffic.
It’s mostly background noise on the internet, but tightening firewall rules and adding a proxy layer should reduce the spikes.
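For the SSH-hardening suggestions above, a minimal `sshd_config` sketch might look like this (the port number `2222` and the username `deploy` are placeholders, not recommendations — substitute your own values):

```
# /etc/ssh/sshd_config — illustrative hardening snippet
Port 2222                   # move off the default port 22
PasswordAuthentication no   # keys only, which you already use
PermitRootLogin no
AllowUsers deploy           # restrict logins to named accounts
```

Remember to allow the new port through UFW (`sudo ufw allow 2222/tcp`) before restarting sshd, otherwise you can lock yourself out of the droplet.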
Hi there,
Spikes like that can come from a few places. Bots hammering SSH/web is super common, but it could just as easily be cron jobs, log rotation, or traffic bursts.
A few quick checks you can run on the server:
```shell
# See running processes sorted by CPU
top

# Check cron jobs
crontab -l
ls -R /etc/cron.*

# Look at auth logs for SSH attempts
tail -f /var/log/auth.log

# Check syslog for the current hour (swap in a spike timestamp as needed)
grep "$(date '+%b %e %H')" /var/log/syslog
```
If it looks like mostly SSH scans, you can tighten things up with the DO Cloud Firewall, move SSH off port 22, and keep Fail2Ban tuned. If it lines up with cron/log rotation, then it’s just normal system activity.
https://docs.digitalocean.com/products/networking/firewalls/
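If you go the Fail2Ban-tuning route, a `jail.local` override along these lines is a reasonable starting point (the retry/ban numbers below are illustrative defaults to tweak, not a prescription):

```
# /etc/fail2ban/jail.local — illustrative sshd jail tuning
[sshd]
enabled  = true
maxretry = 3     # ban after 3 failures...
findtime = 10m   # ...within a 10-minute window
bantime  = 1d    # keep the ban for a day
```

After reloading Fail2Ban, `sudo fail2ban-client status sshd` will confirm the jail is active and show the current ban list.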