Hello DigitalOcean Support Team,
I am experiencing intermittent issues with one of my droplets hosting a WordPress website, where the system enters an unresponsive state and requires a manual reboot in order to function normally again.
Context:
- The Droplet is used solely to host a WordPress website.
- From time to time, I observe sudden spikes in CPU (100%) and memory usage.
- After these spikes, the system becomes unresponsive and the site is inaccessible.
- A reboot always restores normal operation.
I have already increased the droplet resources once, but I do not believe this is a capacity issue. Under normal conditions, the droplet rarely (if ever) exceeds 50% of the allocated CPU and memory.
Request for assistance: I would appreciate your help with:
- Checking whether there are infrastructure-level issues (host node, disk I/O, network, etc.)
- Advising how to better diagnose the root cause of these spikes and freezes
- Recommending monitoring or configuration changes to prevent the system from reaching this unresponsive state
If needed, I can provide:
- Droplet ID (I have only one)
- Approximate timestamps of the incidents (the latest was just the other day)
- Monitoring screenshots or logs
Thank you in advance for your assistance.
Best regards, Aleksandar Petrov
Hi there,
This usually ends up being something inside the Droplet rather than a platform issue, especially if a reboot always brings things back to normal.
In my experience, sudden 100% CPU or memory spikes on WordPress are often caused by traffic bursts, bots, or a plugin/theme doing something expensive.
Even if average usage looks low, short spikes can still push the system into a bad state. I’d start by checking logs around the time it happens, especially system logs and your web server / PHP-FPM logs.
If the Droplet is completely unresponsive and you can’t SSH in, the Recovery Console is really handy. You can log in as root and inspect logs, disk usage, or services even when the Droplet looks “dead” from the outside. https://docs.digitalocean.com/products/droplets/how-to/recovery/recovery-console/
One common thing I’ve seen is missing or very small swap. Without swap, a short memory spike can cause the system to lock up or start killing processes. Adding a bit of swap often helps stabilize things. https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-20-04
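To see where you stand, a quick check like this confirms whether any swap is currently configured; the creation steps are left as comments since they need root, and the 1G size is just a common starting point for small Droplets (the linked tutorial covers sizing and persistence):

```shell
# Check existing swap; empty output from swapon means none is configured.
swapon --show
free -h

# Sketch of adding a 1G swap file (run as root; size is a judgment call,
# and the tutorial above covers making it persistent via /etc/fstab):
# fallocate -l 1G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
```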
I’d also enable DigitalOcean Monitoring and set a couple of alerts. That way you can correlate CPU, memory, and disk I/O spikes with exact timestamps in your logs. https://docs.digitalocean.com/products/monitoring/
On the WordPress side, caching makes a big difference. Page caching and limiting heavy admin-ajax usage usually reduce sudden load. If traffic or bots are involved, putting something like Cloudflare in front of the site can help absorb spikes.
Hi Aleksandar,
What you’re describing almost always turns out to be something happening inside the droplet rather than an infrastructure issue. If a reboot immediately restores normal operation, that’s a strong sign the problem is a runaway process, memory exhaustion, or something at the application level.
On WordPress droplets, the usual causes are a plugin going wild (backup, security, search, etc.), overlapping wp-cron jobs, aggressive bot traffic hitting login or XML-RPC, or MySQL consuming too much memory/CPU. When memory gets exhausted, the kernel can start killing processes (OOM), and the system may appear frozen.
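If you want to check the bot angle, a rough sketch like this summarizes an access log; the `LOG` path is an assumption (the Nginx default) and should be adjusted for your setup:

```shell
# LOG is an assumed path; use /var/log/apache2/access.log on Apache.
LOG=${LOG:-/var/log/nginx/access.log}

if [ -f "$LOG" ]; then
  # Hits on login/XML-RPC endpoints, a frequent bot target:
  echo "login/xmlrpc hits: $(grep -cE 'wp-login\.php|xmlrpc\.php' "$LOG")"
  # Top 10 client IPs by request count:
  awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | head
else
  echo "no access log at $LOG"
fi
```

If one or two IPs dominate the count right before a spike, that points strongly at bot traffic rather than a platform problem.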
The most helpful thing you can do is capture data before rebooting next time. If SSH is still responsive, run top or htop and see what process is at 100%. Also check for out-of-memory events with:
```shell
dmesg -T | grep -i oom
```

or:

```shell
grep -i kill /var/log/syslog   # /var/log/messages on RHEL-family systems
```
If you see “Killed process …” messages, that confirms memory exhaustion.
It’s also worth checking logs around the time of the spike:
- /var/log/syslog
- /var/log/nginx/error.log or /var/log/apache2/error.log
- MySQL logs, if enabled (often /var/log/mysql/error.log on Ubuntu)
If you’re using default WordPress cron, consider disabling it in wp-config.php and moving it to a real system cron job. Overlapping cron runs are a common source of CPU spikes.
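As a sketch: after adding `define('DISABLE_WP_CRON', true);` to wp-config.php, a system cron entry along these lines (added via `crontab -e`; example.com is a placeholder for your domain) runs the scheduler every five minutes instead of on every page load:

```
# Placeholder domain -- replace example.com with your site.
*/5 * * * * curl -s "https://example.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1
```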
From the platform side, actual host node issues are rare and usually show up as disk I/O errors or kernel messages in dmesg/journalctl. If you’re not seeing hardware or filesystem errors, it’s very unlikely to be the underlying infrastructure.
If you can share the droplet size, whether swap is enabled, and what process shows high CPU during a spike, that would make it much easier to narrow this down further.
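To make that easy to collect, a quick snapshot like this (standard tools, nothing Droplet-specific) covers most of what is useful in a reply or support ticket:

```shell
# One-shot health snapshot to paste into a reply:
uname -r                          # kernel version
free -h                           # memory and swap
df -h /                           # root filesystem usage
uptime                            # load averages
ps aux --sort=-%cpu | head -n 6   # current top CPU consumers
```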
Hope this helps!