My Droplet's disk is full but I can't find what's taking up the space

Posted on April 17, 2026

I have a 25GB Droplet running Ubuntu 22.04 with Nginx and a small PHP app. I got an alert that disk usage is at 95%, but when I check my app files and uploads folder, everything looks normal and small.

I ran df -h and it confirms the disk is almost full, but I cannot figure out where the space went. Is there a way to find what is eating up all the space?

Thanks




Hi there,

This is a classic one. The culprit is almost always logs, old Docker layers, or large files in a directory that a basic check misses.

Run this as root (or with sudo, so unreadable directories aren't silently skipped) to find where the space actually went:

du -h / --max-depth=3 2>/dev/null | sort -rh | head -20

This will point you at the biggest directories fast.

Most common causes:

Logs filling up. Check /var/log first. Nginx, PHP, and system logs can grow huge if log rotation is not configured.

du -sh /var/log/*
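If a single log turns out to be the offender, truncate it in place rather than deleting it: the writing process keeps its file handle open, so deleting the file would not free the space anyway. A sketch on a throwaway file (in practice you would target a real log path, as root):

```shell
# Demonstration on a scratch file; the real target would be a path
# like an Nginx access log under /var/log, run as root.
log=$(mktemp)
echo "thousands of old entries" >> "$log"
truncate -s 0 "$log"   # empties the file in place; open handles stay valid
ls -l "$log"           # size is now 0
rm -f "$log"
```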

Journal logs. systemd keeps its own logs and they can quietly eat gigabytes:

journalctl --disk-usage
journalctl --vacuum-time=7d
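If the journal keeps growing back, you can cap it permanently in journald's config. A minimal example, assuming the default config location on Ubuntu (200M is just an illustrative value, pick what suits your Droplet):

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=200M
```

Then restart journald with sudo systemctl restart systemd-journald for it to take effect.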

Docker. If you are running any containers, old images, stopped containers, and unused volumes pile up fast (note that prune skips volumes unless you pass --volumes):

docker system df
docker system prune

Deleted files still held open by a process. This one catches people off guard. A process can hold a file handle open after deletion, and the space does not free up until the process restarts. Check with:

lsof +L1

Start with the du command and share what comes back if you are still stuck.

Heya, @a53db3f658c041b8a8e241eb7b1e7a

Classic one: it's almost always logs or deleted files still held open by a process.

Run this to find the big stuff:

du -sh /* 2>/dev/null | sort -rh | head -20

Then drill into whichever directory comes back large - usually /var/log is the culprit. Check du -sh /var/log/* and see if anything is huge.

Also check for deleted files still held open by running processes - this is sneaky because du won’t show them but df will:

lsof +L1 | grep deleted

If you see anything there, restarting the relevant service frees that space immediately without deleting anything.

Other common culprits on an Nginx/PHP setup - /tmp, old PHP session files, core dumps in /var/crash, or Docker if you have it installed (docker system df).
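A quick sweep of those spots, assuming the typical Ubuntu paths (the PHP session directory in particular varies with how PHP was installed):

```shell
# Sizes of the other usual suspects; missing paths are reported rather than erroring
for d in /tmp /var/lib/php/sessions /var/crash; do
  du -sh "$d" 2>/dev/null || echo "$d: not present"
done
```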

What does the du output show at the top level?

Regards
