Hi, for a couple of weeks now I've had an unstable WordPress site on a DigitalOcean droplet. The actual WP backup file is only about 150MB. After investigating disk usage (df -h), it turns out that ‘/dev/vda1’ has used 20G (of 25G) and ‘/proc/kcore’ is even 128T…
Is there a way to ‘shrink’ these files or delete old logs?
Thanks.
Hi @olivervogt,
Regarding ‘/proc/kcore’: you can safely ignore that one. It's a virtual file that maps the system's memory, not a real file on disk, so it isn't actually holding 128T of space; the droplet doesn't have that much storage to begin with.
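You can confirm this with a quick check (du counts actual allocated blocks, which for /proc entries is zero):

```
# kcore's huge apparent size is just the kernel's address space mapping, not data on disk
ls -lh /proc/kcore
# du shows it occupies no real disk blocks
du -sh /proc/kcore
```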
As for the space issue, I'd recommend running something along the lines of the following:
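```
# Example: list the 5 biggest directories under / (sizes in KB, largest first);
# 2>/dev/null hides permission-denied noise
du / 2>/dev/null | sort -n -r | head -n 5
```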
This command will find the 5 biggest directories in your /
To display the largest folders/files including the sub-directories, run:
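```
# Example: largest folders anywhere under /, with sub-directories listed as separate
# entries; -S keeps child totals out of each parent, -h prints human-readable sizes,
# and sort -rh orders them largest first
du -Sh / 2>/dev/null | sort -rh | head -n 5
```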
If you want to display the biggest file sizes only, then run the following command:
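```
# Example: the 5 biggest regular files only (directories excluded)
find / -type f -exec du -Sh {} + 2>/dev/null | sort -rh | head -n 5
```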
I'd recommend using these commands to find out where the biggest files are located. If they turn out to be logs, truncate them; otherwise, deal with them accordingly.
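For example, to empty an oversized log without deleting the file (the paths below are just placeholders; use whatever the commands above actually point you to):

```
# Truncate a large log file to zero bytes while leaving it in place for the service
truncate -s 0 /var/log/nginx/access.log
# If the systemd journal is the culprit, cap it to a fixed size instead
journalctl --vacuum-size=200M
```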
It's also possible your website has simply grown and you need to upgrade your droplet's disk space. Having said that, you'll be able to decide that once you know where the space is actually going.
Regards, KDSys