No Space Left on Device and MySQL crashes

Posted January 15, 2015 38.7k views

I’ve had periodic problems with MySQL crashing, and I can’t figure out why. I’ll restart the service and it works fine for a few days.

Now there’s a more immediate problem.

I was copying the logs and I started getting errors that said “No space left on device.”

Same error happens when I attempt to create a directory over FTP: “No space left on device.” So I tried to log back in over FTP and I get this bit of luck:
Status: Connection established, waiting for welcome message…
Response: 220 ProFTPD 1.3.4a Server (Zpanel FTP Server) [::ffff:]
Command: AUTH TLS
Response: 500 AUTH not understood
Command: AUTH SSL
Response: 500 AUTH not understood
Status: Insecure server, it does not support FTP over TLS.
Command: USER thelemur
Response: 331 Password required for xxxxxxxx
Command: PASS **********
Error: Could not connect to server

I restarted the droplet. I can use SSH to connect, but I still get the same No Space errors and ftp problems.

And now my sites look like MySQL has crashed again. When I try to restart the service, I get “Job failed to start.”

If it matters, my droplet is Ubuntu 12.04, and I’m using Apache and ZPanel.

Anyone know what’s going on?



5 answers

Run this:

du -hs /*

It will tell you which directories on / are using your space. If, for example, you get something like “50G /var”, then run du -hs /var/* and check what inside /var is using the space, and so on, again and again, until you find it.

If you are using MySQL with InnoDB, it may be your binary logs that are filling your disk, since a new one is created each time it crashes; it can also be ordinary logs, e.g. /var/log/…

Run du -hs /* and follow the procedure as stated above, and you will find out where, and maybe why, you are running out of space. Feel free to paste the results here and ask for more help.
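The drill-down described above can be sketched as follows. Note that /var here is only an assumption for illustration; substitute whatever directory du actually reports as the largest consumer on your system:

```shell
# Size of every top-level directory, sorted so the biggest is last.
# 2>/dev/null hides "Permission denied" noise.
du -hs /* 2>/dev/null | sort -h

# Suppose /var turned out to be the biggest (an assumption); drill one
# level deeper and repeat until you find the culprit:
du -hs /var/* 2>/dev/null | sort -h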

What do you think “no space left on device” means?

  • Ok, I’m going to pretend you weren’t just an ass.

    I know what “no space left on device” means. There was space. Nothing was added. THINGS WERE REMOVED and it’s still reporting no space.

    NOW do you have any answers that aren’t smart ass?

  • What’s the output of df -h and df -i?

  • ```
    df -h
    Filesystem  Size  Used Avail Use% Mounted on
    /dev/vda     30G   30G     0 100% /
    udev        494M   12K  494M   1% /dev
    tmpfs       100M  244K  100M   1% /run
    none        5.0M     0  5.0M   0% /run/lock
    none        498M     0  498M   0% /run/shm
    overflow    1.0M   32K  992K   4% /tmp

    df -i
    Filesystem   Inodes  IUsed   IFree IUse% Mounted on
    /dev/vda    1966080 188725 1777355   10% /
    udev         126393    389  126004    1% /dev
    tmpfs        127381    313  127068    1% /run
    none         127381      4  127377    1% /run/lock
    none         127381      1  127380    1% /run/shm
    overflow     127381      6  127375    1% /tmp
    ```

    I did check this. df -h reports it’s full. But as I said, we were able to restart and connect over FTP as recently as 10 minutes before the lockout happened. Then it started throwing all this. I deleted files after that; even if that’s all I took out, it should have at least 2% free.
  • df -h doesn’t lie, you have no space left.

  • Then what is filling it up? Because it is a fact that we did not fill up that space.

  • Use du --max-depth=1 -m . in / and then do it again in specific directories until you find it.

  • @dessareilly - Your opening post stated the following: “I was copying the logs and I started getting errors that said ‘No space left on device.’” Where were you copying the logs to, and how large were those logs?

  • I was copying logs to an Internet accessible location so a friend could help me.
    It appeared (from my connection over FTP and browsing through the SSH shell) that no files had copied.
    Turns out a rather large one was. Multiple times.
    I used du -ax / | awk '{if($1 > 10240) print $1/1024 "MB" " " $2 }' | sort -n | tail -n 15 and discovered this. Files are deleted and things are working now. Thanks for the help Woet and gndo.
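For reference, the one-liner above expands to the following (same technique, just formatted with comments; the 10240 threshold is in 1 KB blocks, i.e. files over 10 MB):

```shell
# Find the 15 largest files/directories under ROOT, sizes printed in MB.
# ROOT=/ reproduces the command from the thread; 2>/dev/null is an added
# nicety to hide "Permission denied" noise.
ROOT=${ROOT:-/}

du -ax "$ROOT" 2>/dev/null \
  | awk '$1 > 10240 { printf "%.0fMB %s\n", $1/1024, $2 }' \
  | sort -n \
  | tail -n 15
```

du -a lists every file (not just directories) and -x keeps the scan on one filesystem, so mounted pseudo-filesystems like /dev and /run are skipped.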

Thanks, Dane and the DigitalOcean team. Your suggestions were pertinent. I was in a state of panic when nothing worked at 100% disk utilization, and not even the command to list the 20 biggest files would run. Removing the useless tmp files and restarting my droplet brought me from 100% -> 80% (after temp file removal) -> 13% (after droplet restart). I am still working on the root cause, but at least I’m not in a state of panic anymore :) Thanks! :)

Hi, I don’t think this is necessarily a normal disk-space issue. When I searched for this error, what had happened on my server was that MariaDB strangely wrote a big file to /tmp, and /tmp on that server is only 4G. I had this issue on a Plesk server. Why would MariaDB write a temp file that big? The server has been running for many years, and this is the first time it has stopped because there was no free space on the /tmp partition.
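If MariaDB’s temporary files are what keep filling /tmp, the location it writes them to is controlled by the tmpdir system variable (check the current value with SHOW VARIABLES LIKE 'tmpdir';). It can be pointed at a partition with more room; the path below is only an assumption for illustration:

```
# /etc/mysql/my.cnf -- example fragment; the path is an assumption
[mysqld]
tmpdir = /var/lib/mysql-tmp
```

Create the directory, make it writable by the mysql user, and restart the server for the change to take effect.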