I added a 100 GB volume to one of my droplets. One of the things I need to do with this volume is write millions of files to it. I chose the option to have DO format and mount the volume for me, but I quickly noticed that the default number of inodes DO creates is very low for a volume of this size.
(output from df -i)

Filesystem      Inodes IUsed   IFree IUse% Mounted on
/dev/sda       3276800    11 3276789    1% /mnt/volume_nyc1_01

I ended up reformatting the volume with a higher inode count:

Filesystem       Inodes IUsed    IFree IUse% Mounted on
/dev/sda       26214400    11 26214389    1% /mnt/volume_nyc1_01

(Command I used: mkfs.ext4 -N 26214400 /dev/sda. Note that ext4 does not allow changing the inode count on an existing filesystem, so this recreates the filesystem and wipes any data already on the volume.)
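In case it helps others, here's a rough sketch of the full sequence, assuming the volume is attached as /dev/sda and mounted at /mnt/volume_nyc1_01 as above (move any data off first, since the reformat destroys it):

umount /mnt/volume_nyc1_01
mkfs.ext4 -N 26214400 /dev/sda        # recreate the filesystem with a higher inode count
mount /dev/sda /mnt/volume_nyc1_01
tune2fs -l /dev/sda | grep -i inode   # verify the new inode count and inode size
df -i /mnt/volume_nyc1_01             # confirm the mounted filesystem reflects it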

While this isn’t a problem per se, it would be nice if DO handled this for users automatically. Is there a reason why DO chose a relatively small default inode count for such a large volume?

1 answer

As you have correctly pointed out, you can change the number of inodes yourself. Your workload is unusual in that most users aren’t storing millions of files, and inodes themselves take up space: 256 bytes per inode on ext4 by default. That sounds negligible, but if the default inode count were 10x higher and every customer created a volume, a significant amount of disk space would be consumed by inode tables alone, leaving less usable storage for everyone. So it’s easier to ship a default inode count that works for the majority of use cases and let customers with unusual needs adjust it themselves.
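To put rough numbers on that, here's a back-of-the-envelope calculation assuming the ext4 default inode size of 256 bytes, using the two inode counts from your df output:

echo $((  3276800 * 256 ))   # 838860800 bytes, ~0.8 GiB at the default count
echo $(( 26214400 * 256 ))   # 6710886400 bytes, ~6.3 GiB at your increased count

So the higher count reserves about 7% of a 100 GB volume for inode tables before a single file is written, which is why the default leans conservative.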