Until now I thought that large discrepancies between the total size of a filesystem and the free space reported by the operating system were due to the reserved blocks. As it turns out, there's another space hog: the inode table.
I recently created a 560 GB ext4 partition with the default settings, i.e. with a plain
mkfs.ext4 /dev/mapper/vg01_mypartition
I knew that it would reserve 5% of the disk space for root-only writes, but I didn't care at the time.
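As an aside, the reserved block count of an existing filesystem can be checked at any time, for example with
tune2fs -l /dev/mapper/vg01_mypartition | grep -i 'reserved block count'
which shows the "Reserved block count" field among the other filesystem parameters.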
Some time later I decided to zero out the reserved block count, because I had no use for it. I unmounted the partition, executed
tune2fs -r 0 /dev/mapper/vg01_mypartition
and checked the result with
dumpe2fs -h /dev/mapper/vg01_mypartition
To my surprise, it still reported a 9 GB discrepancy between "Block count" and "Free blocks".
After a bit of googling I arrived at this askubuntu.com question, where psusi explained the deal with the inode table: the entire table is allocated up front when the filesystem is created, whether the inodes are ever used or not. I checked the inode usage on our file server (as a reference for a filesystem with millions of files of various sizes) and found that only 4% of the allocated inode table was in use.
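The numbers add up nicely, assuming the stock mke2fs defaults (inode_ratio = 16384 and inode_size = 256 in /etc/mke2fs.conf on most distributions, though these can vary): one 256-byte inode is created for roughly every 16 KiB of disk space, so a 560 GiB filesystem gets about 560 GiB / 16 KiB ≈ 36.7 million inodes, and 36.7 million × 256 bytes ≈ 8.75 GiB, which is pretty much the ~9 GB that went missing. The values actually used on a given filesystem show up in the dumpe2fs -h output as "Inode count" and "Inode size".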
Since I created this 560 GB partition for virtual machine disk images (files several gigabytes in size each), the default inode table was complete overkill. I checked another virtual machine server and its VM partition's inode table, and only 0.0001983642578125% of it was in use.
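For a quick check of inode usage on mounted filesystems, df can report it directly:
df -i
The IUse% column is the fraction of the inode table actually in use; for an unmounted filesystem the same information is available from the "Inode count" and "Free inodes" fields of dumpe2fs -h.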
I reformatted the 560 GB partition with 100000 inodes (which will be more than enough for the entire life cycle of the server) using
mkfs.ext4 -m 0 -N 100000 /dev/mapper/vg01_mypartition
and the previously "vanished" 9 GB of disk space suddenly reappeared.
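The new layout can be verified with the same dumpe2fs -h command as before, for example
dumpe2fs -h /dev/mapper/vg01_mypartition | grep -Ei 'inode count|free blocks'
Note that mke2fs may round the number given with -N up somewhat, since inodes are distributed evenly across the block groups, but the resulting table is still negligibly small compared to the default one.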