When managing Linux filesystems, many admins focus solely on disk space and overlook inode consumption, until they hit the dreaded "No space left on device" error despite plenty of free space. That is exactly what happened here: df -i showed 80% inode usage while only 60% of the disk was used on an LVM-backed filesystem.
$ df -hi /data
Filesystem           Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg-data    1.2M  960K  240K   80% /data
$ df -h /data
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg-data  100G   60G   40G   60% /data
Inodes store metadata about files - permissions, ownership, timestamps, and pointers to data blocks. The total inode count is fixed at filesystem creation based on:
- Filesystem type (ext4, xfs, etc.)
- Block size
- Bytes-per-inode ratio
With millions of small files (common in log systems, email servers, or cache directories), you can exhaust inodes long before filling disk space.
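A quick back-of-the-envelope calculation shows how fast this happens (a sketch assuming ext4's default 16384 bytes-per-inode ratio; the sizes are illustrative):

```shell
# Inode count is roughly filesystem size / bytes-per-inode.
# 100 GiB at the ext4 default of 16384 bytes per inode:
fs_bytes=$((100 * 1024 * 1024 * 1024))
inodes=$((fs_bytes / 16384))
echo "$inodes"    # 6553600 (~6.5 million inodes)
# If the average file is only 4 KiB, inodes run out with the disk
# only this percent full:
echo $((inodes * 4096 * 100 / fs_bytes))    # 25
```

In other words, a filesystem full of 4 KiB files hits 100% inode usage at roughly 25% disk usage under the default ratio.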
A common misconception is that expanding an LVM volume will fix inode exhaustion. For ext4 (the most common Linux filesystem), growing the filesystem adds inodes only in proportion to the added space, at the original bytes-per-inode ratio, and that ratio cannot be changed after creation:
# This grows the logical volume and filesystem; inodes only
# scale with the added space at the original ratio
lvresize -L +20G /dev/mapper/vg-data
resize2fs /dev/mapper/vg-data
So if your files are smaller than the ratio assumes, inode usage will keep outpacing disk usage no matter how much you grow the volume.
Option 1: Recreate Filesystem with More Inodes
For ext4, set the bytes-per-inode ratio at mkfs time with -i (default 16384). Smaller values mean more inodes:
# Backup data first! This destroys the existing filesystem.
mkfs.ext4 -i 8192 /dev/mapper/vg-data
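One way to choose the ratio is to work backwards from the expected file count (the numbers below are hypothetical; verify against your own workload before running mkfs):

```shell
# Hypothetical: 100 GiB volume, expecting ~10 million files.
fs_bytes=$((100 * 1024 * 1024 * 1024))
expected_files=10000000
# Bytes-per-inode must be at most size / expected file count:
echo $((fs_bytes / expected_files))    # 10737 -> round down to 8192
# mkfs.ext4 -i 8192 then yields:
echo $((fs_bytes / 8192))              # 13107200 (~13.1M inodes)
```

Rounding down to the next power of two leaves headroom for growth in the file count.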
Option 2: Switch to XFS (Dynamic Inodes)
XFS dynamically allocates inodes, eliminating this issue:
umount /data
mkfs.xfs -f /dev/mapper/vg-data
mount /data
Option 3: Cleanup Strategy
Identify inode-heavy directories with:
# field 3 = top-level directory under /data
find /data -xdev -type f | cut -d "/" -f 3 | sort | uniq -c | sort -n
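On systems with GNU coreutils 8.22 or newer, du --inodes gives the same per-directory breakdown more directly (a sketch; /data is the example mount from above):

```shell
# Per-directory inode counts on one filesystem, largest last
du --inodes -x /data | sort -n | tail -20
```

Unlike the find pipeline, this counts directories as well as files and descends the whole tree, so subdirectory hot spots show up individually.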
Then implement automated cleanup for temp files/logs:
# Delete PHP session files older than 7 days
find /data/sessions -name "sess_*" -type f -mtime +7 -delete
For new deployments, consider these defaults:
# ext4 with more inodes
mkfs.ext4 -T small /dev/sdX1
# Or use XFS
mkfs.xfs /dev/sdX1
Monitor inodes proactively with Nagios/Icinga alerts on df -i output.
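A minimal standalone check along the same lines (the filesystem and threshold are illustrative defaults; Nagios-style plugins conventionally exit 0/1/2 for OK/WARNING/CRITICAL):

```shell
#!/bin/sh
# Warn when inode usage on a filesystem crosses a threshold.
FS=${1:-/}
THRESHOLD=${2:-80}
# Take IUse% from the second line of df -i and strip the % sign
usage=$(df -i "$FS" | awk 'NR==2 {sub("%", "", $5); print $5}')
if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $FS at ${usage}% inode usage"
    exit 1
fi
echo "OK: $FS at ${usage}% inode usage"
exit 0
```

Note that df can wrap long device-mapper names onto a second line, which would break the column-based awk; df -iP keeps each filesystem on one line if that matters in your environment.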
When managing Linux systems with numerous small files, you might encounter inode exhaustion even when disk space appears sufficient. This happens because:
- Each file consumes one inode regardless of size
- Filesystems allocate fixed inode counts at creation
- df -i shows your current inode usage vs. availability
First, verify your exact inode usage with:
$ df -i
Filesystem      Inodes   IUsed   IFree IUse% Mounted on
/dev/vda1      5242880 4194304 1048576   80% /
To find directories consuming excessive inodes:
cd /path/to/search && find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
1. Extending the Filesystem (When Possible)
For XFS filesystems (common on modern Linux):
# Grow the underlying LVM volume first
lvextend -L +20G /dev/vg00/lv_data
# Then resize the XFS filesystem
xfs_growfs /mount/point
Note: on ext4/ext3, resize2fs adds inodes only at the filesystem's original bytes-per-inode ratio, which is fixed at creation; growing the volume may not add inodes fast enough for small-file workloads.
2. Creating a New Filesystem with More Inodes
For ext4, specify inode count at creation:
mkfs.ext4 -N 2000000 /dev/sdb1
Or set the bytes-per-inode ratio (smaller number = more inodes):
mkfs.ext4 -i 8192 /dev/sdb1 # Default is 16384 bytes/inode
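The two flags express the same thing from different directions: -N fixes the inode count outright, while -i derives it from the filesystem size. A quick comparison (hypothetical 10 GiB partition; numbers are illustrative):

```shell
# Hypothetical 10 GiB partition:
part_bytes=$((10 * 1024 * 1024 * 1024))
echo $((part_bytes / 16384))      # 655360 inodes at the default ratio
# -N 2000000 forces 2M inodes, equivalent to a ratio of about:
echo $((part_bytes / 2000000))    # 5368 bytes per inode
```

So on this partition, -N 2000000 roughly triples the default inode count.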
3. Alternative Approaches
- Archive small files:
tar czf archive.tar.gz /path/to/small/files
- Use a database for small data instead of filesystem
- Distribute files across multiple filesystems
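The inode saving from archiving can be demonstrated directly (a sketch using a throwaway directory; the paths and file count are illustrative):

```shell
# 1000 small files = 1000 inodes; one archive = 1 inode.
dir=$(mktemp -d)
for i in $(seq 1 1000); do echo "data $i" > "$dir/file_$i"; done
echo "before: $(find "$dir" -type f | wc -l) file inodes"
tar czf "$dir.tar.gz" -C "$dir" .
rm -rf "$dir"
echo "after: $(find "$dir.tar.gz" -type f | wc -l) file inode"
rm -f "$dir.tar.gz"
```

This trades inode pressure for access convenience: reading one file back means extracting it from the archive first.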
For future systems, consider:
# Log inode usage every five minutes via cron
echo "*/5 * * * * root df -i >> /var/log/inode_usage.log" > /etc/cron.d/inode-check
When creating new filesystems for small-file workloads:
# For 1 million expected files (ext4 example)
mkfs.ext4 -N 1000000 -i 8192 /dev/sdX1