Understanding Ext4 Filesystem Inode Limits: Why 6 Million Files Caused Storage Issues


When working with ext4 filesystems, many developers encounter a puzzling situation: despite having ample disk space, they hit an invisible wall when creating files. This typically manifests with errors like ENOSPC (No space left on device) even when df -h shows available storage.
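You can see the failure mode directly: file creation fails while block usage looks healthy (illustrative, with /data standing in for the affected mount):

touch /data/newfile
# touch: cannot touch '/data/newfile': No space left on device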

In ext4, inodes are the metadata structures that store information about files; every file, directory, and symlink consumes one. The key parameters are set at format time:

# Default mkfs.ext4 command creates 1 inode per 16KB
mkfs.ext4 /dev/sdX

# Custom format with more inodes:
mkfs.ext4 -i 8192 /dev/sdX  # 1 inode per 8KB
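If you want to know how many inodes a given ratio will yield before committing to it, mke2fs can do a dry run (same placeholder device as above):

# Preview the resulting geometry, inode count included, without writing anything
mkfs.ext4 -n -i 8192 /dev/sdX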

The theoretical maximum in ext4 is 2^32 inodes (~4.3 billion), but the count you actually get is fixed at format time and is usually far lower:

  • Default configuration: one inode per 16KiB of capacity, i.e. total_inodes = fs_size / 16384 (a 1TiB device gets 67,108,864; a 96GiB one gets 6,291,456)
  • Maximum configurable: up to the 2^32 ceiling, by formatting with a denser ratio (-i 1024) or an explicit count (-N)

Check current inode usage with these commands:

df -i  # Shows inode usage per filesystem
tune2fs -l /dev/sdX | grep -i inode  # Shows filesystem inode info
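When usage is climbing, GNU du (coreutils 8.22+) can tell you where the inodes are actually going - a quick sketch against a hypothetical /data mount:

# Rank top-level directories by inode consumption
du --inodes -x --max-depth=1 /data | sort -n | tail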

For applications that must store massive numbers of small files, consider these solutions:

# Solution 1: Reformat with a denser inode ratio (default is 16KiB per inode)
mkfs.ext4 -i 4096 /dev/sdX  # 1 inode per 4KiB; avoid pairing with -T largefile*, which cuts inodes

# Solution 2: Enable directory hashing (faster huge directories; adds no inodes)
tune2fs -O dir_index /dev/sdX

# Solution 3: Pack many small files into one container file (one inode total)
sqlite3 fragments.db "CREATE TABLE IF NOT EXISTS fs (path TEXT PRIMARY KEY, content BLOB);"  # example DB path
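Round-tripping content through that container is straightforward, since the stock sqlite3 shell ships readfile()/writefile() helpers (fragments.db and the file names here are placeholders):

# Millions of rows, one inode: store and retrieve a fragment
sqlite3 fragments.db "INSERT OR REPLACE INTO fs VALUES ('page42.html', readfile('page42.html'));"
sqlite3 fragments.db "SELECT writefile('restored.html', content) FROM fs WHERE path = 'page42.html';"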

A web scraping service handling 8M+ HTML fragments implemented this solution:

#!/bin/bash
# Migration script for inode-constrained systems

# 1. Create new filesystem with a dense inode ratio
#    (block size lowered to 2KiB: the per-group inode bitmap caps the ratio at one inode per block)
mkfs.ext4 -b 2048 -i 2048 -I 256 -J size=400 /dev/sdb1  # 256-byte inodes, 400MiB journal

# 2. Enable extra features
tune2fs -o journal_data_writeback /dev/sdb1        # make writeback journaling the default mode
tune2fs -O dir_index,has_journal,extent /dev/sdb1  # hashed directories; journal/extents are usually on already

# 3. Mount with optimal parameters
mount -o noatime,nodiratime,data=writeback /dev/sdb1 /mnt/bigstorage
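To keep those mount options across reboots, the matching /etc/fstab entry would look like this (same device and mount point as the script):

# /etc/fstab
/dev/sdb1  /mnt/bigstorage  ext4  noatime,nodiratime,data=writeback  0  2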

I recently ran into an interesting storage problem where an ext4 filesystem with multiple terabytes of free space suddenly refused to create new files. The culprit? We'd hit the filesystem's inode limit - 6,291,456 inodes, all in use - on a 16TB volume.


# Check your filesystem's inode stats:
df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/sdb1      6291456 6291456      0  100% /data

The 6 million figure isn't arbitrary - the inode count is computed once, when mkfs.ext4 runs, from these inputs:

  • Default bytes-per-inode ratio: 16,384 (16KiB)
  • Formula: total_inodes = fs_size_in_bytes / bytes_per_inode
  • For our 16TB FS at the default ratio: 16,000,000,000,000 / 16,384 = ~976,562,500 potential inodes

Since we had only 6,291,456 inodes - what the default ratio would yield on a 96GiB filesystem - this volume must have been formatted with a far larger bytes-per-inode value (the largefile mkfs profiles do exactly that), which is how a multi-terabyte filesystem can run dry at six million files. One red herring to ignore: the familiar 5% root reservation (tunable via -m) applies to blocks, not inodes.
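If you inherit a filesystem with unknown history, you can back out its effective bytes-per-inode ratio from tune2fs output - a sketch, with /dev/sdb1 as the placeholder device:

# Derive the effective bytes-per-inode ratio of an existing filesystem
BLOCKS=$(tune2fs -l /dev/sdb1 | awk -F: '/^Block count/ {print $2}' | tr -d ' ')
BSIZE=$(tune2fs -l /dev/sdb1 | awk -F: '/^Block size/ {print $2}' | tr -d ' ')
INODES=$(tune2fs -l /dev/sdb1 | awk -F: '/^Inode count/ {print $2}' | tr -d ' ')
echo "bytes-per-inode: $(( BLOCKS * BSIZE / INODES ))"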

This becomes particularly problematic in these scenarios:

  • Object storage systems with millions of small files
  • Email servers with massive maildirs
  • Scientific computing with numerous result files
  • IoT device logging at scale

Here are three approaches I've used successfully:

1. Formatting with Custom Inode Settings


# Create filesystem with more aggressive inode allocation (default ratio is 16KiB)
mkfs.ext4 -i 4096 /dev/sdX  # note: -T largefile* profiles cut inodes, the opposite of what we want

# Or for maximum inode density (1 inode per 1KiB; needs 1KiB blocks):
mkfs.ext4 -b 1024 -i 1024 /dev/sdX

# Or pin an explicit inode count rather than a ratio:
mkfs.ext4 -N 10000000 /dev/sdX

2. Rebuilding Existing Filesystems

ext4 cannot grow its inode table in place - resize2fs has no option to add inodes to existing block groups, and tune2fs can't either. (Growing the filesystem with resize2fs does bring new inodes, but only in the newly added block groups, at the original ratio.) Otherwise the fix is copy out, reformat, copy back:


# Inode count is fixed at format time; rebuild to change it
rsync -aHAX /data/ /backup/data/   # preserve hardlinks, ACLs, xattrs
umount /data
mkfs.ext4 -i 4096 /dev/sdX         # reformat with a denser ratio
mount /dev/sdX /data
rsync -aHAX /backup/data/ /data/

3. Architectural Workarounds

When you can't reformat, shard oversized flat directories so lookups and scans stay fast (note: this spends a few extra inodes on directories rather than freeing any - to actually reduce inode pressure, pack files into a container as in Solution 3 above):


# Shard flat *.log files into 256 hash-named subdirectories
for f in /data/*.log; do
  d=$(printf '%s' "$(basename "$f")" | md5sum | cut -c1-2)
  mkdir -p "/data/$d"
  mv "$f" "/data/$d/"
done
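Readers locate a file by recomputing the same two-character prefix (consistent with the loop above; the filename is an example):

# Find a sharded file again by recomputing its hash prefix
name="example.log"
d=$(printf '%s' "$name" | md5sum | cut -c1-2)
cat "/data/$d/$name"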

Add a check like this to your monitoring:


#!/bin/bash
# Alert when inode usage on /data crosses the threshold
THRESHOLD=90
INODE_USAGE=$(df -iP /data | awk 'NR==2 {gsub(/%/, "", $5); print $5}')  # -P prevents wrapping on long device names

if [ "$INODE_USAGE" -ge "$THRESHOLD" ]; then
  alert-sysadmin.sh "Inode crisis on /data: ${INODE_USAGE}% used"   # your alerting hook here
fi
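Wired into cron, the check runs unattended (the schedule and script path are examples):

# /etc/cron.d/inode-watch - run the check every 15 minutes
*/15 * * * * root /usr/local/bin/check-inodes.sh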