When working with ext3 filesystems on Linux servers, you might encounter the warning:
EXT3-fs warning (device dm-3): ext3_dx_add_entry: Directory index full!
This occurs when a directory's hash index (HTREE) can no longer grow. The ext3 filesystem uses a hashed B-tree (HTREE) structure to speed up directory lookups, but the index is limited to a fixed depth, so a directory holding a very large number of entries (or one that hits unlucky hash collisions) can exhaust it even though the filesystem still has plenty of free space and inodes.
First, verify your current inode usage:
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/ddf1_p3 77037568 9996012 67041556 13% /
Then check filesystem features with:
tune2fs -l /dev/mapper/ddf1_p3 | grep features
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
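To confirm that a specific directory is actually hash-indexed, check its inode flags, and optionally dump its hash tree with debugfs. A minimal check, assuming /data/www/lighttpd/images is the suspect directory (the path and device are taken from the examples in this post):
# The 'I' attribute marks a hash-indexed (HTREE) directory
lsattr -d /data/www/lighttpd/images
# Dump the directory's hash tree (read-only; the path is relative to the filesystem root)
debugfs -R "htree /data/www/lighttpd/images" /dev/mapper/ddf1_p3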
Immediate workaround:
# Temporarily disable directory indexing for the whole filesystem
# (re-enable later with tune2fs -O dir_index; remount so the change
# is definitely picked up by the running kernel):
tune2fs -O ^dir_index /dev/mapper/ddf1_p3
Permanent solution:
# 1. Backup the directory
rsync -a /problem/directory/ /backup/location/
# 2. Unmount the filesystem
umount /mountpoint
# 3. Run e2fsck with HTREE rebuild
e2fsck -fD /dev/mapper/ddf1_p3
# 4. Remount
mount /mountpoint
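Once remounted, it's worth confirming that the warning has actually stopped; the kernel message from the top of this post should no longer show up:
# Should return nothing after a successful rebuild
dmesg | grep -i "directory index full"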
For web servers serving static assets, consider these architectural improvements:
# Example Nginx configuration to split directories
location ~ ^/assets/([a-z0-9])/([a-z0-9])/(.*)$ {
    alias /data/assets/$1/$2/$3;
}
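For that location block to work, files must be written to a matching two-level path on disk. Here's a minimal sketch of the storage side, assuming file names start with at least two lowercase alphanumeric characters; the shard_put helper and the /data/assets root are illustrative, not an existing tool:
#!/bin/bash
# Store a file as /data/assets/<c1>/<c2>/<name> so Nginx can serve it
# at /assets/<c1>/<c2>/<name> via the location block above.
shard_put() {
    src=$1
    name=$(basename "$src")
    c1=${name:0:1}
    c2=${name:1:1}
    mkdir -p "/data/assets/$c1/$c2"
    mv "$src" "/data/assets/$c1/$c2/$name"
}
shard_put /tmp/uploads/logo.png   # ends up served as /assets/l/o/logo.png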
Or implement a sharding script for existing directories:
#!/bin/bash
# Directory sharding script: move each file into a two-character
# subdirectory derived from the MD5 of its name (not its full path,
# so the shard can be recomputed from the name alone).
for file in /data/www/lighttpd/images/*; do
    [ -f "$file" ] || continue          # skip the shard directories themselves
    name=$(basename "$file")
    hash=$(printf '%s' "$name" | md5sum | cut -c1-2)
    mkdir -p "/data/www/lighttpd/images/$hash"
    mv "$file" "/data/www/lighttpd/images/$hash/"
done
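Because the shard is derived from the file name alone (hashed without a trailing newline, as in the script), the web server or application can recompute it whenever it needs to locate a moved file:
# Prints the two-character shard directory for a given file name
printf '%s' "logo.png" | md5sum | cut -c1-2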
If the issue persists after the HTREE rebuild, consider:
- Upgrading to ext4, which has an improved HTREE implementation
- Moving directories with millions of files to XFS (see the sketch after this list)
- Checking for filesystem corruption with a read-only pass:
fsck -fn /dev/device
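If you go the XFS route for the worst offenders, the migration itself is straightforward; a rough sketch, assuming a spare block device (here /dev/sdX1, purely a placeholder) and xfsprogs installed:
# Create and mount an XFS filesystem for the busiest directory tree
mkfs.xfs /dev/sdX1
mkdir -p /data/assets-xfs
mount -t xfs /dev/sdX1 /data/assets-xfs
# Copy the data across, then repoint the web server at the new location
rsync -a /data/www/lighttpd/images/ /data/assets-xfs/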
During routine FTP operations via proftpd, kernel logs started showing repetitive warnings:
EXT3-fs warning (device dm-3): ext3_dx_add_entry: Directory index full!
Sep 16 15:30:34 xx last message repeated 489 times
This occurred while serving static assets through lighttpd 1.4.28-1 on CentOS 5.3, despite the filesystem showing plenty of available inodes (13% usage).
ext3 supports two directory formats:
1. Classic linear directories (old-style)
2. Hash-indexed directories (dx, i.e. the HTREE)
The dx index is limited to two levels of index blocks, which hold roughly:
- First level: ~500 entries
- Second level: ~65,000 entries
The error occurs when:
1. The directory uses dx indexing (the dir_index feature is enabled)
2. The directory has grown to roughly ~65k files or more (the exact point depends on filename length and hash collisions)
3. The index becomes full and cannot split any further
Key verification commands:
# Check filesystem features
tune2fs -l /dev/mapper/ddf1_p3 | grep features
# Count actual files in problematic directory
find /data/www/lighttpd -xdev -type f | wc -l
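A useful complement is to look for directories whose own on-disk size has ballooned, which is a direct sign of a huge number of entries; the 1 MiB cutoff below is arbitrary:
# Directories larger than ~1 MiB hold a very large number of entries
find /data/www/lighttpd -xdev -type d -size +1024k -exec ls -ldh {} \;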
Immediate Workaround
The index flag on a single directory cannot be cleared with chattr, so the quick option is to disable indexing for the whole filesystem:
# Clear the dir_index feature, then let e2fsck re-pack the directories
# (run against the unmounted filesystem):
tune2fs -O ^dir_index /dev/mapper/ddf1_p3
e2fsck -fD /dev/mapper/ddf1_p3
Or recreate the directory:
mkdir /data/www/lighttpd/new_dir
rsync -a /data/www/lighttpd/problem_directory/ /data/www/lighttpd/new_dir/
rm -rf /data/www/lighttpd/problem_directory
mv /data/www/lighttpd/new_dir /data/www/lighttpd/problem_directory
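Recreating the directory helps because an ext3 directory never shrinks once it has grown, even after entries are deleted; the rsync into a fresh directory produces a compact, newly built index. Comparing the directory's own size before and after makes the difference obvious:
ls -ld /data/www/lighttpd/problem_directory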
Long-term Solutions
Option 1: Directory restructuring
# Example sharding script (hashes the file name so the shard can be
# recomputed from the name alone)
for file in /data/www/lighttpd/large_dir/*; do
    [ -f "$file" ] || continue
    prefix=$(printf '%s' "$(basename "$file")" | md5sum | cut -c1-2)
    mkdir -p "/data/www/lighttpd/shards/$prefix"
    mv "$file" "/data/www/lighttpd/shards/$prefix/"
done
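To sanity-check the result, count how many files landed in each shard; with an MD5-based prefix the distribution should be roughly even:
# Files per shard, largest first
for d in /data/www/lighttpd/shards/*/; do
    printf '%6d %s\n' "$(find "$d" -maxdepth 1 -type f | wc -l)" "$d"
done | sort -rn | head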
Option 2: Filesystem conversion
# Convert to ext4 in place (requires an ext4-capable kernel and e2fsprogs,
# and must be done with the filesystem unmounted)
tune2fs -O dir_index,extents,uninit_bg /dev/mapper/ddf1_p3
fsck -pf /dev/mapper/ddf1_p3
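After the conversion the filesystem has to be mounted as ext4 rather than ext3, so update the corresponding /etc/fstab entry; for the root filesystem from the df output above it would look roughly like this (the mount options are only an example):
/dev/mapper/ddf1_p3  /  ext4  defaults,noatime  1 1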
For lighttpd configurations:
# In lighttpd.conf
server.follow-symlink = "enable"
server.upload-dirs = ( "/var/tmp" )
# Use the writev() network backend
server.network-backend = "writev"
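After touching lighttpd.conf, a quick syntax check before reloading avoids taking the site down on a typo:
lighttpd -t -f /etc/lighttpd/lighttpd.conf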
Filesystem monitoring script example:
#!/bin/bash
# Warn via syslog about directories that are approaching the index limit.
THRESHOLD=50000
find /data/www/lighttpd -type d | while read -r dir; do
    count=$(find "$dir" -maxdepth 1 -type f | wc -l)
    if [ "$count" -gt "$THRESHOLD" ]; then
        logger -t DIRWARN "Large directory detected: $dir ($count files)"
    fi
done
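The script can then be scheduled from cron; for example an hourly entry (the script path /usr/local/sbin/check_large_dirs.sh is only a suggested location):
# /etc/cron.d/dirwarn
0 * * * *  root  /usr/local/sbin/check_large_dirs.sh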