When dealing with massive ext3 filesystems (2.7TB in this case), fsck's directory block allocation becomes memory-intensive. The standard approach fails with:
Error allocating directory block array: Memory allocation failed
e2fsck: aborted
On 32-bit systems with limited RAM (512MB here), even substantial swap space (4GB) may not suffice, because a single 32-bit process cannot address more than roughly 3GB no matter how much swap is configured.
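Before starting, it is worth confirming what the machine actually has to work with:
free -m            # physical RAM and swap, in MB
swapon -s          # active swap devices and their sizes
getconf LONG_BIT   # 32 or 64: whether the ~3GB per-process ceiling applies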
Tune2FS Prevention: Disable the automatic periodic checks so an unattended boot never launches the memory-hungry fsck in the first place:
tune2fs -c 0 -i 0 /dev/sdX
This disables both mount-count-based (-c 0) and time-based (-i 0) checking, while still allowing a manual fsck whenever a maintenance window permits.
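To confirm the change took effect, read the relevant superblock fields back:
tune2fs -l /dev/sdX | grep -E 'Maximum mount count|Check interval'
# Expect "Maximum mount count: -1" and "Check interval: 0 (<none>)"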
Alternative Checking Tools: Consider xfs_repair if reformatting to XFS (and restoring the data from backup) is an option:
mkfs.xfs -f /dev/sdX   # destroys all existing data; restore from backup afterwards
xfs_repair /dev/sdX    # run with the filesystem unmounted
Staged Checking: e2fsck cannot run its internal passes individually, but you can start with its lighter modes before committing to a full repair:
fsck.ext3 -p -f -v /dev/sdX   # preen mode: fixes only safe, routine problems
fsck.ext3 -n /dev/sdX         # read-only check, answers "no" to every repair
Filesystem Resizing: Temporarily shrink the filesystem so fsck has less metadata to track (only viable if the used data fits in the smaller size):
resize2fs /dev/sdX 500G   # temporary shrink; requires unmount, and resize2fs will ask for e2fsck -f first
fsck.ext3 /dev/sdX
resize2fs /dev/sdX        # grow back to fill the device
Create a wrapper script to manage system resources during fsck:
#!/bin/bash
sync                                   # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries and inodes
ionice -c 3 fsck.ext3 -f -y /dev/sdX   # run fsck in the idle I/O class
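Run it from a console session or under nohup/screen so a dropped SSH connection cannot kill the check halfway through; the script path below is just an example:
chmod +x /usr/local/sbin/fsck-wrapper.sh
nohup /usr/local/sbin/fsck-wrapper.sh > /var/log/fsck-wrapper.log 2>&1 &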
Add these to /etc/sysctl.conf for better memory handling:
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
vm.swappiness = 60
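Load the new values without a reboot:
sysctl -p                     # re-read /etc/sysctl.conf
sysctl vm.overcommit_memory   # confirm the setting took effect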
For mission-critical systems, consider these last resorts:
- Use dd to create an image of the partition and check the copy offline, ideally on a 64-bit machine with more RAM (see the sketch after this list)
- Mount read-only and perform selective checks on critical directories
- Stagger checks across filesystem sub-trees using find+xargs
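A minimal sketch of the dd approach, assuming /mnt/backup has at least 2.7TB free and /dev/loop0 is unused:
dd if=/dev/sdX of=/mnt/backup/sdX.img bs=64M conv=noerror,sync
losetup /dev/loop0 /mnt/backup/sdX.img
fsck.ext3 -f -n /dev/loop0   # read-only check against the copy, not the live disk
losetup -d /dev/loop0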
When dealing with legacy systems running Debian Etch (or similar older distributions), memory constraints pose significant challenges for filesystem maintenance. The specific case involves:
- 512MB physical RAM
- 32-bit kernel limitations
- 2.7TB ext3 filesystem
- Failed fsck with "Error allocating directory block array"
ext3's fsck implementation requires loading the entire directory structure into memory. The formula for estimating required memory is:
Memory_needed = (num_directories * 40 bytes) + overhead
For a 2.7TB filesystem with millions of directories, this easily exceeds 512MB. Even with 4GB swap, the 32-bit address space limitation (typically ~3GB user space) becomes the hard ceiling.
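To put a number on a particular filesystem, count its directories and plug them into the formula above. A minimal sketch, assuming the filesystem can still be mounted (read-only is fine) at a placeholder mount point:
#!/bin/bash
MOUNTPOINT=/mnt/data   # adjust to the real mount point
DIRS=$(find "$MOUNTPOINT" -xdev -type d 2>/dev/null | wc -l)
echo "Directories found: $DIRS"
echo "Estimated directory array: $(( DIRS * 40 / 1024 / 1024 )) MB (plus overhead)"
For example, ten million directories already come to roughly 400MB before any overhead.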
1. Force fsck with Reduced Memory Footprint
Use these e2fsck flags to minimize memory usage:
e2fsck -f -n /dev/sdX
Where:
-f : Force the check even if the filesystem is marked clean
-n : Open the filesystem read-only and answer "no" to every question (safety measure)
The often-suggested -D flag (optimize directories) compacts directories and can lower the cost of future checks, but it rewrites directory blocks and so conflicts with the read-only -n; save it for the follow-up repair run.
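In practice that means two runs; a sketch, assuming the filesystem is unmounted:
e2fsck -f -n /dev/sdX      # pass 1: survey only, nothing is written
e2fsck -f -D -y /dev/sdX   # pass 2: repair, optimizing directories as it goes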
2. Incremental Checking with Scripting
Break the check into manageable chunks:
#!/bin/bash
# Walk the block bitmap in chunks with debugfs. This is not a full consistency
# check, but it inspects the metadata incrementally with a small, bounded footprint.
DEVICE=/dev/sdX
CHUNK=65536   # filesystem blocks per pass (256MB at the default 4K block size)
FS_BLOCKS=$(dumpe2fs -h "$DEVICE" 2>/dev/null | awk '/^Block count:/ {print $3}')
for ((START=0; START<FS_BLOCKS; START+=CHUNK)); do
    COUNT=$CHUNK
    ((START + COUNT > FS_BLOCKS)) && COUNT=$((FS_BLOCKS - START))
    debugfs -R "testb $START $COUNT" "$DEVICE"   # report which blocks are marked in use
done
3. Alternative Tools for Large Filesystems
Consider these specialized utilities:
# For a metadata survey without attempting any repairs:
dumpe2fs -h /dev/sdX
# For offline checking against a backup superblock (requires unmount):
umount /dev/sdX
fsck.ext3 -b 32768 /dev/sdX   # 32768 is a common backup location; list them with mke2fs -n /dev/sdX
For systems where reboot is possible but 64-bit upgrade isn't:
# Install a highmem kernel if the CPU supports PAE
# (the flavour is "686-bigmem" on Etch; later releases call it "686-pae"):
apt-get install linux-image-2.6-686-bigmem
# Note: PAE lets the kernel address more RAM; it does not raise the ~3GB per-process limit.
# Adjust VM settings:
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=95
For future maintenance:
- Schedule regular checks with tune2fs -c 100 /dev/sdX (check every 100 mounts)
- Implement LVM snapshots before major checks (see the sketch after this list)
- Consider migrating to XFS for >1TB filesystems
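A minimal sketch of the snapshot approach, assuming the filesystem actually sits on LVM (vg0/data and the 20G copy-on-write reserve are placeholders):
lvcreate -L 20G -s -n datasnap /dev/vg0/data
fsck.ext3 -f -n /dev/vg0/datasnap   # check the frozen copy, not the live volume
lvremove -f /dev/vg0/datasnap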