How to Recover Corrupted ext4 Filesystem After Failed fsck on LUKS-Encrypted RAID5 Array


When attempting to expand my 9TB ext4 filesystem on a LUKS-encrypted RAID5 array (six 2TB drives being upgraded to 3TB units), I encountered a cascade of filesystem corruption after running e2fsck. The initial symptoms included:

EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended
mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox

The key diagnostic outputs revealed:

EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
e2fsck: Group descriptors look bad... trying backup blocks...
e2fsck: unable to set superblock flags on candy

Here's the step-by-step approach I developed:

1. Create a Disk Image

# First pass: copy everything readable and record bad areas in the mapfile
ddrescue /dev/mapper/candybox candybox.img candybox.ddlog
# Second pass: retry the previously unreadable areas up to two more times
ddrescue -r 2 /dev/mapper/candybox candybox.img candybox.ddlog
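
Assuming GNU ddrescue and its companion tool ddrescuelog are available, the mapfile can tell you how complete the image actually is, and the image is safest attached read-only while you experiment:

# Summarise rescued vs. bad areas recorded in the mapfile
ddrescuelog -t candybox.ddlog
# Attach the image read-only so later experiments cannot modify the only copy
losetup --find --show --read-only candybox.img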

2. Attempt Superblock Recovery

# -S rewrites only the superblock and group descriptors; mke2fs's manual requires a full e2fsck immediately afterwards
mkfs.ext4 -L candy candybox.img -m 0 -S
e2fsck -fy candybox.img
mount -o loop candybox.img /mnt2
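
Before resorting to -S at all, it is worth pointing e2fsck or mount at one of the backup superblocks. A sketch, assuming the filesystem was created with default parameters (so 32768 is the first backup on a 4K-block layout):

mkfs.ext4 -n candybox.img                        # -n = dry run, only prints where the backup superblocks live
e2fsck -b 32768 -B 4096 candybox.img             # check against the first backup superblock
mount -o loop,ro,sb=131072 candybox.img /mnt2    # sb= is given in 1K units (32768 * 4)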

3. Advanced Recovery Techniques

For deeper recovery, we can use debugfs:

debugfs -w candybox.img
debugfs: lsdel                                # list recently deleted inodes
debugfs: stat <inode_number>                  # inspect one of them
debugfs: dump <inode_number> recovered_file   # copy its contents out of the image
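
If whole directories are still intact, debugfs can also copy them out in one pass. A small sketch with placeholder paths (rdump takes a directory inside the filesystem and a local destination):

debugfs -R "rdump /photos /mnt/rescue-target" candybox.img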

Or try testdisk for partition recovery:

testdisk /dev/mapper/candybox
[Proceed] > [Intel] > [Analyze] > [Quick Search] > [Write]
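
If the directory structure is too far gone for TestDisk, PhotoRec (shipped in the same package) can still carve files by signature from the image copy; it recovers contents but not names or paths:

photorec candybox.img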

For programmers dealing with similar scenarios, consider these approaches:

# Try mounting with journal recovery disabled
mount -o ro,noload /dev/mapper/candybox /mnt/recovery

# Use ext4magic for journal-based recovery of a specific path
# (-j expects a saved copy of the ext4 journal, not a directory such as /var/log/journal)
ext4magic /dev/mapper/candybox -r -f <path/to/recover>
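
Before any of the above, it is worth a non-destructive sanity check that the superblock is even readable and that the kernel has not logged further errors:

dumpe2fs -h /dev/mapper/candybox    # print only the superblock, read-only
dmesg | grep -i ext4                # recent kernel messages about the filesystem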

Key takeaways for developers managing large storage systems:

- Always maintain backup superblocks: dumpe2fs /dev/mapper/candybox | grep -i superblock
- Consider taking an LVM snapshot before major operations (a minimal sketch follows this list)
- Monitor filesystem health proactively: smartctl -a /dev/sdX
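
For the LVM point above, a snapshot taken right before a risky operation gives you an instant rollback target. A minimal sketch with hypothetical volume group and logical volume names:

# Hypothetical names: volume group "vg0", logical volume "data"
lvcreate --snapshot --name pre-resize-snap --size 50G /dev/vg0/data
# ...perform the resize/upgrade, then either remove the snapshot or roll back:
lvconvert --merge /dev/vg0/pre-resize-snap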


After performing a drive upgrade procedure on my LUKS-encrypted RAID5 array, I encountered catastrophic filesystem corruption when attempting to resize the ext4 partition. The system had been functioning perfectly for years until I began replacing 2TB drives with 3TB units using standard mdadm procedures:

# For each drive in turn: fail it out, remove it, add the larger replacement,
# and let the rebuild finish before touching the next drive
mdadm --fail /dev/mdX /dev/sdX1
mdadm --remove /dev/mdX /dev/sdX1
mdadm --add /dev/mdX /dev/sdY1
# After the last drive has been replaced and rebuilt, grow the array
mdadm --grow /dev/mdX --size=max
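
For reference, on a LUKS-on-md stack like this the grow step is only the start; the usual follow-on sequence (a sketch of the standard procedure, not a transcript of my session) is:

mdadm --wait /dev/mdX             # let any resync/reshape finish first
cryptsetup resize candybox        # grow the LUKS mapping to the new array size
resize2fs /dev/mapper/candybox    # finally grow the ext4 filesystem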

The fatal error occurred when I blindly ran e2fsck -y without proper investigation after encountering the "filesystem is dirty" warning. This aggressive approach led to:

  • Massive inode removal messages
  • Mount failures with superblock errors
  • Checksum validation failures across all block groups
  • Soft lockups during recovery attempts
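
In hindsight, a read-only dry run would have shown how invasive the repair was going to be before anything was written; something like:

# -n answers "no" to every question: a preview of the damage with no writes made
e2fsck -fn /dev/mapper/candybox 2>&1 | tee fsck-preview.log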

After creating a full disk image with ddrescue, I experimented with multiple recovery approaches:

# Basic recovery attempt
ddrescue /dev/mapper/candybox candybox.img candybox.logfile

# Superblock reconstruction
mkfs.ext4 -L candy candybox.img -m 0 -S

# Read-only mount with journal disabled
mount -o loop,ro,noload candybox.img /mnt/recovery
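
Whenever one of these read-only mounts succeeds, the sensible next move is to copy everything off immediately rather than keep probing the damaged filesystem. A sketch with a placeholder destination:

# Destination is a placeholder; preserve permissions, ACLs and xattrs where possible
rsync -aHAX --progress /mnt/recovery/ /safe/backup/location/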

When standard tools failed, I implemented these techniques:

1. Superblock Reconstruction

Attempting to rebuild critical filesystem structures:

debugfs -w candybox.img
debugfs: stats                     # show superblock statistics
debugfs: testi <inode_number>      # test whether a given inode is marked in use
debugfs: ncheck inode_number       # map an inode number back to its path name(s)
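
Before experimenting further in write mode, it is worth capturing a metadata-only image with e2image; it is small and gives you something safe to retry structural repairs against:

# Saves all filesystem metadata (no file data) for offline analysis
e2image candybox.img candybox.e2i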

2. Journal Recovery Procedures

Stripping the damaged journal, then re-checking against a backup superblock:

tune2fs -f -O ^has_journal candybox.img    # force-remove the (possibly corrupt) journal
e2fsck -fy -b 32768 -B 4096 candybox.img   # re-check using the backup superblock at 32768 (4K block size)
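
If that pass eventually comes back clean, the journal can be recreated afterwards:

tune2fs -j candybox.img    # add a fresh journal to the now-clean filesystem
e2fsck -f candybox.img     # one final consistency check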

3. Low-Level Forensic Analysis

Using The Sleuth Kit for deep filesystem inspection:

fls -rdp candybox.img                        # recursively list deleted entries with full paths
icat candybox.img [inode] > recovered_file   # extract a file's contents by inode number
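
The Sleuth Kit can also bulk-extract instead of going inode by inode; if memory serves, tsk_recover walks the image and writes whatever it can recover into a target directory:

# -e attempts every file (allocated and deleted); the output directory is a placeholder
tsk_recover -e candybox.img /mnt/recovered/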

Key takeaways from this recovery ordeal:

  • Record the locations of your backup superblocks before you need them
  • Implement proper monitoring for filesystem errors
  • Document all array modification procedures
  • Consider using XFS for large storage arrays

For engineers facing similar situations:

  1. Create complete disk image (ddrescue preferred)
  2. Attempt read-only mounting first
  3. Work on copies, never original media
  4. Document every command and output
  5. Consider professional data recovery services for critical data