When using mdadm
to convert from RAID-1 to RAID-5, the array must initially contain exactly 2 devices. This constraint exists because:
- A 2-device RAID-1 has the same on-disk layout as a 2-device RAID-5 (the single parity block of each stripe is simply a copy of the data block), so the kernel can take over the array in place without moving any data
- With three or more mirrors there is no equivalent RAID-5 layout, so a direct takeover is not possible
- RAID-5 only becomes useful at 3+ devices, so the extra devices are added afterwards as spares and pulled in by a reshape; a quick way to check the array's current state is shown below
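Before attempting anything, it is worth confirming what mdadm currently reports; the level change will only be accepted once the target shows a two-device RAID-1 (the array name here is an example):
# Check current level, active device count and state
mdadm --detail /dev/md2 | grep -E 'Raid Level|Raid Devices|State'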
For your specific 3-device RAID-1 arrays (device names follow the example layout shown further below), follow this sequence:
# First free one device from each of the two donor arrays,
# then shrink them to 2-device mirrors so they are not left degraded
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=2
mdadm --manage /dev/md1 --fail /dev/sdf1
mdadm --manage /dev/md1 --remove /dev/sdf1
mdadm --grow /dev/md1 --raid-devices=2
Before the level change, the target array itself must be running with exactly two active devices, so shrink /dev/md2 from a three-way to a two-way mirror first, then add the freed devices; they join as spares until the reshape incorporates them (the removed /dev/sdi1 can later be re-added as a hot spare if you want one):
# Shrink the target mirror to 2 active devices
mdadm --manage /dev/md2 --fail /dev/sdi1
mdadm --manage /dev/md2 --remove /dev/sdi1
mdadm --grow /dev/md2 --raid-devices=2
# Add the freed devices; they appear as spares for now
mdadm --manage /dev/md2 --add /dev/sdc1
mdadm --manage /dev/md2 --add /dev/sdf1
# Verify: 2 active devices plus 2 spares
mdadm --detail /dev/md2
Convert the array to a 4-device RAID-5 (mdadm performs the 2-device takeover and then reshapes onto the spares; if your mdadm refuses the combined call, run the --level=5 change first and the --raid-devices=4 grow as a second step):
# Change level and reshape onto the spares
mdadm --grow /dev/md2 --level=5 --raid-devices=4
# Monitor progress (in another terminal)
watch -n 1 cat /proc/mdstat
# After completion, verify the new layout
mdadm --detail /dev/md2
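Once the reshape finishes, the array exposes roughly three devices' worth of capacity instead of one, so the filesystem on top needs to be enlarged as well. A minimal sketch, assuming an ext4 filesystem sits directly on /dev/md2 (XFS would use xfs_growfs on the mount point instead, and LVM needs a pvresize/lvextend step first):
# Grow the filesystem into the new capacity (ext4 supports online growth)
resize2fs /dev/md2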
Common issues during conversion:
- Kernel and mdadm versions: level changes and reshapes are far more reliable on reasonably recent kernels (roughly 3.x onward) and mdadm 3.x or later; check both before starting (see the quick check below)
- Device sizes: all members should be the same size, since RAID-5 capacity is limited by the smallest member, and the reshape may also ask for a small --backup-file on a device outside the array
- Backup strategy: always have current, tested backups before conversion
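A quick pre-flight check along these lines catches most of the above (member names are examples taken from the layout further down):
# Kernel and mdadm versions
uname -r
mdadm --version
# Confirm all prospective members are the same size
for d in /dev/sdg1 /dev/sdh1 /dev/sdc1 /dev/sdf1; do blockdev --getsize64 "$d"; done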
Post-conversion tuning recommendations:
# Optimize read-ahead for RAID-5 (value is in 512-byte sectors; 65536 = 32 MiB)
blockdev --setra 65536 /dev/md2
# Enlarge the stripe cache (costs roughly value x 4 KiB x member count of RAM, and resets on reboot)
echo 32768 > /sys/block/md2/md/stripe_cache_size
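The sysfs setting above does not survive a reboot. One way to persist it is a udev rule; the file name and value here are just suggestions:
# Write an example udev rule that reapplies the stripe cache size for md2
cat > /etc/udev/rules.d/60-md-stripe-cache.rules <<'EOF'
SUBSYSTEM=="block", KERNEL=="md2", ACTION=="add|change", ATTR{md/stripe_cache_size}="32768"
EOF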
When working with mdadm, a RAID-1 array must be down to exactly two devices before it can be converted to RAID-5. This is not an arbitrary limitation: a two-device mirror is laid out identically to a two-device RAID-5, because the lone parity block of each stripe is just a copy of its data block, so the kernel can switch the array's personality in place. Once a third or fourth mirror is involved there is no RAID-5 layout that matches the existing data, which is why additional devices have to be attached afterwards as spares and incorporated by a reshape.
Your current setup with three RAID-1 arrays (each containing 3 devices) presents an interesting challenge. Let's break down the correct transformation path:
# Current device layout (example):
/dev/sda1 /dev/sdb1 /dev/sdc1 → RAID-1 (Array1)
/dev/sdd1 /dev/sde1 /dev/sdf1 → RAID-1 (Array2)
/dev/sdg1 /dev/sdh1 /dev/sdi1 → RAID-1 (Array3)
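To confirm what the layout actually looks like on your machine before touching anything, the usual checks are:
# Array membership and sync state
cat /proc/mdstat
# Block device tree with sizes and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT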
# Step 1: Reduce the two donor arrays to 2 devices each
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=2
# Repeat for the second donor array, freeing /dev/sdf1 from /dev/md1
Once the target is a two-device mirror with the freed devices attached as spares, the conversion sequence is:
# Verify array status first
mdadm --detail /dev/md2
# Grow array to RAID-5 layout
mdadm --grow /dev/md2 --level=5 --raid-devices=4
# Monitor progress (in another terminal)
watch cat /proc/mdstat
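If mdadm refuses to start the reshape without somewhere to stash the critical section (more common with older mdadm releases), point it at a backup file on a device that is not part of the array; the path here is just an example:
# Same conversion, with an explicit backup file
mdadm --grow /dev/md2 --level=5 --raid-devices=4 --backup-file=/root/md2-reshape.backup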
Several technical factors come into play during this operation:
- The metadata version matters: modern 1.2 superblocks generally let the reshape relocate the data offset instead of relying on a backup file, while 1.0 (and especially legacy 0.90) arrays are more restricted; a quick check is shown after this list
- Filesystem alignment changes: RAID-5 introduces a chunk size and stripe width that the filesystem should be aligned to (stride/stripe-width for ext4), which a plain mirror never had
- Performance characteristics will shift noticeably after the conversion, particularly for small random writes
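You can check the metadata version now, and the chunk size once the array is running as RAID-5 (the member name is an example):
# Metadata version and chunk size as seen by the array
mdadm --detail /dev/md2 | grep -E 'Version|Chunk Size'
# The same information from an individual member's superblock
mdadm --examine /dev/sdg1 | grep -E 'Version|Chunk Size'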
Expect significant changes in I/O behaviour (a quick way to measure the difference is sketched after this comparison):
# Before conversion (RAID-1)
Reads: Can be served in parallel from any mirror
Writes: Each write goes in full to every device
# After conversion (RAID-5)
Reads: Striped across the data devices; parity is only read when the array is degraded
Writes: Small writes trigger a read-modify-write cycle to keep parity up to date
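A rough way to quantify the small-write penalty is a short fio run against a test file, executed before and after the conversion. This assumes fio is installed and that /dev/md2 is mounted at /mnt/array (both are placeholders for your setup):
# 30 seconds of 4 KiB random writes against a 1 GiB test file
fio --name=md2-randwrite --filename=/mnt/array/fio-test.bin --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based
rm /mnt/array/fio-test.bin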
In some cases, especially with large arrays, it can be simpler and faster to skip the in-place reshape entirely (a rough command sketch follows this list):
- Backup data completely
- Destroy existing arrays
- Create new RAID-5 array
- Restore data
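A minimal sketch of that path for one array, using the example device names from above; the mount point, backup location and filesystem are assumptions, and --zero-superblock plus --create permanently destroy the old array, so verify the backup first:
# Stop the old array and wipe its superblocks
umount /mnt/array
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1
# Create a fresh 3-device RAID-5, put a filesystem on it, and restore
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
rsync -aHAX /backup/array/ /mnt/array/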