When expanding storage capacity while also improving performance, many sysadmins face the challenge of converting an existing RAID 1 array to RAID 10. The mdadm
utility offers no direct conversion path from RAID 1 to RAID 10, so the change requires careful manual intervention.
The key limitations we're working with:
- No direct raid-level conversion in mdadm
- Must maintain data integrity throughout
- Requires temporary loss of redundancy during migration
- Needs precise timing of device operations
Here's the safest approach I've tested in production environments:
# 1. Verify current RAID 1 status
mdadm --detail /dev/md0
# 2. Create a degraded RAID10 from the two NEW disks (assumed here to be /dev/sdc and /dev/sdd),
#    leaving one slot of each mirror pair "missing" for the disks still serving the RAID 1
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc missing /dev/sdd missing
#    (create a filesystem on /dev/md1 and mount it at /mnt/raid10 before step 3)
# 3. Copy data while maintaining original array
rsync -aHAX --progress /mnt/raid1/ /mnt/raid10/
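Because the source array stays live during the first pass, it is common to quiesce writes and run a short catch-up pass just before the cutover (same flags and mount points assumed as above):
# Final catch-up pass once services writing to /mnt/raid1 have been stopped
rsync -aHAX --delete --progress /mnt/raid1/ /mnt/raid10/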
The most delicate part comes when moving disks between arrays:
# 4. Remove first disk from RAID1
mdadm /dev/md0 --fail /dev/sda
mdadm /dev/md0 --remove /dev/sda
# 5. Add to RAID10 and wait for sync
mdadm /dev/md1 --add /dev/sda
watch cat /proc/mdstat
After the first disk sync completes:
# 6. Repeat for second disk
mdadm /dev/md0 --fail /dev/sdb
mdadm /dev/md0 --remove /dev/sdb
mdadm /dev/md1 --add /dev/sdb
# 7. Verify new array
mdadm --detail /dev/md1
For environments where even a temporary loss of redundancy is unacceptable (sketched after this list):
- Create a new two-device RAID 10 (near-2 layout) on the two additional disks, so it is redundant from the start
- Sync data from the RAID 1 to the RAID 10
- Retire the original RAID 1 and repurpose its disks as additional members
- Grow the RAID 10 out to four devices
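A minimal sketch of that variant, assuming /dev/sdc and /dev/sdd are the new disks and a kernel recent enough to reshape RAID 10:
# Two-device RAID 10 (near-2 layout) on the new disks - redundant from the start
mdadm --create /dev/md1 --level=10 --raid-devices=2 /dev/sdc /dev/sdd
# ...create a filesystem, copy the data across, then retire the old RAID 1...
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda /dev/sdb
# Reuse the old disks and grow the array to its final four-device shape
mdadm /dev/md1 --add /dev/sda /dev/sdb
mdadm --grow /dev/md1 --raid-devices=4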
For frequent migrations, consider this bash script skeleton:
#!/bin/bash
set -euo pipefail

# Validate input parameters
if [ $# -ne 3 ]; then
    echo "Usage: $0 source_md target_md new_disks" >&2
    exit 1
fi
SOURCE_MD="$1"
TARGET_MD="$2"
NEW_DISKS="$3"
# Implementation would include:
# - Safety checks
# - Progress monitoring
# - Automated failover handling
# - Logging all operations
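As one example of those safety checks, here is a hypothetical helper (not part of the skeleton above) that refuses to proceed when the source array is already degraded:
require_healthy_array() {
    # Abort if mdadm reports the given array as degraded
    local md="$1"
    if mdadm --detail "$md" | grep -qi 'degraded'; then
        echo "Refusing to continue: $md is degraded" >&2
        exit 1
    fi
}
require_healthy_array "$SOURCE_MD"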
To recap before the detailed walkthrough: when managing Linux software RAID arrays, one common limitation is mdadm's inability to convert directly between RAID levels. Expanding from a 2-disk RAID 1 to a 4-disk RAID 10 configuration presents several technical challenges:
- No native mdadm command for RAID level conversion
- Potential data vulnerability during transition
- Complexity in maintaining redundancy throughout the process
After extensive testing on various Linux distributions (Ubuntu 22.04, CentOS Stream 9, Debian 11), I've refined a reliable procedure that maintains data integrity. Note that this walkthrough uses partitions (e.g. /dev/sdc1) as the array members rather than whole disks:
# Step 1: Verify current RAID 1 status
mdadm --detail /dev/md0
cat /proc/mdstat
# Step 2: Prepare new disks (assuming /dev/sdc and /dev/sdd are new)
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary 1MiB 100%
parted /dev/sdc set 1 raid on
# Repeat for /dev/sdd
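The same preparation can be scripted instead of repeated by hand (assuming /dev/sdc and /dev/sdd are blank disks dedicated to the new array):
for disk in /dev/sdc /dev/sdd; do
    parted --script "$disk" mklabel gpt
    parted --script "$disk" mkpart primary 1MiB 100%
    parted --script "$disk" set 1 raid on
done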
The key is to build a degraded RAID 10 array first, then migrate data:
# Create degraded RAID 10 with missing devices
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
# Verify array creation
mdadm --detail /dev/md1
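Before copying data with rsync, the degraded array needs a filesystem and a mount point; a minimal sketch assuming ext4 and the /mnt/raid10 mount point used below (skip this if you use the dd alternative further down):
# Filesystem choice and mount point are assumptions - match them to your setup
mkfs.ext4 /dev/md1
mkdir -p /mnt/raid10
mount /dev/md1 /mnt/raid10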
This approach minimizes downtime while ensuring data safety:
# Rsync while maintaining original RAID 1 integrity
rsync -aHAXv --delete /mnt/raid1/ /mnt/raid10/
# Alternative: use dd for a block-level copy (can be faster for large arrays, but both arrays must be unmounted)
dd if=/dev/md0 of=/dev/md1 bs=1M status=progress
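If you take the dd route, remember that the copy duplicates the smaller RAID 1 layout onto a larger device, so the filesystem should be checked and grown afterward (a sketch assuming ext4; XFS would use xfs_growfs instead):
# Check the copied filesystem, then expand it to fill the four-disk RAID 10
e2fsck -f /dev/md1
resize2fs /dev/md1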
The most critical phase requires careful execution:
# Fail and remove one disk from RAID 1
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# Add to RAID 10 array
mdadm /dev/md1 --add /dev/sdb1
# Monitor rebuild progress
watch cat /proc/mdstat
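If you are scripting the migration rather than watching interactively, mdadm can block until the recovery finishes:
# Returns once any resync/recovery on the array has completed
mdadm --wait /dev/md1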
Once the rebuild of /dev/sdb1 has finished, complete the migration by incorporating the last disk:
# Unmount and stop the original RAID 1, then wipe the remaining member's superblock
umount /mnt/raid1
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
# Add final disk to RAID 10
mdadm /dev/md1 --add /dev/sda1
# Update mdadm.conf (then remove any stale ARRAY line left for the old /dev/md0)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Update initramfs (for Debian/Ubuntu)
update-initramfs -u
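On the RHEL-family systems mentioned earlier (e.g. CentOS Stream 9), the equivalent step rebuilds the initramfs with dracut:
# CentOS/RHEL counterpart of update-initramfs -u
dracut -f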
For systems requiring maximum uptime, consider these variations:
- LVM-assisted migration: create LVM on top of the RAID 1 first, then move extents onto the new array online with pvmove (see the sketch after this list)
- Virtual device method: use device mapper to create a temporary overlay of the old array while data is copied
- Full backup/restore: simplest to execute, but with the longest downtime
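A sketch of the LVM-assisted route, assuming the RAID 1 is already the physical volume backing a volume group (vg0 is a hypothetical name):
# Add the new RAID 10 to the volume group and migrate extents online
pvcreate /dev/md1
vgextend vg0 /dev/md1
pvmove /dev/md0 /dev/md1
# When all extents have moved, drop the old RAID 1 from the volume group
vgreduce vg0 /dev/md0
pvremove /dev/md0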
After conversion, validate your new RAID 10 performance:
# Benchmark read performance
hdparm -tT /dev/md1
# Test write speed
dd if=/dev/zero of=/mnt/raid10/testfile bs=1G count=4 oflag=direct
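Beyond raw throughput, it is worth running one scrub pass so md verifies that the mirror halves agree (standard md sysfs interface, array name /dev/md1 as above):
echo check > /sys/block/md1/md/sync_action
watch cat /proc/mdstat
# Should normally report 0 once the check has finished
cat /sys/block/md1/md/mismatch_cnt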