The situation you're encountering is common when dealing with legacy metadata (version 0.90) in mdadm RAID arrays. The key indicators in your mdadm --detail output are:
State : clean, degraded
Active Devices : 1
Working Devices : 3
Spare Devices : 2
This reveals your array is operating in a degraded state with only one active device, while the two new drives are incorrectly being treated as spares rather than active members.
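Before changing anything, it is worth confirming that picture with a few read-only commands; the device names below follow this example and may differ on your system.
# Read-only sanity checks before touching the array
cat /proc/mdstat                                   # a degraded RAID1 shows e.g. [3/1] [__U]
sudo mdadm --detail /dev/md1                       # full per-slot listing
sudo mdadm --examine /dev/sdc2 | grep -E 'State|Events|Update Time'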
For version 0.90 metadata arrays, you'll need to explicitly remove the failed devices before adding new ones:
# First mark the old device as failed (if not already). Substitute the old
# drive's partition for /dev/sdX2; do not fail the surviving member /dev/sdc2.
sudo mdadm /dev/md1 --fail /dev/sdX2
# Then remove it from the array
sudo mdadm /dev/md1 --remove /dev/sdX2
However, since the old devices already show as "removed" yet still occupy their slots, a more thorough approach is needed.
When dealing with persistent removed devices, the most reliable solution is to rebuild the array configuration:
# Stop the array first
sudo mdadm --stop /dev/md1
# Then reassemble with the correct devices
sudo mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2
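Before using --force it is sensible to compare the event counters in the member superblocks, since forcing in a badly out-of-date member can surface stale data. A rough check along these lines (device names as in this example):
# Compare event counters and update times across the members
for d in /dev/sda2 /dev/sdb2 /dev/sdc2; do
    echo "== $d =="
    sudo mdadm --examine "$d" | grep -E 'Events|Update Time'
done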
Your 0.90 metadata format is quite old. Consider upgrading to 1.2 metadata for better management:
# Record the current array definition to a backup file
sudo mdadm --detail --scan | sudo tee /etc/mdadm/mdadm.conf.bak
# Stop and recreate with new metadata
# WARNING: version 1.2 metadata uses a different data offset than 0.90, so
# recreating the array moves the data area. Take a full data backup first and
# be prepared to restore onto the new array.
sudo mdadm --stop /dev/md1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=3 --metadata=1.2 /dev/sda2 /dev/sdb2 /dev/sdc2
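Once the array is back in the shape you want, it also helps to update the configuration so it assembles correctly at boot. A minimal sketch, assuming the Debian/Ubuntu layout (/etc/mdadm/mdadm.conf and update-initramfs); adjust paths for your distribution:
# Append the current array definition and refresh the initramfs
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u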
After reconstruction, verify your array status:
cat /proc/mdstat
sudo mdadm --detail /dev/md1
sudo mdadm --examine /dev/sd[a-c]2
You should see all devices in sync without any removed or spare designations.
To maintain array health:
- Regularly check /proc/mdstat
- Set up email alerts in mdadm.conf (see the example after this list)
- Consider using LVM on top of RAID for easier management
- Document your RAID configuration details
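As a sketch of the email-alert setup mentioned above (the address is a placeholder, not taken from your configuration):
# In /etc/mdadm/mdadm.conf: where alert mails should go
MAILADDR admin@example.com
# One-off test: have the monitor send a TestMessage event for each array
sudo mdadm --monitor --scan --oneshot --test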
When dealing with mdadm RAID1 arrays, a common frustration occurs when replaced drives stubbornly remain listed as removed while the new drives only appear as spares. This typically happens with metadata version 0.90 arrays where device removal wasn't handled cleanly.
Let's analyze the critical components of the array status:
$ mdadm --detail /dev/md1
[...]
Active Devices : 1
Working Devices : 3
Spare Devices : 2
[...]
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8       34        2      active sync   /dev/sdc2
       3       8       18        -      spare   /dev/sdb2
       4       8        2        -      spare   /dev/sda2
The system shows two removed devices (0 and 1), one active device (2), and two spares (3 and 4) that should be active members.
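Before issuing any --fail or --remove, double-check which partition actually holds the surviving copy, since device letters can shuffle between boots. A read-only cross-check (device names as in this example) might be:
# Confirm the device-to-slot mapping before removing anything
sudo mdadm --detail /dev/md1
sudo mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 | grep -E '^/dev/|State|Events'
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT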
Here's the step-by-step method to properly rebuild the array:
# First, take the new drives out of the array again; in this output they are
# the spares in slots 3 and 4, not the "removed" slots
sudo mdadm /dev/md1 --fail /dev/sd[ab]2
# Remove them completely
sudo mdadm /dev/md1 --remove /dev/sd[ab]2
# Add the new devices back so they can join as active members
sudo mdadm /dev/md1 --add /dev/sda2
sudo mdadm /dev/md1 --add /dev/sdb2
# Force the array back to three active members so the spares are pulled in
sudo mdadm --grow /dev/md1 --raid-devices=3 --force
The 0.90 metadata format has specific limitations. For more reliable operations, consider upgrading to 1.2:
# Take a full data backup first; recreating the array is destructive.
# Also record the current array definition from the on-disk superblocks:
sudo mdadm --examine --scan | sudo tee /etc/mdadm/mdadm.conf.bak
# Stop the array
sudo mdadm --stop /dev/md1
# Recreate with modern metadata
sudo mdadm --create /dev/md1 --level=1 --raid-devices=3 \
--metadata=1.2 /dev/sda2 /dev/sdb2 /dev/sdc2
After reconstruction, monitor progress and verify:
watch -n 5 cat /proc/mdstat
# Expected final output should show:
# md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
# 1454645504 blocks [3/3] [UUU]
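If you took the recreate-with-1.2 route, it is also worth confirming the filesystem is still found before mounting or restoring. A non-destructive check, assuming a filesystem sits directly on /dev/md1, could be:
# Read-only verification of the array contents after recreation
sudo blkid /dev/md1          # should report the expected filesystem type and UUID
sudo fsck -n /dev/md1        # -n: check only, change nothing
If blkid reports nothing, the data area has moved and the contents need to be restored from the backup taken earlier.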
- Always update /etc/mdadm/mdadm.conf after array changes
- Consider using UUID references instead of device names (see the example after this list)
- For critical arrays, upgrade to metadata 1.2 or newer
- Implement monitoring (e.g., mdadm --monitor or smartd)
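For the UUID-based references suggested above, the ARRAY line in mdadm.conf can be keyed on the array UUID instead of device names; the UUID below is a placeholder, and the real value comes from mdadm --detail --scan:
# Example mdadm.conf entry (placeholder UUID; substitute your own)
ARRAY /dev/md1 UUID=0a1b2c3d:4e5f6a7b:8c9d0e1f:23456789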