The output from mdadm --detail shows all four RAID1 arrays (md0 through md3) in a degraded state with only one active device each. The second member of each mirror is reported as "removed" rather than failed, which explains why the provider's diagnostics found no disk errors.
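A quick way to confirm this picture before touching anything (assuming the missing member is the second disk, /dev/sdb) is to look at /proc/mdstat and the kernel log:
# Overview of all arrays; degraded mirrors show [2/1] and [U_]
cat /proc/mdstat
# Kernel log may show when and why the member was dropped (sdb assumed)
dmesg | grep -iE 'md[0-3]|sdb'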
From the output, we can map the RAID components:
md0 → /dev/sda1 (active); second member removed
md1 → /dev/sda2 (active); second member removed
md2 → /dev/sda3 (active); second member removed
md3 → /dev/sda4 (active); second member removed
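If you want to verify this mapping on your own system rather than read it off the summary, a small loop over the arrays works (a sketch only):
# Healthy members show "active sync", the missing one shows "removed"
for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
    echo "== $md"
    mdadm --detail "$md" | grep -E 'active sync|removed'
done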
The correct procedure involves identifying the missing drive (likely /dev/sdb in a two-disk setup) and adding its partitions back to each array:
# First verify the partitions exist on the second disk
fdisk -l /dev/sdb
# Then add each partition to its respective array
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sdb3
mdadm /dev/md3 --add /dev/sdb4
# Monitor rebuild progress
watch cat /proc/mdstat
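If the fdisk check shows no partitions on /dev/sdb (for example because the provider swapped in a blank replacement disk), the partition table must be copied from the healthy disk before the --add commands will work. A sketch, assuming /dev/sda is the healthy disk and /dev/sdb the empty replacement; double-check the device names, since this overwrites sdb's partition table:
# MBR (or GPT with recent util-linux): clone sda's layout onto sdb
sfdisk -d /dev/sda | sfdisk /dev/sdb
# GPT alternative using gdisk tools: replicate the table, then randomize GUIDs
sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids /dev/sdb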
For more complex scenarios, you can create a recovery script:
#!/bin/bash
# Re-add the second disk's partitions to their RAID1 arrays, one pair at a time.
set -euo pipefail

RAID_DEVICES=(
    "/dev/md0:/dev/sdb1"
    "/dev/md1:/dev/sdb2"
    "/dev/md2:/dev/sdb3"
    "/dev/md3:/dev/sdb4"
)

for pair in "${RAID_DEVICES[@]}"; do
    IFS=':' read -r md_device disk_partition <<< "$pair"
    echo "Adding $disk_partition to $md_device"
    mdadm "$md_device" --add "$disk_partition"
done

echo "Rebuild started. Monitor with: watch cat /proc/mdstat"
After re-adding the drives, verify the rebuild status:
# Check detailed status
mdadm --detail /dev/md[0-3]
# Verify the RAID state has changed from [U_] to [UU]
cat /proc/mdstat
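If you would rather have the shell block until the resync finishes instead of watching /proc/mdstat, mdadm can wait on each array:
# Returns once any resync/recovery on the given arrays has completed
mdadm --wait /dev/md0 /dev/md1 /dev/md2 /dev/md3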
Consider these monitoring improvements (a minimal example of the first two follows the list):
- Set up email alerts in /etc/mdadm/mdadm.conf
- Implement regular mdadm --monitor checks
- Add RAID status checks to your monitoring system
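A minimal sketch of both pieces; the mail address is a placeholder:
# /etc/mdadm/mdadm.conf – destination for degraded/failed-array mails (placeholder address)
MAILADDR admin@example.com
# One-shot run of the monitor that sends a test alert to verify the mail path
mdadm --monitor --scan --oneshot --test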
Your system shows all four RAID arrays (md0-md3) in a degraded state with only one active device each (sda1-sda4 respectively). The [U_] notation indicates the second drive in each mirror pair is missing, though importantly, none are marked as failed.
mdadm -D /dev/md0
State : clean, degraded
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
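The "removed, not failed" reading can be double-checked in the same output; these counters come straight from mdadm --detail:
# "Failed Devices : 0" confirms the member was dropped, not marked faulty
mdadm --detail /dev/md0 | grep -E 'Failed Devices|Working Devices'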
First confirm the expected second drive identifier (typically sdb in two-drive systems):
lsblk -o NAME,SIZE,ROTA,MODEL | grep -v loop
NAME SIZE ROTA MODEL
sda 1.8T 1 WDC WD2003FYYS-02W0B0
sdb 1.8T 1 WDC WD2003FYYS-02W0B0 # This should be your missing drive
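Before re-adding, it is also worth checking that sdb's partition layout still mirrors sda and that its old superblock (if present) belongs to the same array, with an event counter that merely lags behind. This assumes the missing member of md0 is /dev/sdb1:
# Layout should match sda; Array UUID should match, Events will be lower
lsblk -o NAME,SIZE,TYPE /dev/sda /dev/sdb
mdadm --examine /dev/sdb1 | grep -E 'Array UUID|Events'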
For each degraded array, execute these commands after confirming the drive is physically present:
# For md0 (/dev/sda1 mirror)
mdadm /dev/md0 --add /dev/sdb1
# For md1 (/dev/sda2 mirror)
mdadm /dev/md1 --add /dev/sdb2
# For md2 (/dev/sda3 mirror)
mdadm /dev/md2 --add /dev/sdb3
# For md3 (/dev/sda4 mirror)
mdadm /dev/md3 --add /dev/sdb4
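If the member was only briefly detached and the arrays carry a write-intent bitmap, you can try --re-add first; it can avoid a full resync, and mdadm simply refuses when the old metadata is unusable, in which case fall back to the plain --add commands above:
# Optional: attempt a re-add first (only succeeds if the old metadata is usable)
mdadm /dev/md0 --re-add /dev/sdb1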
Track the resynchronization process:
watch -n 5 cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb4[2] sda4[0]
1839089920 blocks super 1.2 [2/1] [U_]
[=>...................] recovery = 7.8% (143726336/1839089920)
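Rebuild speed is capped by the md sysctls; on a mostly idle server you can raise the limits for the duration of the resync (the values below are only examples):
# Current limits (KiB/s per device)
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Example: raise them while the rebuild runs
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000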
After completion, confirm all arrays show [UU]:
mdadm --detail /dev/md0 | grep -E 'State :|active sync'
State : clean
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
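A one-liner covers all four arrays at once (assuming no other md arrays on the box):
# Should print 4 once every mirror is back to [UU]
grep -c '\[UU\]' /proc/mdstat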
Add these lines to /etc/mdadm/mdadm.conf for automatic assembly:
ARRAY /dev/md0 metadata=1.2 name=rescue:0 UUID=872ad258:c42ccb36:e9e19c96:98b55ee9
ARRAY /dev/md1 metadata=1.2 name=rescue:1 UUID=18cb39fc:9eaea61c:0074a6c2:661b5862
ARRAY /dev/md2 metadata=1.2 name=rescue:2 UUID=eb9be750:7ff778b4:31fd7ce9:9d86d191
ARRAY /dev/md3 metadata=1.2 name=rescue:3 UUID=c9b748ef:332d3bf9:5fa8fef1:5b433b0a
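Rather than typing the UUIDs by hand, the ARRAY lines can be regenerated from the running arrays; review the output before appending so you don't end up with duplicate entries:
# Print ARRAY lines for all currently assembled arrays
mdadm --detail --scan
# If they look right, append them to the config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf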
Update the initramfs after the configuration changes (on Debian/Ubuntu-based systems):
update-initramfs -u