Recently I upgraded my Ubuntu 11.04 server from 2x160GB drives to 2x500GB drives in a RAID1 configuration. While the physical disks show the correct sizes in fdisk and sfdisk, the RAID array stubbornly remains at its original size:
# mdadm --examine /dev/sdb3
  Used Dev Size : 152247936 (145.19 GiB 155.90 GB)
     Array Size : 152247936 (145.19 GiB 155.90 GB)
The root cause lies in the RAID superblock metadata, which retains the original device size. This is a common issue with metadata version 0.90 (shown as "Version : 0.90.00" in the --examine output; note that the magic number a92b4efc is common to all md superblock formats and does not indicate the version).
Current status shows:
# cat /proc/mdstat
md2 : active raid1 sdb3[0] sda3[1]
      152247936 blocks [2/2] [UU]
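Before modifying anything, it's worth confirming the mirror is fully synced; growing a degraded array only compounds the problem. A minimal pre-flight sketch (it parses a saved copy of /proc/mdstat so the logic is easy to test offline; on a live system read /proc/mdstat directly):

```shell
# Refuse to proceed unless md2 reports both members active ([UU]).
# /tmp/mdstat.sample stands in for /proc/mdstat in this sketch.
cat > /tmp/mdstat.sample <<'EOF'
md2 : active raid1 sdb3[0] sda3[1]
      152247936 blocks [2/2] [UU]
EOF

if grep -A1 '^md2 ' /tmp/mdstat.sample | grep -q '\[UU\]'; then
    echo "md2 healthy"
else
    echo "md2 degraded - let it resync before growing" >&2
fi
```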
Here's how to properly resize the array and filesystem:
# First, stop the array (unmount it first; if it holds the root
# filesystem, boot from rescue media)
mdadm --stop /dev/md2
# Reassemble with updated size detection
mdadm --assemble /dev/md2 /dev/sd[ab]3 --update=devicesize
# Verify the new size is detected
mdadm --examine /dev/sd[ab]3 | grep "Dev Size"
# Grow the array to the maximum possible size
mdadm --grow /dev/md2 --size=max
# Check the new size in /proc/mdstat
cat /proc/mdstat
# Finally, resize the filesystem
resize2fs /dev/md2
If you encounter issues:
- Ensure all partitions are properly aligned (check with parted)
- Verify both disks show identical partition layouts
- Consider updating to metadata version 1.2 for future flexibility
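The second check, identical partition layouts, can be done by diffing sfdisk dumps. A sketch with sample dumps inlined (on a live system generate them with `sfdisk -d /dev/sda` and `sfdisk -d /dev/sdb`; the start/size values here are illustrative):

```shell
# Sample sfdisk dumps; replace with real output from `sfdisk -d /dev/sdX`.
cat > /tmp/sda.dump <<'EOF'
/dev/sda3 : start=  1060290, size=304496010, Id=fd
EOF
cat > /tmp/sdb.dump <<'EOF'
/dev/sdb3 : start=  1060290, size=304496010, Id=fd
EOF

# Strip the device names so only the start/size/type columns are compared.
sed 's,^[^:]*:,,' /tmp/sda.dump > /tmp/sda.cols
sed 's,^[^:]*:,,' /tmp/sdb.dump > /tmp/sdb.cols

if diff -q /tmp/sda.cols /tmp/sdb.cols >/dev/null; then
    echo "layouts match"
else
    echo "layouts differ - fix partitioning before assembling" >&2
fi
```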
Note that mdadm cannot convert superblock formats in place with --grow. Newer mdadm releases can convert 0.90 metadata to 1.0 at assembly time, since both formats keep the superblock at the end of the device:

# To convert 0.90 metadata to 1.0 (be cautious - back up first!)
mdadm --assemble /dev/md2 /dev/sd[ab]3 --update=metadata

Moving to 1.2, which stores the superblock near the start of the device, generally means backing up, re-creating the array, and restoring.
When setting up new arrays, consider using:
mdadm --create /dev/md0 --level=1 --metadata=1.2 --raid-devices=2 /dev/sd[ab]1
This ensures the array will automatically recognize larger devices when replaced.
When upgrading RAID1 arrays with larger drives, many administrators encounter the same frustrating scenario:
# mdadm --grow /dev/mdX --size=max
# resize2fs /dev/mdX
# Nothing changes!
The root cause lies in how mdadm stores device size information in the metadata. When examining your array:
# mdadm --examine /dev/sdb3
Used Dev Size : 152247936 (145.19 GiB 155.90 GB)
Array Size : 152247936 (145.19 GiB 155.90 GB)
This metadata was created with the original 160GB drives and persists even after physical disk replacement.
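The GiB and GB figures in that output follow directly from the block count (one block is 1 KiB here). A quick awk sketch of the conversion, truncating to two decimals as the output above does:

```shell
# Convert the 1 KiB block count reported by mdadm into GiB and GB.
awk 'BEGIN {
    kib = 152247936                              # blocks from mdadm --examine
    gib = int(kib / 1048576 * 100) / 100         # binary gigabytes, truncated
    gb  = int(kib * 1024 / 1e9 * 100) / 100      # decimal gigabytes, truncated
    printf "%.2f GiB %.2f GB\n", gib, gb         # -> 145.19 GiB 155.90 GB
}'
```

The same arithmetic applied after the grow lets you confirm the new array size is what the larger partitions should yield.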
Here's how to properly expand your RAID1 array:
First, verify that the component partitions now show the larger size (if /proc/partitions still reports the old 152248005-block figures, the kernel hasn't re-read the new partition table - run partprobe or reboot before continuing):
# grep "sd[ab]3" /proc/partitions
8 3 484383813 sda3
8 19 484383813 sdb3
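The gap between a component's raw size and the Used Dev Size that --examine reports is the 0.90 superblock reservation: the component size is rounded down to a 64 KiB multiple and one 64 KiB superblock is set aside at the end. Checking this against the original figures (152248005-block partitions, 152247936-block Used Dev Size):

```shell
# 0.90 metadata: usable size = (component size rounded down to 64 KiB) - 64 KiB.
# Sizes are in 1 KiB blocks, so round to a multiple of 64 blocks.
awk 'BEGIN {
    part = 152248005                 # partition size from /proc/partitions (KiB)
    used = int(part / 64) * 64 - 64  # round down, reserve one 64 KiB superblock
    print used                       # -> 152247936
}'
```

This reproduces exactly the Used Dev Size shown by mdadm --examine, which is why the superblock, not the partition table, is what must be updated.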
Now the critical step - reassemble with updated metadata:
# mdadm --stop /dev/md2
# mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 --update=devicesize
Then grow the array:
# mdadm --grow /dev/md2 --size=max
# resize2fs /dev/md2
Check the new array size:
# mdadm --detail /dev/md2 | grep Size
Array Size : 484383744 (461.94 GiB 496.00 GB)
Confirm filesystem expansion:
# df -h /dev/md2
Filesystem Size Used Avail Use% Mounted on
/dev/md2 455G 134G 298G 31% /
For systems with multiple partitions in RAID1, you'll need to repeat this process for each array. The key difference between the metadata versions:
- 0.90 metadata: stores the device size in the superblock
- 1.x metadata: uses the device size dynamically
If you encounter "device too small" errors during assembly, ensure:
1. Partitions are properly aligned
2. Both drives show identical sizes in /proc/partitions
3. No residual metadata from previous configurations
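Item 2 can be scripted. A sketch that checks both mirror halves report the same size, reading a saved sample here (point it at /proc/partitions directly on a live system):

```shell
# Verify sda3 and sdb3 report identical sizes (column 3 of /proc/partitions).
cat > /tmp/partitions.sample <<'EOF'
   8        3  152248005 sda3
   8       19  152248005 sdb3
EOF

distinct=$(awk '$4 == "sda3" || $4 == "sdb3" { print $3 }' /tmp/partitions.sample | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "component sizes match"
else
    echo "component sizes differ - fix partitioning first" >&2
fi
```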
For automated solutions in deployment scripts:
#!/bin/bash
# Grow an md RAID1 array after its member disks have been replaced
# with larger ones. The array must be stopped, so run this from a
# rescue environment if it holds the root filesystem.
set -e

MDDEV=/dev/md2
PARTS="/dev/sda3 /dev/sdb3"

mdadm --stop $MDDEV
mdadm --assemble $MDDEV $PARTS --update=devicesize
mdadm --grow $MDDEV --size=max
resize2fs $MDDEV
Always backup critical data before array modifications. The --update=devicesize parameter is particularly important when:
- Migrating from smaller to larger drives
- Changing RAID levels that affect storage capacity
- Recovering arrays after partial failures
For reference, this solution applies to:
- Linux kernels 2.6.32+
- mdadm versions 3.2.5+
- ext3 and ext4 filesystems (resized with resize2fs)