When examining the existing ZFS pool configuration with zpool status, we see:
  pool: unas
 state: ONLINE
  scan: scrub repaired 1.50M in 36h3m with 0 errors on Thu Jun 9 08:06:41 2016
config:

        NAME                                          STATE     READ WRITE CKSUM
        unas                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1VUU0LX  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7FSX6F9  ONLINE       0     0     0

errors: No known data errors
Before adding new drives, ensure they're properly recognized by the system:
lsblk
sudo fdisk -l
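To map the new disks to stable identifiers before touching them, you can list the by-id symlinks; the grep simply hides the per-partition entries:
# Match serial numbers to device nodes (partition links filtered out)
ls -l /dev/disk/by-id/ | grep -v -- -part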
If the drives carry leftover partition tables or filesystem signatures, wipe them first:
sudo wipefs -a /dev/sdX
sudo wipefs -a /dev/sdY
The correct command to expand the pool with a new mirrored pair is:
sudo zpool add unas mirror /dev/sdX /dev/sdY
Where sdX and sdY are your new drive identifiers. For better device persistence, consider using by-id paths:
sudo zpool add unas mirror /dev/disk/by-id/ata-NEW-DRIVE-1 /dev/disk/by-id/ata-NEW-DRIVE-2
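If you want to preview the resulting layout before committing, zpool add accepts -n, which prints the configuration it would create without modifying the pool (same placeholder drive names as above):
# Dry run: show the would-be layout, change nothing
sudo zpool add -n unas mirror /dev/disk/by-id/ata-NEW-DRIVE-1 /dev/disk/by-id/ata-NEW-DRIVE-2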
After adding the new VDEV, verify the pool status:
sudo zpool status unas
Expected output should show both mirror VDEVs:
  pool: unas
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        unas                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1VUU0LX  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7FSX6F9  ONLINE       0     0     0
          mirror-1                                    ONLINE       0     0     0
            ata-NEW-DRIVE-1                           ONLINE       0     0     0
            ata-NEW-DRIVE-2                           ONLINE       0     0     0

errors: No known data errors
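Beyond the topology, zpool list is a quick way to confirm that the pool's usable capacity grew by the size of the new mirror:
# SIZE and FREE should now include the new mirror's capacity
sudo zpool list unas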
When adding new VDEVs to a ZFS pool:
- New writes are striped across all VDEVs; existing data stays on the original mirror and is not rebalanced
- Each VDEV's performance characteristics affect the whole pool
- Adding more VDEVs generally increases IOPS capacity (see the iostat sketch below)
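To see how capacity and I/O actually spread across the two mirrors, zpool iostat with -v breaks the statistics down per VDEV; the 5 here is just an example refresh interval in seconds:
# Per-VDEV capacity and I/O statistics, refreshed every 5 seconds
sudo zpool iostat -v unas 5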
If you wanted to expand an existing mirror (rather than add a new VDEV), you would use:
sudo zpool attach unas existing-disk new-disk
But this creates a 3-way mirror rather than adding a new VDEV.
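As a side note, a disk attached this way can be detached again, which is useful if the 3-way mirror was only a temporary step; this is a sketch assuming new-disk is the device you previously attached:
# Remove one side of a mirror vdev (the remaining disks keep the data)
sudo zpool detach unas new-disk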
Common issues when expanding pools:
# If you get "cannot label 'sdX': failed to wipe partition table"
sudo sgdisk --zap-all /dev/sdX
# For "no such device in pool" errors
sudo zpool export unas
sudo zpool import -d /dev/disk/by-id unas
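If a disk still carries a ZFS label from a previous pool and refuses to be added even after wiping, zpool labelclear can remove the stale label; treat this as a last resort and double-check the device name, since it is destructive:
# Clear a leftover ZFS label from a disk that belonged to another pool
sudo zpool labelclear -f /dev/sdX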
For frequent pool expansions, consider creating a script:
#!/bin/bash
# Script to add mirrored VDEV to existing ZFS pool
POOL_NAME="unas"
DRIVE1="/dev/disk/by-id/ata-NEW-DRIVE-1"
DRIVE2="/dev/disk/by-id/ata-NEW-DRIVE-2"
# Verify drives exist
if [ ! -e "$DRIVE1" ] || [ ! -e "$DRIVE2" ]; then
    echo "Error: One or both drives not found"
    exit 1
fi
# Add mirrored VDEV
sudo zpool add "$POOL_NAME" mirror "$DRIVE1" "$DRIVE2"
# Verify the addition
sudo zpool status "$POOL_NAME"
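Assuming the script is saved as add-mirror.sh (a name chosen here just for illustration), running it is simply:
chmod +x add-mirror.sh
./add-mirror.sh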