How to Resolve ZFS Pool Auto-Expand Failure on Linux After SAN Storage Expansion


When working with ZFS on Linux (particularly older kernels like 2.6.32), you might encounter situations where your ZFS pool refuses to auto-expand even after:

  • Increasing the underlying SAN storage capacity
  • Setting autoexpand=on
  • Attempting zpool online -e
  • Exporting/importing the pool
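
A quick first check is what ZFS itself believes about the device. On reasonably recent ZFS releases, the read-only expandsize property (the EXPANDSZ column) shows capacity the pool can see but has not yet claimed; a minimal check, using the pool from this scenario:

# Pool-side view: autoexpand setting and any unclaimed capacity
zpool get autoexpand,expandsize dfbackup
zpool list -o name,size,expandsize,health dfbackup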

The user's scenario shows several important details:

zpool list shows:
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH
dfbackup     214G   207G  7.49G    96%  1.00x  ONLINE

While fdisk reports the enlarged device:

Disk /dev/disk/by-id/virtio-sbs-XLPH83: 268.4 GB

In other words, the kernel already sees roughly 250 GiB, yet the pool still reports only 214G.

Several factors could prevent ZFS from recognizing the expanded space:

  1. Partitioning Issues: The disk uses GPT partitioning but is being checked with fdisk (which has limited GPT support)
  2. Kernel Limitations: Older kernels may have different behavior with storage resizing
  3. Device Mapper Layer: SAN devices often go through device mapper which might need refreshing

Here's a comprehensive approach to resolve this:

1. Verify Actual Device Size

Use proper GPT-aware tools:

# Use parted instead of fdisk
parted /dev/disk/by-id/virtio-sbs-XLPH83 print
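
It can also help to compare parted's view with what the kernel currently has loaded; this sketch assumes util-linux's partx is available:

# Partition table as the kernel currently sees it; if this differs
# from parted's output, the in-kernel view is stale
partx --show /dev/disk/by-id/virtio-sbs-XLPH83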

2. Refresh Device Mapper

For SAN/LVM devices:

# Find the real node behind the by-id symlink
readlink -f /dev/disk/by-id/virtio-sbs-XLPH83

# Rescan the underlying block device (SCSI only; virtio-blk disks
# on old kernels may need a guest reboot instead)
echo 1 > /sys/class/block/sdX/device/rescan

# Or for multipath (use the map name, e.g. mpathX, not the by-id alias):
multipathd -k"resize map mpathX"
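
If no rescan hook exists for the device, asking the kernel to re-read the partition table sometimes suffices; both tools below are standard (parted and util-linux respectively):

# Re-read the partition table without rebooting
partprobe /dev/disk/by-id/virtio-sbs-XLPH83

# Equivalent fallback
blockdev --rereadpt /dev/disk/by-id/virtio-sbs-XLPH83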

3. Force ZFS to Re-examine the Device

Try this sequence (do not run zpool labelclear on a vdev you still need: it wipes the pool's on-disk labels and makes the pool unimportable):

# Export the pool
zpool export dfbackup

# Re-read the partition table while the pool is exported
partprobe /dev/disk/by-id/virtio-sbs-XLPH83

# Reimport, scanning the by-id directory
zpool import -d /dev/disk/by-id/ dfbackup

# Online with expansion flag
zpool online -e dfbackup /dev/disk/by-id/virtio-sbs-XLPH83
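
Afterwards, verify that the pool reports the larger size, or at least a non-zero EXPANDSZ:

zpool list dfbackup
zpool get expandsize dfbackup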

4. Alternative: Backup and Recreate

If all else fails, replicate to a second disk (the device that still holds dfbackup cannot host the new pool at the same time):

# Create the new pool on a spare device (placeholder name)
zpool create newpool /dev/disk/by-id/<spare-device>

# Snapshot, then send/receive the data
zfs snapshot -r dfbackup@migrate
zfs send -R dfbackup@migrate | zfs receive newpool/migrated
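
If the goal is a drop-in replacement, a pool can be renamed at import time; a sketch, to be run only after the copy has been verified:

# Retire the old pool and take over its name
zpool destroy dfbackup
zpool export newpool
zpool import newpool dfbackup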

Consider these best practices:

  • Use autoexpand=on during pool creation
  • Prefer newer ZFS versions and kernels
  • For SAN devices, verify the new size at every layer (HBA, OS, multipath); see the sketch below
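
For the last point, comparing the size reported at each layer quickly shows where a resize got lost; a sketch assuming a SCSI path sdX and a multipath map mpathX (both placeholder names):

# Raw SCSI path vs. multipath map; the numbers should agree after a rescan
blockdev --getsize64 /dev/sdX
blockdev --getsize64 /dev/mapper/mpathX
multipathd -k"show maps"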

To recap: when dealing with ZFS storage expansion on Linux, particularly with SAN-backed devices, several factors can prevent the pool from recognizing the new capacity even with autoexpand=on. The problem usually lies in a stale partition table, GPT vs. MBR handling, or kernel-level device recognition.

# Verify current pool status
zpool list
zpool get autoexpand dfbackup

# Check underlying device capacity
lsblk -b /dev/sdX
blockdev --getsize64 /dev/sdX

# Inspect partition table (use parted for GPT)
parted /dev/sdX unit s print
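
If the partition table has not grown with the disk, the last partition's end sector will fall well short of the device's total sector count; comparing the two makes the mismatch obvious:

# Whole-disk size in 512-byte sectors
blockdev --getsz /dev/sdX

# End sector of each partition, for comparison
parted -s /dev/sdX unit s print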

1. Partition Table Limitations: If the device uses GPT (as in your case), old fdisk builds cannot read the real partition table (they only see the protective MBR), so their output can be misleading. Always use parted or gdisk:

parted /dev/disk/by-id/virtio-sbs-XLPH83 unit s print

2. Kernel Device Rescan: Some SAN systems require explicit rescan commands:

# For SCSI devices
echo 1 > /sys/class/block/sdX/device/rescan

# Alternative via sg3_utils: look for resized disks and update them
rescan-scsi-bus.sh -s

When standard methods fail, this sequence often works:

  1. Export the pool: zpool export dfbackup
  2. Rescan the device: partprobe /dev/disk/by-id/virtio-sbs-XLPH83
  3. Destroy and recreate the partition table (backup first!). The pool survives only if the new partition starts at the same sector as the old one. Note that parted has no "zfs" flag; if you want the ZFS type GUID, set type code bf01 with gdisk afterwards:
    parted /dev/disk/by-id/virtio-sbs-XLPH83
    (parted) mklabel gpt
    (parted) mkpart zfs 2048s 100%
    (parted) quit
  4. Reimport with forced expansion: zpool import -d /dev/disk/by-id/ -f -o autoexpand=on dfbackup
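
If the pool still shows the old size after the reimport, claim the space explicitly and check the result:

zpool online -e dfbackup /dev/disk/by-id/virtio-sbs-XLPH83
zpool list dfbackup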

If the issue persists, check kernel messages for device recognition problems:

dmesg | grep -i capacity
journalctl -k --since "1 hour ago" | grep -i block

For virtualized environments (like your virtio device), ensure the hypervisor has properly exposed the new capacity to the guest OS.
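
On a KVM/QEMU host, for example, the disk can be grown and the guest notified via libvirt; a sketch assuming a domain named guest1 and a virtio disk target vdb (both hypothetical names):

# On the host: resize the virtual disk and notify the guest
virsh blockresize guest1 vdb 250G

# Inside the guest: confirm the kernel picked up the new size
cat /sys/block/vdb/size    # reported in 512-byte sectors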

As a last resort, you can create a new pool on a spare device and replicate the data into it (the disk that still holds dfbackup cannot host the new pool while the copy is running):

# Create the new pool on a spare disk (placeholder device name)
zpool create newpool /dev/disk/by-id/<spare-device>

# Replicate data via zfs send/recv
zfs snapshot -r dfbackup@migrate
zfs send -R dfbackup@migrate | zfs recv newpool/backup
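
To keep downtime short, the bulk copy can run while the pool is still in use, followed by a small incremental send once writes have stopped; a sketch using a second hypothetical snapshot name:

# Final incremental catch-up after services are stopped
zfs snapshot -r dfbackup@final
zfs send -R -i dfbackup@migrate dfbackup@final | zfs recv -F newpool/backup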