When working with ZFS on Linux systems, particularly in cloud environments like Google Cloud Platform, administrators often encounter situations where disk capacity expansions don't automatically propagate to the ZFS pool. The typical scenario involves:
# lsblk output showing unallocated space
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb        8:16   0  150G  0 disk
├─sdb1     8:17   0  100G  0 part
└─sdb9     8:25   0    8M  0 part
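A quick way to confirm the mismatch is to compare the whole-disk and partition sizes in bytes, alongside what the pool currently reports (device and pool names as in this example):
# Device vs. partition sizes in bytes
lsblk -b -o NAME,SIZE,TYPE /dev/sdb
# Capacity the pool currently sees
zpool list zdata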
Before attempting any resizing operations, verify these critical components:
# Verify autoexpand is enabled
zpool get autoexpand zdata
# Check current pool status
zpool status zdata
# Confirm disk partition layout
fdisk -l /dev/sdb
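If the autoexpand check shows the property is off, turn it on before resizing; a minimal sketch using the pool name from this example:
# Enable automatic vdev expansion for the pool (the property persists)
zpool set autoexpand=on zdata
# Confirm the new setting
zpool get autoexpand zdata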
The complete solution involves both partition table modification and ZFS pool operations:
1. Removing Unnecessary Partitions
First eliminate any partition that might block the expansion. In this case that is partition 9, the small reserved partition ZFS creates at the end of the disk, which sits between partition 1 and the new free space:
parted /dev/sdb rm 9
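Because removing a partition is destructive, it is worth saving a copy of the GPT before running the command above; a hedged sketch using sgdisk from the gdisk package (the backup path is arbitrary):
# Save the current partition table; it can be restored later with sgdisk --load-backup
sgdisk --backup=/root/sdb-gpt.backup /dev/sdb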
2. Resizing the Primary Partition
Extend the main partition to utilize all available space:
parted /dev/sdb resizepart 1 100%
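Before touching ZFS, it is worth confirming that the partition now spans the disk; for example:
# Print the updated partition table
parted /dev/sdb unit GiB print
# Ask the kernel to re-read the table if lsblk still shows the old size
partprobe /dev/sdb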
3. Triggering ZFS Pool Expansion
Execute these commands in sequence to activate the new space:
# Online the disk with expansion flag
zpool online -e zdata /dev/sdb1
# Export and reimport the pool
zpool export zdata
zpool import zdata
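The export step fails if any dataset in the pool is still in use; a quick, hedged way to see what is mounted before exporting (pool name as above):
# List datasets with their mountpoints and mount status
zfs list -r -o name,mountpoint,mounted zdata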
Confirm the changes took effect:
# Check new partition layout
lsblk
# Verify pool recognizes new capacity
zpool list zdata
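zpool list can also show how much space a vdev could still claim; if EXPANDSZ is anything other than a dash after these steps, the pool has not yet picked up the new capacity. For example:
# SIZE should now be close to 150G and EXPANDSZ should show -
zpool list -o name,size,expandsz,free zdata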
When working with Google Cloud Persistent Disks:
- Always verify disk resizing completed at the GCP level first (a gcloud sketch follows this list)
- The guest OS may need a reboot to detect the new disk geometry
- Consider using growpart as an alternative to parted
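A hedged example of the GCP-side resize; the disk name and zone below are placeholders, not values from the original scenario:
# Grow the persistent disk itself; the partition and pool steps above are still required afterwards
gcloud compute disks resize my-zfs-disk --size=150GB --zone=us-central1-a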
For frequent operations, consider this bash script:
#!/bin/bash
# Expand a single-disk ZFS pool after the underlying disk has been enlarged
set -euo pipefail

DISK=/dev/sdb
POOL=zdata

# Remove the reserved partition and grow partition 1 to fill the disk
parted -s "$DISK" rm 9
parted -s "$DISK" resizepart 1 100%

# Refresh ZFS: expand the vdev, then cycle the pool
zpool online -e "$POOL" "${DISK}1"
zpool export "$POOL"
zpool import "$POOL"

# Verification
zpool list "$POOL"
When working with ZFS on cloud platforms like Google Cloud, disk expansion often requires specific steps beyond just resizing the underlying storage. Here's what's happening in this scenario:
# Current disk layout shows unused space
lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb        8:16   0  150G  0 disk
├─sdb1     8:17   0  100G  0 part
└─sdb9     8:25   0    8M  0 part
Before proceeding, ensure:
- Autoexpand is enabled: zpool set autoexpand=on zdata
- The pool isn't heavily fragmented (your 49% is acceptable)
- No active snapshots are blocking operations (a quick check is sketched after this list)
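A hedged way to list any snapshots that exist on the pool before starting (pool name as in the question):
# Show all snapshots under the zdata pool
zfs list -t snapshot -r zdata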
Here's the complete procedure that worked:
# First, remove the unnecessary partition (sdb9)
parted /dev/sdb rm 9
# Then resize the main partition to 100% of available space
parted /dev/sdb resizepart 1 100%
# Inform ZFS about the new capacity
zpool online -e zdata /dev/sdb1
# Optional but recommended: export and reimport the pool
zpool export zdata
zpool import zdata
# Verify the new capacity
zpool list zdata
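Beyond zpool list, checking the dataset view confirms the new space is actually usable by filesystems; for example:
# AVAIL should grow by roughly the added 50G, minus pool overhead
zfs list -r -o name,used,avail zdata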
When working with Google Persistent Disks:
- Always resize the disk first in GCP Console
- Wait 2-3 minutes for changes to propagate
- Use growpart if parted fails:
# Install cloud-utils if needed
apt-get install cloud-utils
# Expand partition
growpart /dev/sdb 1
If the pool doesn't reflect the new size:
- Check the kernel partition table: partprobe /dev/sdb
- Verify your ZFS version recognizes the autoexpand property: zpool get autoexpand zdata
- Try a full reboot if the online methods fail (or see the rescan sketch below)
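On GCP's virtio-scsi persistent disks, the kernel can also be told to rescan the device without a full reboot; a hedged sketch (the sdb device matches this example):
# Force a rescan of the SCSI device so the kernel sees the new size
echo 1 > /sys/class/block/sdb/device/rescan
# Then re-read the partition table
partprobe /dev/sdb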
Remember that ZFS expansion works at the vdev level, so this method applies specifically to single-disk configurations like this example.