How to Migrate ZFS Filesystems Between Pools in Solaris 10 While Maintaining Data Integrity


When expanding storage capacity on Solaris 10 systems, administrators often need to relocate ZFS filesystems from the root pool (rpool) to secondary storage pools. The key requirements are:

  • Minimal downtime
  • Preservation of all ZFS properties
  • Maintenance of data integrity
  • Ability to handle large datasets efficiently
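
Before starting, take stock of what will move and confirm the destination pool has room. A quick inventory, using the same dataset and pool names as the examples below (adjust to your environment):

# List the source dataset and its existing snapshots
zfs list -r rpool/data
zfs list -r -t snapshot rpool/data

# Confirm capacity and health of the destination pool
zpool list newpool
zpool status newpool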

The most reliable method uses ZFS send/receive. It works while the filesystems stay mounted, though stopping any services that write to the data before taking the final snapshot is recommended for consistency.

# Create snapshot of source filesystem
zfs snapshot rpool/data@migration

# Send snapshot to new pool (basic method)
zfs send rpool/data@migration | zfs receive newpool/data

# After the initial copy, transfer only the changes with an incremental send
zfs snapshot rpool/data@current
zfs send -i rpool/data@migration rpool/data@current | zfs receive -F newpool/data
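
After each send, confirm the snapshot actually arrived on the target before removing anything from the source (names as in the examples above):

# The received snapshots should be listed under the new pool
zfs list -r -t snapshot newpool/data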

For production systems, consider these enhanced approaches:

# 1. Preserving dataset properties (-p applies to zfs send; receive has no -p)
zfs send -p rpool/data@migration | zfs receive -d newpool

# 2. Using mbuffer (a third-party tool) to buffer network transfers
zfs send rpool/data@migration | mbuffer -s 128k -m 1G | \
ssh remotehost "mbuffer -s 128k -m 1G | zfs receive newpool/data"

# 3. Encrypted transfer
zfs send rpool/data@migration | gpg -e -r admin@domain | \
ssh remotehost "gpg -d | zfs receive newpool/data"

After migration:

# Spot-check that file contents match between the old and new mountpoints
diff -r /rpool/data /newpool/data

# Compare properties
zfs get all rpool/data
zfs get all newpool/data

# Cleanup old filesystem (when ready)
zfs destroy -r rpool/data
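
If applications expect the original path, the migrated dataset can take over that mountpoint once the old one is gone (this assumes the source used the default /rpool/data mountpoint; adjust to your layout):

# Point the new dataset at the old path
zfs set mountpoint=/rpool/data newpool/data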

For cases where send/receive isn't suitable:

  • zfs rename: Only works within the same pool
  • Third-party tools: Like znapzend for automated migrations
  • Physical device relocation: For entire pool migrations (see the export/import sketch below)
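
For the last case, a whole pool moves by exporting it, physically relocating its disks, and importing it on the destination system; a minimal sketch, assuming the pool is named datapool:

# On the source host: unmount all datasets and release the pool
zpool export datapool

# After moving the disks, on the destination host:
zpool import datapool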

The following expanded walkthrough covers the same send/receive migration in more depth, including recursive snapshots, a minimal-downtime cutover, and ongoing replication. The primary challenges remain:

  • Maintaining data integrity during transfer
  • Minimizing service downtime
  • Preserving all ZFS properties and snapshots

Send/receive remains the most robust approach. It works while filesystems are online, though briefly quiescing applications during the final sync is recommended.

# Create recursive snapshot of source filesystem
zfs snapshot -r rpool/data@migration

# Send the snapshot to new pool (basic version)
zfs send rpool/data@migration | zfs recv newpool/data

# For complex scenarios with multiple snapshots and descendant filesystems:
zfs send -R -I rpool/data@first_snap rpool/data@migration | zfs recv -Fduv newpool

For critical systems requiring minimal downtime, copy the bulk of the data with a full send while services are still running, then quiesce briefly for a small final incremental sync (shown in the service-continuity steps below). Note that zfs clone and zfs promote cannot be used for this, because a clone must reside in the same pool as its origin snapshot.

# Initial full send while the filesystem is still in use
zfs snapshot rpool/data@migration
zfs send rpool/data@migration | zfs recv -u newpool/data

# Verify what arrived; remove the original only after the final cutover
zfs list -t all

To maintain service continuity:

# Switch to a legacy mount if the filesystem will be managed through /etc/vfstab
zfs set mountpoint=legacy newpool/data

# Update /etc/vfstab if using legacy mounts (Solaris 10 sed has no -i option,
# so edit via a copy)
cp /etc/vfstab /etc/vfstab.bak
sed 's/rpool\/data/newpool\/data/g' /etc/vfstab.bak > /etc/vfstab

# For live services, consider:
svcadm disable application
# Take a final snapshot and perform the last incremental sync
zfs snapshot rpool/data@final
zfs send -i @migration rpool/data@final | zfs recv -F newpool/data
svcadm enable application

Essential post-migration checks:

# Compare file contents between the old and new mountpoints
diff -r /rpool/data /newpool/data

# Compare properties (drop the dataset-name column so the diff is meaningful)
zfs get -H -o property,value all rpool/data > old_props
zfs get -H -o property,value all newpool/data > new_props
diff old_props new_props
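
A file-level diff does not exercise ZFS checksums; scrubbing the destination pool does, by re-reading and verifying every block that was just received:

# Verify on-disk checksums across the new pool
zpool scrub newpool
zpool status newpool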

# Remove old snapshots when confirmed
zfs destroy -r rpool/data@migration

For recurring, scheduled syncs between pools:

# Call a wrapper script hourly from cron; Solaris 10 has no flock(1), so keep
# locking and snapshot bookkeeping inside the script (paths are examples)
0 * * * * /usr/local/bin/zfs_sync_data.sh

# Core of the script: snapshot, then send the changes since the last run
SNAP=hourly_`date +%Y%m%d%H%M`
PREV=`cat /var/tmp/zfs_sync.last`
zfs snapshot -r rpool/data@$SNAP
zfs send -I rpool/data@$PREV rpool/data@$SNAP | zfs recv -Fdu newpool && \
  echo $SNAP > /var/tmp/zfs_sync.last

Remember to test migrations in a non-production environment first, especially when dealing with complex ZFS configurations like deduplication or compression settings.
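
Checking those settings on the source dataset up front takes only a moment (property names vary by release, so filtering the full property list keeps this portable):

# Show compression- and dedup-related properties on the source dataset
zfs get all rpool/data | egrep 'compress|dedup'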