When working with ZFS mirrored pools for offsite backup rotation, many administrators expect detaching and reattaching disks to trigger incremental updates. However, the default behavior often forces a full resilver, which becomes particularly inefficient with large datasets.
Here's what typically happens when using `zpool detach` and `zpool attach`:

```
# Initial setup: four-way mirror, one disk rotates offsite
zpool create backup mirror sda sdb sdc sdd
# Detach a disk
zpool detach backup sdd
# Later reattachment triggers a full resilver
zpool attach backup sdc sdd
```
Using the `zpool offline` and `zpool online` commands instead can sometimes preserve incremental synchronization:

```
# Take disk offline
zpool offline backup sdd
# Bring back online (may still trigger a full resilver)
zpool online backup sdd
```
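To check whether the online actually produced an incremental resilver, compare the amount of data reported as resilvered against what was written while the disk was offline. A minimal check, assuming the pool name from above:

```
# The scan line reports how much data the resilver copied; a small
# value relative to pool size indicates an incremental resilver
zpool status backup | grep -A 2 'scan:'
```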
For truly efficient offsite backup rotation, consider these approaches:
**Method 1: ZFS Send/Receive**

```
# Create a separate pool on the backup disk
zpool create backup_disk sdd
# Initial full send
zfs send backup@initial | zfs receive backup_disk/backup
# Subsequent incremental updates
zfs send -i backup@initial backup@current | zfs receive backup_disk/backup
```
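One step the rotation itself needs, which the listing above glosses over: export the single-disk pool before pulling the disk, and import it again when the disk returns. A minimal sketch:

```
# Cleanly export the backup pool before physically removing the disk
zpool export backup_disk
# ...rotate the disk offsite, then on its return:
zpool import backup_disk
```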
**Method 2: ZFS Bookmark-Based Incrementals**

```
# Create a bookmark for the last synced snapshot
zfs bookmark backup@last_sync backup#last_sync
# Send an incremental using the bookmark
zfs send -i backup#last_sync backup@current | zfs receive backup_disk/backup
```
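After each successful sync you would typically advance the bookmark so the next incremental has a fresh starting point. A sketch, assuming the snapshot names used above:

```
# Retire the old bookmark and re-create it at the just-sent snapshot
zfs destroy backup#last_sync
zfs bookmark backup@current backup#last_sync
# The old source snapshot can now be destroyed; the bookmark alone
# is enough to anchor future incremental sends
zfs destroy backup@last_sync
```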
ZFS often performs full resilvers because:

- `zpool detach` removes the disk from the pool's configuration entirely, so the pool discards the dirty-time log (DTL) that records which transactions the disk missed
- Device identification might change (by-id vs by-path)
- Detach also overwrites the disk's ZFS labels, so on reattachment it looks like a brand-new device (you can check this with `zdb`, as shown below)
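A quick way to see this, assuming sdd is the previously detached disk:

```
# Print the ZFS vdev labels on the detached disk; after zpool detach
# they no longer identify the disk as a member of the pool
zdb -l /dev/sdd
```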
These pool properties might help in some scenarios (`autoreplace` automatically replaces a device found in the same physical slot; `autoexpand` grows the pool when larger disks are attached), though neither changes resilver behavior directly:

```
zpool set autoreplace=on backup
zpool set autoexpand=on backup
```
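You can confirm what these are currently set to with `zpool get`:

```
# Show the current values of both properties
zpool get autoreplace,autoexpand backup
```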
Always verify the actual resilver behavior:

```
zpool status -v backup
zfs get all backup
```
For reliable offsite rotation:
- Use ZFS send/receive for predictable behavior
- Maintain consistent device identification
- Document your rotation schedule and commands
- Consider using ZFS replication tools like sanoid/syncoid (see the sketch below)
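For reference, a typical syncoid invocation wraps the send/receive cycle shown above in a single command. A sketch, reusing the dataset names from the earlier examples:

```
# Replicate the backup dataset to the single-disk pool, letting
# syncoid pick the correct incremental source automatically
syncoid backup backup_disk/backup
```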
When working with ZFS mirrored pools, I've observed that the standard `zpool detach` and `zpool attach` commands trigger full resilvering operations, even when reconnecting a previously healthy disk. A typical sequence looks like this:

```
# Typical commands triggering a full resilver
# (note: detach/attach take a leaf device, not a vdev name like mirror-1)
zpool detach tank /dev/disk-old
# attach pairs the new disk with a device still in the pool
zpool attach tank /dev/disk-existing /dev/disk-new
```
Testing reveals that using `zpool offline`/`zpool online` instead of detach/attach can sometimes preserve the existing data:

```
# Try this sequence instead (again on the leaf device, not the vdev):
zpool offline tank /dev/disk-old
# (physically remove disk)
# Later when reconnecting:
zpool online tank /dev/disk-old
```
However, this method has limitations:
- Works best when the disk hasn't been modified elsewhere
- May still trigger resilvering if the disk was offline too long
- Depends on ZFS version and configuration
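A low-risk way to test how your ZFS version behaves here is to rehearse the sequence on a throwaway pool backed by sparse files. A sketch; the paths and sizes are arbitrary:

```
# Build a disposable mirror on two sparse files
truncate -s 1G /tmp/zfs-d1 /tmp/zfs-d2
zpool create testpool mirror /tmp/zfs-d1 /tmp/zfs-d2

# Offline one side, write some data, then bring it back
zpool offline testpool /tmp/zfs-d2
dd if=/dev/urandom of=/testpool/junk bs=1M count=100
zpool online testpool /tmp/zfs-d2

# The scan line shows whether only the new ~100M was resilvered
zpool status testpool
```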
For reliable incremental updates of offsite backup disks, the most robust approach is:

```
# Create a separate single-disk pool
zpool create backup /dev/backup-disk
# Initial full send
zfs send tank@initial | zfs receive backup/replica
# Subsequent incremental updates
zfs send -i tank@initial tank@new-snapshot | zfs receive backup/replica
```
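Note that the snapshots referenced above have to exist before they can be sent. A minimal sketch of that part of the cycle, assuming the names used in the listing:

```
# Take the initial snapshot before the full send
zfs snapshot -r tank@initial
# ...and a new snapshot before each subsequent incremental
zfs snapshot -r tank@new-snapshot
```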
For more efficient transfers without keeping all intermediate snapshots:

```
# Create a bookmark
zfs bookmark tank@snap1 tank#book1
# Later incremental send using the bookmark
zfs send -i tank#book1 tank@snap2 | zfs receive backup/replica
```
When dealing with rotating backup disks, consider these optimizations (a combined example follows this list):

- Use `zfs send -L` for large blocks
- Enable compression (`zfs set compression=lz4`)
- Consider `mbuffer` for network transfers
- Schedule transfers during low-usage periods
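Putting several of these together, an incremental send buffered through mbuffer to a remote host might look like this. A sketch; the host name and buffer sizes are placeholders:

```
# -L preserves large blocks; mbuffer smooths out bursty throughput
zfs send -L -i tank#book1 tank@snap2 \
  | mbuffer -s 128k -m 1G \
  | ssh backup-host 'mbuffer -s 128k -m 1G | zfs receive backup/replica'
```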