When dealing with ZFS data recovery scenarios, the zpool export command plays a crucial role in preparing storage devices for cross-system portability. Unlike traditional filesystems, ZFS maintains extensive metadata on the disks themselves, and a proper export leaves that metadata in a consistent, inactive state that another system can import cleanly.
Consider this disaster recovery case:
# Bad scenario - sudden system failure without export
$ zpool import
pool: tank
id: 129378129378
state: UNAVAIL
status: One or more devices are missing.
action: The pool cannot be imported due to damaged devices or data.
Versus a properly exported pool:
# Clean exported pool
$ zpool import
pool: tank
id: 129378129378
state: ONLINE
status: The pool was previously exported.
action: The pool can be imported using its name or numeric identifier.
ZFS maintains three critical metadata types:
- Pool configuration (stored in disk labels)
- File system hierarchy (rooted in the Meta Object Set, which the uberblock points to)
- User properties (stored with file system metadata)
The key difference between exported and non-exported pools lies in how the state information is recorded in the ZFS label.
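You can inspect that recorded state directly by dumping a vdev label with zdb. The device path below is only a placeholder for one of the pool's member disks; in the label output, a state value of 1 generally indicates a cleanly exported pool, while 0 means the pool was still active.
# Dump the on-disk ZFS label of one pool member (device path is an example)
$ zdb -l /dev/sda1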
For maximum recovery safety:
# After pool creation or significant changes
$ zpool create tank mirror /dev/sda /dev/sdb
$ zpool export tank
#!/bin/bash
# Regular maintenance script: export and re-import each pool to refresh its on-disk state
zpool list -H -o name | while read -r pool; do
    zpool export "$pool" && zpool import "$pool"
done
Avoid frequent exports in these cases:
- Pools with active iSCSI targets
- When using ZFS as a root filesystem
- During heavy write operations
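If you script exports, as in the maintenance loop above, one way to respect the root-filesystem caveat is to skip any pool with a bootfs property set. This is only a sketch and assumes a root-on-ZFS layout where the boot pool carries bootfs:
#!/bin/bash
# Export/import every pool except the one marked as the boot pool
zpool list -H -o name | while read -r pool; do
    # An unset bootfs property is shown as "-"
    bootfs=$(zpool get -H -o value bootfs "$pool")
    if [ "$bootfs" != "-" ]; then
        echo "Skipping root pool: $pool"
        continue
    fi
    zpool export "$pool" && zpool import "$pool"
done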
For partially damaged pools, try:
# Force import of a pool that appears in use, allowing missing log devices
$ zpool import -f -m tank
# Attempt recovery by rewinding to an earlier transaction group
$ zpool import -F tank
# Import read-only to avoid any further writes
$ zpool import -o readonly=on tank
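If a read-only import succeeds, a reasonable next step is to copy the data to a healthy pool before attempting any repair. Snapshots cannot be created on a read-only pool, so this relies on a snapshot that already exists; the snapshot name and the destination pool rescuepool are placeholders:
# Replicate an existing snapshot tree from the damaged pool to a healthy one
$ zfs send -R tank@lastsnap | zfs receive -F rescuepool/tank-copy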
Complement exports with regular metadata backups:
# Save pool configuration
$ zpool get all tank > /backup/tank.config
$ zfs list -r tank > /backup/tank.hierarchy
# For critical pools, save the uberblocks and a per-device label dump (device path is an example)
$ zdb -u tank > /backup/tank.uberblock
$ zdb -l /dev/sda1 > /backup/tank.sda1.label
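These captures only help if they are current, so it is worth automating them. A minimal cron sketch, with the paths and schedule as placeholders:
# /etc/cron.d/zfs-metadata-backup (example file)
# Refresh the configuration and hierarchy dumps nightly
0 2 * * * root zpool get all tank > /backup/tank.config 2>&1
5 2 * * * root zfs list -r tank > /backup/tank.hierarchy 2>&1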
When dealing with ZFS storage configurations, the question of pool portability becomes critical for disaster recovery scenarios. The fundamental mechanics work like this:
# Basic pool creation example
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/dataset
ZFS automatically writes pool configuration to each device in the vdev labels (four copies per device). This metadata includes:
- Pool name and GUID
- Vdev topology
- Dataset hierarchy
- Property settings
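Each of these components can be checked from a running system, which makes for a quick sanity pass before an export. The pool name tank follows the earlier examples:
# Pool GUID
zpool get guid tank
# Vdev topology
zpool status tank
# Dataset hierarchy
zfs list -r tank
# Property settings
zfs get all tank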
While ZFS can often import non-exported pools, explicit exporting creates cleaner metadata states:
# Proper export/import sequence
zpool export tank
zpool import -d /dev/disk/by-id tank
Key differences between exported and non-exported pools:
Scenario | Metadata State | Recovery Complexity
---|---|---
Properly exported | Clean shutdown state | Simple import
Unexported | Last transaction state | May require the -F/-X recovery options
In a burned-host scenario (original system lost, disks moved to new hardware), recovery depends entirely on the on-disk metadata, and an unexported or damaged pool may need the more aggressive import options:
# -F rewinds to an earlier transaction group, -X allows extreme rewind, -N imports without mounting
zpool import -F -X -N -d /dev/disk/by-id tank
Critical metadata components that must survive:
- Pool configuration (stored in vdev labels)
- ZFS uberblocks (multiple copies per device)
- MOS (Meta Object Set) references
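The configuration and uberblocks can be examined (and captured) with zdb; the output is diagnostic and its exact format varies between OpenZFS versions:
# Display the pool configuration
zdb -C tank
# Display the current uberblock
zdb -u tank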
For maximum protection, combine these approaches:
# Save pool configuration
zpool status tank > /secure/location/tank.status
zpool get all tank > /secure/location/tank.properties
Additional safety measures:
- Regular 'zdb -C' output captures
- Storing device identification by-id rather than by-path
- Maintaining checksum verification logs
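The zdb capture and the checksum-verification log can be folded into the same backup routine; a short sketch, with the storage location as a placeholder:
# Capture the cached configuration alongside the other metadata dumps
zdb -C tank > /secure/location/tank.zdb_config
# Keep a dated record of error and checksum counters (ideally taken after each scrub)
zpool status -v tank > /secure/location/tank.status_$(date +%F).log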