ZFS Recovery: Troubleshooting “No Pools Available” When Importing a Damaged Mirrored Pool


When dealing with ZFS pool recovery scenarios, one particularly frustrating situation occurs when zpool import stubbornly reports "no pools available" despite physical evidence that the pool exists. Let's examine this specific case where a FreeBSD server's mirrored pool became inaccessible after losing one disk.

The zdb -lu output reveals crucial information about why standard import commands fail. The pool metadata shows:

    state: 0
    txg: 0
    pool_guid: 16827460747202824739
    vdev_tree:
        type: 'mirror'
        children[1]:
            DTL: 3543
            create_txg: 4
            resilvering: 1

Key indicators of our problem:

  • The label state is 0 (POOL_STATE_ACTIVE): the pool was never cleanly exported, so import will insist on -f
  • The uberblock transaction group (txg) is 0 - potentially stale or corrupted metadata
  • One mirror member is missing (device GUID 3324029433529063540)
  • Resilvering was in progress when the failure occurred
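These fields can be pulled out of a saved label dump with a short filter. The snippet below parses a captured excerpt of the `zdb -lu` output; the file path and the idea of working from a saved capture are ours, not part of any ZFS tool:

```shell
# Parse the key fields out of a captured `zdb -lu` dump.
# /tmp/label.txt stands in for the real output (hypothetical capture).
cat > /tmp/label.txt <<'EOF'
    state: 0
    txg: 0
    pool_guid: 16827460747202824739
EOF

state=$(awk -F': ' '/^ *state:/ {print $2; exit}' /tmp/label.txt)
txg=$(awk -F': ' '/^ *txg:/ {print $2; exit}' /tmp/label.txt)
pool_guid=$(awk -F': ' '/pool_guid:/ {print $2; exit}' /tmp/label.txt)
echo "state=${state} txg=${txg} guid=${pool_guid}"
```

Keeping a dump like this around is useful anyway: labels can degrade further during recovery attempts.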

When standard import methods fail, we need to escalate our approach:

# Force the import, rewinding if needed; -m tolerates a missing log device
zpool import -m -f -F -d /dev/da0p3 ztmp

# Extreme rewind: -X lets -F roll back much further (last resort,
# may discard recent writes)
zpool import -fFX -d /dev/da0p3 ztmp

# Scan the whole disk instead of naming the partition
zpool import -d /dev/da0
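Rather than typing these one at a time, the escalation can be scripted so that each riskier flag set is only reached if the previous one fails. This is a dry-run sketch (the `run` wrapper is ours and just prints the command; swap its body for real execution on the recovery host):

```shell
# Escalation ladder: least to most aggressive import attempts.
# `run` echoes the command here so the sketch is side-effect free;
# on a real system replace its body with: zpool import "$@"
run() { echo "+ zpool import $*"; }

for flags in "-f" "-m -f -F" "-fFX"; do
    if run $flags -d /dev/da0p3 ztmp; then
        echo "attempt with '$flags' returned success"
        break
    fi
done
```

In the dry run the first attempt "succeeds" immediately; with real commands the loop stops at the first import that actually works.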

If the pool still won't import, note that zpool import -c expects a binary cachefile, not a hand-written text file; ZFS has no text-format pool configuration. What we can do instead is restrict the device search to a directory containing only the surviving member:

# Collect (a link to) the surviving device in its own search directory
mkdir -p /tmp/recover
ln -s /dev/da0p3 /tmp/recover/

# Import, searching only that directory
zpool import -d /tmp/recover -f ztmp

The original configuration uses /dev/gptid/ paths, which may not exist in the recovery environment. Since FreeBSD's devfs does not allow creating entries under /dev, build the expected name in a scratch directory instead:

# Recreate the gptid name outside devfs and point the search there
mkdir -p /tmp/gptid
ln -s /dev/da0p3 /tmp/gptid/d7b6a47e-8b0e-11e1-b750-f46d04227f12
zpool import -d /tmp/gptid ztmp

# Or skip the labels entirely and use the physical device path
zpool import -d /dev/da0p3 -o readonly=on ztmp

If the pool remains unimportable, we can still attempt raw data recovery. Note that mount -t zfs takes a dataset name from an already-imported pool, not a raw device node, so mounting /dev/da0p3 directly will not work; block-level extraction with zdb is the remaining option:

# zdb -e operates on an unimported pool; -R reads raw blocks given a
# vdev:offset:size triple in hex (the values below are illustrative)
zdb -e -R ztmp 0:fd:1 > /recovery/important_data.bin
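The `-R` argument is a vdev:offset:size triple in hexadecimal. A tiny helper (our own convenience function, not part of zdb) converts decimal byte values into that form:

```shell
# Build the hex vdev:offset:size triple that `zdb -R` expects.
# Arguments: decimal offset, decimal length; vdev index fixed at 0 here.
to_zdb_triple() {
    printf '0:%x:%x\n' "$1" "$2"
}

to_zdb_triple 4194304 131072   # 4 MiB offset, 128 KiB length -> 0:400000:20000
```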

Remember that damaged pools should always be imported read-only first (-o readonly=on) to prevent further corruption.


When dealing with ZFS mirror configurations, losing one disk shouldn't prevent accessing your data - that's the whole point of mirroring. However, as shown in the diagnostic output, the system fails to recognize the remaining pool member:

# zpool import
# zpool import -D
# zpool status
no pools available

The partition layout shows a properly configured GPT partition with ZFS data in da0p3:

3. Name: da0p3
   Mediasize: 1905891737600 (1.7T)
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   type: freebsd-zfs

The zdb output reveals crucial information about the pool structure:

name: 'ztmp'
pool_guid: 16827460747202824739
top_guid: 15350190479074972289
type: 'mirror'
children[0]:
    guid: 3060075816835778669
children[1]:
    guid: 3324029433529063540

When standard import fails, we need to try more advanced methods:

# Try importing by GUID
zpool import -f -d /dev/da0p3 16827460747202824739

# Alternative approach using device path
zpool import -f -d /dev ztmp

# Read the pool configuration from the system cachefile
# (older FreeBSD releases keep it in /boot/zfs/zpool.cache)
zpool import -f -c /etc/zfs/zpool.cache ztmp

Since we're dealing with a degraded mirror, we need to force ZFS to accept the remaining disk. A text file listing the device path will not help here; -d expects a directory of device nodes, so stage the survivor in one:

# Stage (a link to) the surviving device in a private search directory
mkdir -p /tmp/vdevs
ln -s /dev/da0p3 /tmp/vdevs/

# Attempt import restricted to that directory
zpool import -d /tmp/vdevs -f ztmp

If all else fails, keep in mind that the pool configuration lives in the vdev labels themselves; there is no supported text format to reconstruct it by hand. The last resort is an extreme-rewind, strictly read-only import:

# Last resort: rewind as far as possible, read-only
zpool import -fFX -o readonly=on -d /dev/da0p3 ztmp

After successful import, immediately check pool status:

zpool status ztmp
zfs list ztmp
zfs mount ztmp
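A scripted health check can flag a degraded state right after import. For illustration this parses a captured `zpool status` excerpt (the sample text and file path are ours, standing in for live output):

```shell
# Decide whether the freshly imported pool needs attention.
# /tmp/status.txt mimics `zpool status ztmp` on a one-disk mirror.
cat > /tmp/status.txt <<'EOF'
  pool: ztmp
 state: DEGRADED
status: One or more devices could not be opened.
EOF

state=$(awk '/^ *state:/ {print $2; exit}' /tmp/status.txt)
if [ "$state" != "ONLINE" ]; then
    echo "pool is ${state}: copy data off before attempting repairs"
fi
```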

Remember to scrub the pool and consider creating backups immediately once you regain access to your data.