Discrepancy Between `zfs list` and `zpool list` Outputs: Understanding RAIDZ2 Storage Reporting in ZFS



When working with your 12-disk RAIDZ2 pool (10 data + 2 parity), the raw numbers work like this:

Raw capacity: 12 disks × 6TB = 72TB
Usable capacity: 10 disks × 6TB = 60TB

The key difference lies in what each command measures:

  • zpool list shows the pool's total physical capacity, parity disks included: 72 TB raw, displayed as roughly 65T because ZFS reports sizes in binary TiB
  • zfs list shows the logical space available to datasets after parity is excluded: 60 TB in theory, reported as about 48T once unit conversion, RAIDZ allocation overhead, and the internal reservation are applied

Let's examine where those 65T (zpool) and 48T (zfs) values come from:

# Physical capacity (zpool list SIZE)
Raw: 12 disks × 6TB = 72TB (decimal, as printed on the drive labels)
In the binary units ZFS displays: 72 × 10^12 ÷ 2^40 ≈ 65 TiB → shown as 65T

# Logical capacity (zfs list AVAIL)
Usable raw: 10 disks × 6TB = 60TB ≈ 54.6 TiB
RAIDZ allocation padding and the built-in slop reservation bring the reported figure down to roughly 48T
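
If you want to sanity-check the unit conversion yourself, a quick sketch with plain awk (no pool access needed):

# Decimal TB on the drive labels vs. the binary TiB that ZFS displays
awk 'BEGIN { printf "raw:    %.1f TiB\n", 72e12 / 2^40 }'   # ~65.5 -> zpool SIZE 65T
awk 'BEGIN { printf "usable: %.1f TiB\n", 60e12 / 2^40 }'   # ~54.6 before padding/slop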

To get more detailed information about your pool's storage:

# Show detailed space breakdown
zpool list -o name,size,alloc,free,capacity,dedupratio,health

# Alternative view with compression ratio
zfs list -o name,used,available,compressratio,mountpoint
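
zpool list -v additionally breaks SIZE/ALLOC/FREE down per vdev and per disk, which makes the RAIDZ2 layout easier to see:

# Per-vdev and per-disk breakdown of the raw space
zpool list -v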

As a smaller comparison, here's how the numbers typically look for a 6×6TB RAIDZ2 pool:

# Theoretical maximum
Raw: 36TB
Usable: 24TB (4 data disks × 6TB)

# Actual reports
zpool list SIZE: ~33T
zfs list AVAIL: ~19T

The gap between theory and the reported values comes from:

  • Binary (TiB) versus decimal (TB) units
  • ZFS metadata overhead (typically well under 1% of the pool)
  • The built-in slop reservation (1/32 of the pool, roughly 3.1%)
  • RAIDZ allocation padding, which depends on block size and recordsize settings
  • Compression savings (if enabled)
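
The same unit-conversion arithmetic explains most of that 6-disk gap; the remainder (down to the ~19T shown above) comes from the padding and slop factors just listed:

# 6 × 6TB RAIDZ2, converted to the binary units ZFS displays
awk 'BEGIN { printf "raw:    %.1f TiB\n", 36e12 / 2^40 }'   # ~32.7 -> zpool SIZE ~33T
awk 'BEGIN { printf "usable: %.1f TiB\n", 24e12 / 2^40 }'   # ~21.8 before padding/slop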

The discrepancy is normal, but watch for these warning signs:

# Check for unhealthy space fragmentation
zpool list -o name,size,frag,capacity

# If fragmentation > 70% or capacity > 90%
# Consider adding more storage
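
A minimal scripted version of that check could look like this (replace intp1 with your pool name; the thresholds are the ones suggested above):

# Warn when capacity or fragmentation crosses the suggested thresholds
cap=$(zpool list -H -o capacity intp1 | tr -d '%')
frag=$(zpool list -H -o fragmentation intp1 | tr -d '%')
[ "$cap" -gt 90 ] && echo "intp1: ${cap}% full - consider adding storage"
[ "$frag" -gt 70 ] && echo "intp1: ${frag}% fragmented - expect slower writes"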

When working with ZFS storage systems, many administrators notice differences between the storage capacity reported by zfs list and zpool list. This becomes particularly noticeable in RAIDZ configurations where the raw disk capacity doesn't directly translate to usable space.

The key difference lies in what each command measures:

# zpool list shows physical storage characteristics
NAME     SIZE  ALLOC   FREE
intp1    65T   1.02M   65.0T

# zfs list shows logical filesystem characteristics  
NAME     USED  AVAIL
intp1    631K  48.0T
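
To reproduce just those one-line summaries on a pool that carries many datasets, you can limit zfs list to the pool's root dataset with -d 0:

# Pool-level summaries only (no child datasets)
zpool list intp1
zfs list -d 0 intp1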

zpool list reports the raw storage capacity of the entire pool before accounting for:

  • RAIDZ parity overhead (2 disks worth in your 10+2 configuration)
  • ZFS metadata and internal structures
  • Reserved space (the built-in slop reservation, 1/32 of the pool or roughly 3.1%)

For your specific 12-disk RAIDZ2 (10+2) configuration with 6TB disks:

Raw capacity: 12 * 6TB = 72TB
Usable capacity: 10 * 6TB = 60TB
zpool list reports: ~65T (72 TB expressed in binary TiB; parity is included, other overheads are not)
zfs list reports: ~48T (data disks only, after unit conversion, allocation overhead, and reservations)
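
If you want the exact byte counts behind those rounded figures, both commands accept -p (parsable values) and -H (no headers); intp1 is the pool from the example output above:

# Exact byte values instead of rounded, human-readable units
zpool list -Hp -o size,free intp1
zfs list -Hp -o available intp1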

Several ZFS features contribute to this discrepancy:

# Check your current reservation settings
zfs get reservation,refreservation [poolname]

# View space accounting details
zfs get used,available,referenced [poolname]
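
The built-in space report is also handy here; -o space is a standard shortcut that expands to avail, used, usedbysnapshots, usedbydataset, usedbyrefreservation and usedbychildren:

# Built-in breakdown of where the used space goes
zfs list -o space [poolname]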

The 48TB available in zfs list accounts for:

  • The built-in slop reservation (1/32 of the pool, roughly 3.1%) that ZFS keeps free for internal operations
  • RAIDZ allocation padding, which depends on recordsize and vdev width
  • The conversion from decimal TB on the drive labels to the binary TiB that ZFS displays
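
That reservation is not a property you set; it is ZFS's internal "slop" space, 1/32 of the pool by default. On Linux it can be inspected (or tuned) through a module parameter; this path is a sketch that applies to OpenZFS on Linux only:

# spa_slop_shift = 5 means 1/2^5 = 1/32 (~3.1%) of the pool is held back
cat /sys/module/zfs/parameters/spa_slop_shift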

When monitoring storage capacity:

# For physical capacity monitoring:
zpool list -o name,size,alloc,free,cap,health

# For filesystem-available space:
zfs list -o name,used,avail,refer,mountpoint
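
For scripts and monitoring agents, zfs get can emit a single machine-readable number (-H drops headers, -p gives exact bytes, -o value keeps only the value column):

# Exact bytes available to datasets, suitable for alerting scripts
zfs get -Hp -o value available [poolname]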

Remember that zfs list shows what's actually available for files, while zpool list shows the raw storage characteristics before ZFS overheads.

For deeper analysis of your space usage:

# Show detailed space breakdown
zfs get all [poolname] | grep -E 'reservation|refreservation|available|used'

# Check compression ratio impact
zfs get compressratio [poolname]

# Examine actual physical vs logical usage
zpool get all [poolname] | grep -E 'size|alloc|free'

These commands help identify exactly where your storage capacity is being allocated.
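
Instead of grepping the full property dump, both get commands also take an explicit property list, which limits the output to exactly the fields of interest:

# Query only the properties relevant to the size discrepancy
zpool get size,allocated,free,capacity,fragmentation [poolname]
zfs get used,available,referenced,reservation,refreservation,compressratio [poolname]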