Why Does a New XFS Filesystem Show 78GB Used on a 12TB RAID 6 Array?



When setting up a new XFS filesystem on a 12TB RAID 6 array, you might notice that df -h reports approximately 78GB as "used" even before storing any files. This behavior can be confusing, especially when the filesystem log only accounts for about 2GB.

XFS, like other modern filesystems, allocates space for metadata structures during creation. Here's what consumes the initial space:

  • Allocation Groups (AGs): XFS divides storage into allocation groups (11 in your case, as shown by agcount=11 in xfs_info)
  • Inode Space: The isize=512 parameter means each on-disk inode occupies 512 bytes
  • Journal Log: Your 2GB internal log (blocks=521728 at 4K blocksize)
  • Free Space B+trees: Structures tracking available space
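
You can confirm these geometry values (agcount, agsize, isize, log size) on your own filesystem with xfs_info against the mount point; the path below is the one from the question:

# Inspect filesystem geometry on the mounted filesystem
xfs_info /export/libvirt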

Let's break down the math for your specific configuration:

# Calculate AG size in bytes
ag_size = 268435455 blocks * 4096 bytes/block ≈ 1.1TB (just under 1TiB) per AG

# Total metadata overhead (simplified estimate):
11 AGs * (per-AG metadata and reserved space) + ~2GB journal ≈ 78GB total
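
If you want to sanity-check those figures from a shell, plain integer arithmetic is enough; the agsize and log block counts are the ones from the xfs_info output above:

# Bytes per allocation group: agsize blocks * 4096-byte block size
echo $(( 268435455 * 4096 ))     # 1099511623680 bytes, just under 1TiB

# Internal log size: log blocks * block size
echo $(( 521728 * 4096 ))        # 2136997888 bytes, roughly 2GB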

You can examine detailed space usage with:

# Show a free-space summary on the mounted filesystem
xfs_spaceman -c 'freesp -s' /export/libvirt

# Alternative method using xfs_db (read-only; safest when the filesystem is unmounted)
xfs_db -r /dev/sda1
xfs_db> sb 0
xfs_db> print

If you want to reduce metadata overhead the next time you create a filesystem:

# Example mkfs.xfs with reduced AG count
mkfs.xfs -d agcount=4 /dev/sda1

# For very large filesystems, consider:
mkfs.xfs -d su=64k,sw=8 -l size=512m /dev/sda1

This behavior is normal for XFS, but investigate if:

  • Used space grows unexpectedly after creation
  • You see different behavior with other filesystem types
  • The percentage used becomes significant on smaller drives
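
One low-effort way to catch the first case is to record the usage right after mkfs and compare against it later (the output file name here is just an example):

# Record baseline usage immediately after mkfs, then diff against it later
df -k /export/libvirt | tee /root/xfs-usage-baseline.txt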

Your RAID 6 configuration (dual parity) doesn't directly affect filesystem overhead, but you should make sure the filesystem is aligned to the array's stripe geometry:

# Check RAID stripe settings
mdadm --detail /dev/mdX

# Example mkfs.xfs for RAID 6 (alignment values depend on your array):
mkfs.xfs -d su=256k -d sw=10 /dev/sda1
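
Those su/sw values are only an example: su should match the array's chunk size and sw the number of data disks, which for RAID 6 is the total disk count minus two. The figures below assume a hypothetical 12-disk array with a 256K chunk:

# Read the geometry from the array (device name is a placeholder)
mdadm --detail /dev/mdX | grep -E 'Raid Devices|Chunk Size'
#   su = chunk size            -> 256k
#   sw = 12 disks - 2 parity   -> 10

Note that on md devices mkfs.xfs usually detects the stripe geometry automatically, so explicit su/sw is mainly needed for hardware RAID or layered setups.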

When creating a new XFS filesystem on a 12TB RAID 6 array, you might notice approximately 78GB marked as "used" even though no files have been written yet. This is by design and stems from how XFS lays out its on-disk structures.

The space consumption comes from three primary sources:

1. Internal log (journal): ~2GB (visible in xfs_info output)
2. AG free space management: ~75GB
3. Superblock and metadata: ~1GB

XFS divides the filesystem into Allocation Groups (AGs), each with its own management structures. Your configuration shows:

xfs_info /export/libvirt/
meta-data=/dev/sda1 isize=512 agcount=11, agsize=268435455 blks

Rough consistency check:

11 AGs × (~6.8GB of per-AG management structures and reserved space) ≈ 75GB
(each AG spans 268,435,455 blocks × 4KiB ≈ 1TiB)

Use xfs_db to inspect detailed allocation:

# xfs_db -r -c 'sb 0' -c 'print' -c 'agf 0' -c 'print' /dev/sda1
# xfs_spaceman -c 'freesp -s' /export/libvirt

For large arrays, consider these mkfs.xfs options:

# mkfs.xfs -m crc=1,finobt=1 -d agcount=32 -l size=2g -i maxpct=5 /dev/sda1

The preallocated space enables:

  • Faster crash recovery
  • Better parallel I/O across AGs
  • Optimized metadata operations

For deeper analysis, xfs_estimate predicts how much space a given directory tree would need on XFS, and xfs_metadump captures just the metadata into a dump you can restore and inspect offline:

# xfs_estimate -v /path/to/data
# xfs_metadump -w /dev/sda1 /tmp/sda1.mdump
# xfs_mdrestore /tmp/sda1.mdump /tmp/metadump
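
Once the metadump has been restored to an image file, you can poke at it with xfs_db without touching the live array (the image path matches the restore target above):

# xfs_db -f -r -c 'sb 0' -c 'print' /tmp/metadump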