Resolving “No Space Left on Device” Errors on BTRFS Despite Available Storage


Many Linux administrators encounter the frustrating "No space left on device" error when working with BTRFS filesystems, even when df shows available space. This typically occurs due to BTRFS's unique storage allocation mechanism.

# Check filesystem space with traditional tools
df -Th
# Output shows available space but operations fail
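
A quick way to confirm the symptom is to try creating a file directly; when metadata is exhausted, this typically fails even though df still reports free space:

# Creating even an empty file fails once metadata can no longer grow
touch /mountpoint/testfile
# Typical error: touch: cannot touch '/mountpoint/testfile': No space left on device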

BTRFS manages storage through chunks allocated for different purposes:

  • Data chunks
  • Metadata chunks
  • System chunks

The key insight comes from examining BTRFS-specific information:

# Check BTRFS filesystem details
btrfs filesystem show
btrfs filesystem df /mountpoint
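
On reasonably recent btrfs-progs, a single command combines both views and also reports how much space remains unallocated on each device:

# Combined view of data/metadata/system allocation and unallocated space
btrfs filesystem usage /mountpoint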

In our case, the metadata allocation was nearly exhausted (1.74GiB used of 2.00GiB allocated) while the data chunks still had plenty of room. Once metadata chunks fill up and there is no unallocated space left to grow them, BTRFS cannot create new files, no matter how much free data space df reports.

Here are three approaches to resolve this issue:

# 1. Balance with low usage filters to reclaim nearly empty chunks
#    (raise the thresholds gradually if nothing is reclaimed)
sudo btrfs filesystem balance start -dusage=5 -musage=5 /mountpoint

# 2. If the filesystem is smaller than its underlying device, grow it to the
#    maximum size so BTRFS has unallocated space for new metadata chunks
sudo btrfs filesystem resize max /mountpoint

# 3. Rebalance all metadata chunks (slower; compacts metadata and returns
#    underused chunks to the unallocated pool)
sudo btrfs filesystem balance start -m /mountpoint
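
A balance can run for a long time on a large filesystem. Progress can be checked, and the operation paused or cancelled, with the dedicated balance subcommands:

# Monitor, pause, or cancel a running balance
sudo btrfs balance status /mountpoint
sudo btrfs balance pause /mountpoint
sudo btrfs balance cancel /mountpoint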

To avoid recurrence:

  • Monitor metadata usage regularly
  • Set up alerts when metadata reaches about 75% capacity (a monitoring sketch follows the mkfs example below)
  • Choose the metadata and data profiles deliberately at filesystem creation time; metadata chunks grow on demand, but only while unallocated space remains

# Create a BTRFS filesystem with explicit metadata (-m) and data (-d) profiles
# (RAID10 data requires at least four devices)
mkfs.btrfs -m raid1 -d raid10 /dev/sdW /dev/sdX /dev/sdY /dev/sdZ
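
For the monitoring and alerting points above, here is a minimal sketch of a metadata-usage check. It assumes a btrfs-progs version that accepts -b/--raw for byte output; the mount point and the 75% threshold are placeholders to adjust:

#!/bin/sh
# Sketch: warn when BTRFS metadata chunks are more than THRESHOLD percent full
# (assumes "btrfs filesystem df -b" is available for byte output)
MOUNTPOINT=/mountpoint   # adjust to your filesystem
THRESHOLD=75             # alert level in percent

# "btrfs filesystem df -b" prints lines such as:
#   Metadata, DUP: total=2147483648, used=1869611008
btrfs filesystem df -b "$MOUNTPOINT" | awk -F'[=,]' -v limit="$THRESHOLD" -v mp="$MOUNTPOINT" '
/^Metadata/ {
    pct = 100 * $5 / $3                     # used bytes / total bytes
    if (pct >= limit)
        printf "WARNING: metadata %.0f%% full on %s\n", pct, mp
}'

Hooking a check like this into cron or a monitoring agent covers the first two bullets without any extra tooling.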

When troubleshooting, always check kernel messages:

dmesg | grep -i btrfs
journalctl -b -k --grep="btrfs"

Look for messages about failed allocations or space pressure in the metadata area.


A second scenario produces the same "No space left on device" error while df -h still shows ample free space: the devices themselves are fully allocated to chunks.

The key insight comes from the btrfs fi show output, where every device reports close to its full capacity as used (about 10GiB of 10GiB) even though the filesystem holds far less data. In this output, "used" means allocated to chunks, and BTRFS needs unallocated space on the devices of the RAID array before it can create the new chunks that writes and other operations require.

# Here's what the problematic output looks like:
btrfs fi show
Label: none  uuid: 6546c241-e57e-4a3f-bf43-fa933a3b29f9
        Total devices 4 FS bytes used 11.86GiB
        devid    1 size 10.00GiB used 10.00GiB path /dev/xvdh
        devid    2 size 10.00GiB used 9.98GiB path /dev/xvdi
        devid    3 size 10.00GiB used 9.98GiB path /dev/xvdj
        devid    4 size 10.00GiB used 9.98GiB path /dev/xvdk

To resolve this, run a balance operation, which rewrites chunks more compactly and returns the reclaimed space to each device's unallocated pool:

# First attempt (may fail if completely out of space)
sudo btrfs fi balance start -dusage=5 /mnt/durable

# If that fails, identify the largest files so you can free some space manually:
find /mnt/durable -type f -size +100M -exec ls -lh {} \; | sort -k5 -rh | head -n 10

# After deleting some files, retry the balance
sudo btrfs fi balance start -dusage=5 /mnt/durable
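
Once the balance finishes, confirm that each device actually regained unallocated space before considering the problem solved:

# Re-check per-device allocation after the balance
sudo btrfs fi show /mnt/durable
sudo btrfs fi usage /mnt/durable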

For production systems, consider these proactive steps:

  • Set up monitoring for device-level usage, not just filesystem usage
  • Configure regular balance operations during maintenance windows
  • Leave at least 10% free space on each device in the array

# Example /etc/cron.d (or /etc/crontab) entry, note the user field:
# weekly balance of data chunks that are at most 80% full
0 3 * * 0 root /sbin/btrfs fi balance start -dusage=80 /mnt/durable

In some cases, metadata allocation can cause similar symptoms. Check metadata usage with:

btrfs fi df /mnt/durable
# If metadata is full, try:
sudo btrfs fi balance start -musage=5 /mnt/durable

The RAID10 profile used here needs unallocated space on at least four devices to create a new chunk, so with a four-device array a single fully allocated device is enough to trigger the error. For future deployments, consider:

  • Using larger devices to allow for better balance
  • Implementing monitoring for individual device usage
  • Setting up alerts when any single device reaches 85% capacity
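
For the alerting point, here is a rough per-device check in the same spirit as the metadata sketch above. It assumes a btrfs-progs version that accepts --raw for "filesystem show"; the mount point and the 85% threshold are placeholders to adjust:

#!/bin/sh
# Sketch: warn when any device in the array is more than THRESHOLD percent allocated
# (assumes "btrfs filesystem show --raw" is available for byte output)
MOUNTPOINT=/mnt/durable   # adjust to your filesystem
THRESHOLD=85              # alert level in percent

# "btrfs filesystem show --raw" prints devid lines such as:
#   devid    1 size 10737418240 used 10737418240 path /dev/xvdh
btrfs filesystem show --raw "$MOUNTPOINT" | awk -v limit="$THRESHOLD" '
/devid/ {
    pct = 100 * $6 / $4                     # allocated bytes / device size
    if (pct >= limit)
        printf "WARNING: %s is %.0f%% allocated\n", $8, pct
}'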