Disk Space Mismatch: Investigating df vs du Discrepancy in Linux Filesystems


When checking disk usage on Linux systems, administrators often encounter puzzling discrepancies between df and du outputs. The scenario you described - where df reports significantly more used space than du - is particularly common with LVM-managed filesystems like /dev/mapper/vg00-var.

These utilities measure disk usage differently:

df -h  # Shows filesystem-level allocation
du -h  # Shows file-level summation

The gap between the two views usually comes from one or more of the following:

  • Deleted but still-open files: files removed from the directory tree but still held open by running processes
  • Filesystem overhead: journaling, metadata, and reserved blocks
  • Hidden allocations: snapshots, sparse files, or LVM thin provisioning
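
To see the gap precisely rather than in rounded human-readable units, the two views can be compared byte-for-byte. This is a minimal sketch assuming GNU coreutils versions of df and du:

# Filesystem-level view: bytes used and available according to the superblock
df -B1 --output=used,avail /var

# File-level view: bytes reachable by walking the tree (stay on one filesystem)
du -sxB1 /var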

To identify the missing space, try these commands:

# Check for deleted but open files
lsof +L1 | grep '/var'

# Verify LVM allocation
lvdisplay /dev/vg00/var
vgs vg00

# Examine filesystem details
tune2fs -l /dev/mapper/vg00-var
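
Building on the lsof check above, a rough total of the space tied up by deleted-but-open files can be computed by summing lsof's SIZE/OFF column. This is a sketch that assumes the default lsof column layout, where SIZE/OFF is field 7 once +L adds the NLINK column:

# Sum bytes still held by deleted-but-open files under /var
lsof +L1 /var | awk 'NR>1 {sum += $7} END {printf "%.1f MiB held by deleted files\n", sum/1048576}'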

If using thin provisioning, the allocated space might exceed what's visible in files:

# Check thin pool usage
lvs -o+data_percent,metadata_percent vg00
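
With thin provisioning, blocks freed inside the filesystem are not automatically returned to the pool unless discards are enabled; a manual trim can reclaim them. This is a sketch that assumes both the filesystem and the thin pool support discards:

# Ask the filesystem to discard unused blocks back to the thin pool
fstrim -v /var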

For the specific case of a 1.7GB discrepancy:

  1. Restart services holding deleted files
  2. Check for and clean up old snapshots (a sketch follows this list)
  3. Consider a filesystem check/repair
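
For step 2, old LVM snapshots can be spotted by their attributes and origin, then removed once confirmed unneeded. A sketch; the snapshot name below is a placeholder:

# Snapshots show an 's' in the attribute column and name their origin LV
lvs -o lv_name,lv_attr,origin,data_percent vg00

# Remove a snapshot that is no longer needed (hypothetical name)
lvremove /dev/vg00/var_snap_old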

For precise measurement at the block level:

# Count used blocks directly
dumpe2fs -h /dev/mapper/vg00-var | grep -i block
debugfs -R "stat /" /dev/mapper/vg00-var
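
From the dumpe2fs header, the usage df reports can be reproduced arithmetically as (block count - free blocks) x block size. A sketch, assuming an ext2/3/4 filesystem and the usual dumpe2fs field names:

# Derive used bytes from the superblock counters
dumpe2fs -h /dev/mapper/vg00-var 2>/dev/null | awk '
    /^Block count:/ {total = $3}
    /^Free blocks:/ {free = $3}
    /^Block size:/  {bs = $3}
    END {printf "%.1f GiB used at the block level\n", (total - free) * bs / 1073741824}'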

Remember that some space is always reserved for root (typically 5%), which explains part of the difference.
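
The reserved-block share can be turned into bytes directly from the tune2fs output; on a 4.0G filesystem the default 5% comes to roughly 200MB. A sketch, assuming ext4 and the usual tune2fs field names:

# Reserved-for-root space in bytes
tune2fs -l /dev/mapper/vg00-var | awk '
    /^Reserved block count:/ {r = $4}
    /^Block size:/           {b = $3}
    END {printf "%.0f MiB reserved for root\n", r * b / 1048576}'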


When working with Linux systems, many administrators encounter this puzzling situation:

# df -h /var
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-var  4.0G  3.8G  205M  95% /var

# du -kscxh /var/*
2.1G    total

The math doesn't add up: 2.1GB (du) + 205MB (free) comes to roughly 2.3GB against a 4.0GB filesystem, leaving about 1.7GB unaccounted for.

Several factors can cause this difference:

  • Deleted files still held by processes: the directory entries are gone, but open file handles keep the blocks allocated (demonstrated after this list)
  • Reserved blocks: Filesystem reserves space (typically 5%) for root
  • Journaling overhead: Space used by filesystem journals (ext3/4, xfs, etc.)
  • Hidden directories: Contents not accessible without root privileges
  • Sparse files: Files that appear larger than their actual disk usage
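
The first cause is easy to reproduce and watch, which makes it clear why df and du diverge. An illustrative sketch using a scratch file under /var/tmp; run it in an interactive shell so the %1 job reference works:

# Create a 100M file and keep it open with a background reader
dd if=/dev/zero of=/var/tmp/demo.img bs=1M count=100
tail -f /var/tmp/demo.img &

# Delete it: the directory entry disappears, the blocks do not
rm /var/tmp/demo.img
df -h /var          # still shows ~100M extra as used
sudo du -shx /var   # no longer counts the file

# Stop the reader and the space is released
kill %1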

Here are some diagnostic commands to pinpoint the issue:

# Check for deleted but still open files
sudo lsof +L1 /var

# View reserved blocks percentage
sudo tune2fs -l /dev/mapper/vg00-var | grep Reserved

# Calculate journal size (for ext filesystems)
sudo dumpe2fs -h /dev/mapper/vg00-var | grep Journal

# Check for sparse files
find /var -type f -printf "%S\t%p\n" | awk '$1 < 1.0 {print}'
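
To quantify how much sparse files distort the picture, compare du's allocated view against its apparent-size view. A sketch, assuming GNU du:

# Allocated blocks vs. apparent (logical) size for the whole tree
sudo du -shx /var
sudo du -shx --apparent-size /var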

If the issue is caused by deleted-but-open files:

# List processes holding deleted files (+L1 already limits output to link count 0)
sudo lsof +L1 /var | awk 'NR>1 {print $2}' | sort -u

# Restart affected services (example for Apache)
sudo systemctl restart apache2

# Alternative: find and kill the processes (last resort; prefer a clean restart)
for pid in $(sudo lsof +L1 /var | awk 'NR>1 {print $2}' | sort -u); do
    sudo kill -9 "$pid"
done
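
If restarting or killing the process is not an option, the space can often be reclaimed by truncating the deleted file through /proc while the process keeps running. A sketch; the PID and FD below are placeholders taken from the lsof output:

# Identify the file descriptor of the deleted file for a given PID (placeholder)
sudo ls -l /proc/<PID>/fd | grep deleted

# Truncate it to zero length to free the blocks (placeholder PID/FD)
sudo sh -c ': > /proc/<PID>/fd/<FD>'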

Different filesystems handle space differently:

  • ext4: Uses 5% reserved blocks by default (adjust with tune2fs -m 1 /dev/device)
  • XFS: Has dynamic journal sizing (check with xfs_info /mountpoint)
  • Btrfs: Uses copy-on-write which may show different usage patterns
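
For the ext4 case above, lowering the reserved-block percentage is a quick, low-risk way to claw back space; dropping from 5% to 1% on a 4.0G filesystem returns roughly 160MB of available space. A sketch; choose a percentage appropriate for the volume's role:

# Reduce reserved blocks from the default 5% to 1%
sudo tune2fs -m 1 /dev/mapper/vg00-var

# Confirm the new reservation
sudo tune2fs -l /dev/mapper/vg00-var | grep 'Reserved block count'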

For deeper investigation, try these techniques:

# Explore usage interactively with ncdu (-x stays on the /var filesystem)
sudo ncdu -x /var

# Check for filesystem corruption (read-only; results on a mounted filesystem are only indicative)
sudo fsck -n /dev/mapper/vg00-var

# Reveal files hidden beneath nested mount points by bind-mounting /var elsewhere
sudo mount --bind /var /mnt
sudo du -shx /mnt
sudo umount /mnt

Remember that tools like df report filesystem-level usage while du shows file-level usage - they're designed to measure different things.