Linux Disk Space Mystery: Why DF Shows Full When DU Reports Available Space


Every Linux sysadmin has faced this head-scratcher: df -h reports 100% disk usage while du -sh /* accounts for far less than the filesystem's size. Here's how to systematically investigate and resolve this common but frustrating issue.

# Typical conflicting outputs
$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       50G   50G     0 100% /

$ du -sh /*
4.0G    /home
2.1G    /var
1.8G    /usr
... Total far less than 50G

1. Deleted Files Held by Running Processes

# Find processes holding deleted files
$ lsof +L1 | grep deleted
apache2  1234  www-data    4u   REG    8,1  1073741824     0  123456 /var/log/apache.log (deleted)
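
The space is only released once the last file descriptor pointing at the deleted file is closed, so df keeps counting blocks that du can no longer reach through any path. You can reproduce the effect yourself; the file and path below are purely illustrative:

# Terminal 1: create a large file and keep it open
$ dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024
$ tail -f /tmp/bigfile

# Terminal 2: delete it while it is still open
$ rm /tmp/bigfile
$ df -h /tmp    # space is still consumed
$ du -sh /tmp   # the file is gone from du's view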

2. Mount Point Issues

# Check for hidden mounts
$ mount | grep -v "^/dev"
$ findmnt --df /path
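
A related trap: data written to a directory before another filesystem was mounted on top of it still occupies space on the underlying disk, so df counts it but du, which only sees the mounted filesystem, does not. A bind mount lets you look underneath without unmounting anything; /mnt/root-check is just a placeholder path:

# Inspect what lives directly on / beneath existing mount points
$ mkdir -p /mnt/root-check
$ mount --bind / /mnt/root-check
$ du -sh /mnt/root-check/var    # size of /var as stored on the root filesystem itself
$ umount /mnt/root-check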

3. LVM or Filesystem Corruption

# Check filesystem consistency (read-only check; results on a mounted filesystem may include false positives)
$ fsck -n /dev/sda1
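
If the volume lives on LVM, also confirm how the volume group and any thin pools are allocated; a nearly exhausted thin pool behaves like a full disk even when the filesystem on top looks fine. A quick sketch, assuming the standard LVM2 tools:

# Summarize volume groups and logical volumes (Data% reflects thin pool usage)
$ vgs
$ lvs -a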

Finding Large Hidden Files

# Search for large files not accounted for
$ ncdu -x /
$ find / -xdev -type f -size +100M -exec ls -lh {} \;
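
When ncdu isn't installed, plain du gives a similar top-down summary; the depth and result count here are arbitrary choices:

# Largest directories on the root filesystem, two levels deep
$ du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20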

Checking for Sparse Files

# Identify sparse files (a sparseness ratio below 1.0 means the file contains holes)
$ find / -xdev -type f -printf "%S\t%p\n" | awk '$1 < 1.0'
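
For any suspicious file, comparing its apparent size with the blocks it actually occupies shows how sparse it really is; the image file below is only an example name:

# Apparent size vs. allocated size
$ du -h --apparent-size /var/lib/example.img
$ du -h /var/lib/example.img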

Freeing Up Space Immediately

# Clear system logs (if safe)
$ journalctl --vacuum-size=100M
$ rm -f /var/log/*.gz
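
Deleting a log that a service still has open returns nothing until the service closes it (see cause 1 above); truncating the file in place frees the space immediately. The path is just an example:

# Truncate an active log without removing it
$ truncate -s 0 /var/log/apache2/access.log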

Preventing Future Issues

# Set up hourly usage logging (add the line below via crontab -e)
$ crontab -e
0 * * * * df -h >> /var/log/disk_usage.log

For XFS users:

# Check for fragmentation
$ xfs_db -c frag -r /dev/sda1
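
When the reported fragmentation factor is high, xfs_fsr can reorganize files while the filesystem stays mounted; note that on a nearly full filesystem it may have little free space to work with:

# Defragment a mounted XFS filesystem
$ xfs_fsr /dev/sda1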

For ext4 users:

# Check reserved blocks
$ tune2fs -l /dev/sda1 | grep "Reserved block count"
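
ext filesystems reserve roughly 5% of blocks for root by default; df's Avail column excludes them, which is why Used and Avail don't add up to Size and why the filesystem can hit 0 available well before it is truly full. On a pure data volume the reservation can be lowered; 1% below is just an example value:

# Reduce the reserved block percentage (keep a reserve on the root filesystem)
$ tune2fs -m 1 /dev/sda1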

Why df and du Disagree

df can report a disk as full while du, summing the same filesystem, shows significantly less usage, because the two tools measure different things.

The key differences between these commands:

  • df reports filesystem-level statistics from the kernel
  • du calculates disk usage by traversing directories
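
You can look at both viewpoints directly: stat -f queries the same kernel filesystem statistics df uses, while du walks the directory tree, so anything unreachable through the tree (deleted-but-open files, data hidden under mount points) only shows up in the first:

# Filesystem-level view (what df sees)
stat -f /

# Directory-traversal view (what du sees)
du -sx /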

Quick checks for the most frequent causes:

# Check for deleted files still held by processes
lsof +L1 | grep deleted

# Look for large files that du may not be surfacing (stay on one filesystem)
find / -xdev -type f -size +100M -exec ls -lh {} \;

Advanced diagnostic commands:

# Check filesystem journal size (ext3/4)
dumpe2fs -h /dev/sda3 | grep Journal

# Verify inode usage
df -i

# Check for filesystem errors
fsck -n /dev/sda3

# Find large (>1 MiB) open-but-deleted files (requires lsof)
lsof -nP +L1 | awk '$5 == "REG" && $7 > 1048576 {print}' | sort -k7 -nr
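
If df -i shows inodes exhausted rather than blocks, the culprit is usually a directory stuffed with huge numbers of small files (session stores, mail queues, cache trees). A rough way to locate it, assuming GNU find:

# Count files per directory and list the heaviest offenders
find /var -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -20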

When you identify the cause:

# For deleted files held by processes:
# Option 1: Restart the holding process
# Option 2: Clear space by truncating the file
: > /proc/[pid]/fd/[fd-number]
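
To see which services are worth restarting (Option 1), list the processes that still hold deleted files; the systemctl line assumes a systemd host, and apache2 is only an example:

# Processes still holding deleted files
lsof +L1 2>/dev/null | awk '/deleted/ {print $1, $2}' | sort -u
# Restart whichever service the output points to, e.g.:
systemctl restart apache2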

# For an oversized filesystem journal (ext3/4): remove and recreate it smaller
# (requires the filesystem to be unmounted or mounted read-only)
tune2fs -O ^has_journal /dev/sda3
tune2fs -J size=100M /dev/sda3

Add these to your monitoring:

#!/bin/bash
# Daily disk health check script
THRESHOLD=90
CURRENT=$(df / --output=pcent | tail -1 | tr -d '% ')
if [ "$CURRENT" -gt "$THRESHOLD" ]; then
    echo "Alert: Disk usage at ${CURRENT}%" | mail -s "Disk Alert" admin@example.com
fi
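
To run the check daily, the script can be saved somewhere on root's PATH and scheduled from cron; the path and time below are only suggestions:

# Install and schedule (add the cron line via crontab -e)
chmod +x /usr/local/bin/disk_check.sh
0 6 * * * /usr/local/bin/disk_check.sh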