Debugging Disk Space Usage: Why df and du Show Mismatched Results on Linux EC2 Instances


When your Linux server's root partition shows 98% usage in df -h but directory sizes don't add up in du -sh /*, you're facing one of these common scenarios:

# Quick verification commands:
df -h              # Shows filesystem usage
du -sh /*          # Shows directory sizes
lsof +L1           # Lists deleted but still open files

The difference between df and du typically stems from:

  • Deleted files still held by running processes
  • Hidden filesystems or mount points
  • Database tablespaces or log files
  • Docker/container storage (quick check shown below)
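
If Docker is in use, its image, container, and volume storage under /var/lib/docker is a frequent culprit. A minimal check, assuming the Docker CLI is installed and the daemon is running:

docker system df       # space used by images, containers, local volumes, build cache
docker system prune    # interactively remove stopped containers and dangling images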

1. Check for Deleted but Open Files

lsof +L1 | grep deleted
# Example output:
# mysqld  1234 mysql 5r REG 8,1 4294967296 123456 /var/lib/mysql/ibdata1 (deleted)

This reveals processes holding references to deleted files. Restarting the process will free the space.
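
If restarting is not an option, you can usually release the space by truncating the deleted file through its /proc file descriptor. Only do this for files whose contents you can afford to lose (a rotated log, never a database tablespace). A sketch; 1234 and 5 stand in for the PID and FD columns of the offending lsof line:

# "5r" in the FD column means file descriptor 5 of PID 1234
: > /proc/1234/fd/5    # truncates the underlying (deleted) file; space is freed immediately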

2. Verify Mount Points

mount | grep -E '/dev/(sd|xvd|nvme)'
findmnt --real
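
Files written to a directory before another filesystem was mounted over it count in df but are invisible to du. A bind mount exposes them; this sketch uses /mnt/rootfs as a scratch mount point (any empty directory works):

sudo mkdir -p /mnt/rootfs
sudo mount --bind / /mnt/rootfs
sudo du -sh /mnt/rootfs/*    # includes files hidden underneath mount points
sudo umount /mnt/rootfs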

3. Advanced Space Analysis Tools

# Install ncdu for interactive analysis
apt-get install ncdu
ncdu -x /

On AWS instances, additional factors to check:

# Check EBS volume allocations
lsblk
# List your EBS snapshots (stored separately; they do not consume instance disk space)
aws ec2 describe-snapshots --owner-ids self --region us-east-1
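
To see exactly which EBS volumes are attached to this instance, you can filter describe-volumes by the instance ID from the metadata service. A sketch assuming the AWS CLI is configured with ec2:DescribeVolumes permission (IMDSv1-style metadata call shown; IMDSv2 needs a session token):

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values="$INSTANCE_ID" \
    --query 'Volumes[].{ID:VolumeId,Size:Size,Device:Attachments[0].Device}' \
    --output table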

For database servers (common in EC2):

-- MySQL space check
SELECT table_schema "Database",
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) "Size (MB)"
FROM information_schema.tables
GROUP BY table_schema;

-- PostgreSQL space check
SELECT pg_size_pretty(pg_database_size(current_database()));
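
Both queries can be run non-interactively from the same SSH session; a sketch assuming local socket access with the default mysql and postgres accounts:

mysql -e "SELECT table_schema, ROUND(SUM(data_length+index_length)/1024/1024,2) AS size_mb FROM information_schema.tables GROUP BY table_schema ORDER BY size_mb DESC;"
sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size(current_database()));"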

Schedule these cron jobs to catch problems before they become outages:

# Daily disk check
0 3 * * * df -h > /var/log/disk_usage.log
# Weekly file cleanup
0 4 * * 0 find /var/log -name "*.log" -mtime +30 -delete

Here is what the mismatch typically looks like in practice:

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.9G  9.1G  284M  98% /

Versus:

du -sh /*
31G     /backups
5.5M    /bin
136K    /boot
[...]
Total: ~45G (but partition is only 9.9G)
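
A du total larger than the partition itself usually means du crossed into other mounted filesystems (here /backups is almost certainly its own mount). Restricting du to a single filesystem makes the comparison with df meaningful:

sudo du -xh --max-depth=1 / 2>/dev/null | sort -h    # -x stays on the root filesystem
findmnt -T /backups                                  # shows which filesystem /backups lives on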

1. Deleted Files Still Held by Processes
Check for processes holding deleted files:

lsof | grep deleted
COMMAND     PID   USER   FD   TYPE DEVICE SIZE/OFF     NODE NAME
java      25682   root    1w   REG   8,1 2147483648 1234567 /var/log/app.log (deleted)
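
To estimate how much space deleted-but-open files hold in total, sum the SIZE/OFF column. A rough sketch; column positions can differ between lsof versions, and a file held by several processes is counted more than once:

lsof +L1 2>/dev/null | awk '/\(deleted\)/ {sum += $7} END {printf "%.1f GiB held by deleted files\n", sum/1073741824}'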

2. LVM or Filesystem Overhead
ext2/3/4 filesystems reserve about 5% of blocks for root by default, which is one reason Used + Avail rarely adds up to Size in df output. Check with these commands:

tune2fs -l /dev/sda1 | grep -i 'block count'
dumpe2fs -h /dev/sda1 | grep -i 'reserved block'
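
If the reservation is larger than you need on a data volume, tune2fs can shrink it; a sketch (adjust the device name, and be cautious about changing it on the root filesystem):

sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'   # current reservation
sudo tune2fs -m 1 /dev/sda1                                  # reduce reserved blocks to 1%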

Using ncdu for Visual Analysis

sudo apt install ncdu
ncdu -x /

Checking for Sparse Files

find / -xdev -type f -printf '%S\t%p\n' | sort -n
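
To confirm that a suspect file is sparse, compare its apparent size with the blocks actually allocated (/path/to/file is a placeholder):

du -h --apparent-size /path/to/file    # size the application sees
du -h /path/to/file                    # blocks actually allocated on disk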

1. EBS Volume Investigation

sudo lsblk -f
sudo file -s /dev/xvda1
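
A related EC2 check: if the EBS volume was enlarged in the console but the partition and filesystem were never grown, df keeps reporting the old size. A sketch for an ext4 root on /dev/xvda1 (adjust device names; use xfs_growfs for XFS):

lsblk /dev/xvda              # compare disk size with partition size
sudo growpart /dev/xvda 1    # extend partition 1 to fill the disk
sudo resize2fs /dev/xvda1    # extend the ext4 filesystem to fill the partition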

2. Reclaiming Unused Blocks (TRIM)

sudo fstrim -v /
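
fstrim does not change what df reports; it tells the storage layer which blocks are free, which keeps EBS snapshots and thin-provisioned backends lean. It only works when the device advertises discard support (NVMe instance-store volumes do; many EBS volume types do not), so check first:

lsblk --discard    # non-zero DISC-GRAN / DISC-MAX means discard is supported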

For Deleted Files Still in Use

# Identify the PID
lsof +L1
# Clear space by restarting the service
sudo systemctl restart servicename

Filesystem Repair

The root filesystem cannot be unmounted while the system is running, so never run fsck against a mounted /. For a secondary volume, unmount it first (/data and /dev/sdb1 below are examples):

sudo umount /data
sudo fsck -y /dev/sdb1    # only ever run fsck on an unmounted filesystem
sudo mount -a
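
For the root filesystem itself, schedule the check for the next boot instead (or, on EC2, stop the instance, attach the volume to a helper instance, and run fsck there). A sketch for systemd-based distributions using the standard fsck.mode kernel parameter:

# Add fsck.mode=force to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
sudo update-grub    # Debian/Ubuntu; use grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL-family
sudo reboot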

Add these to your monitoring:

# Daily disk usage report
df -h > /var/log/disk_usage_$(date +\%Y\%m\%d).log
# Alert script
if [ $(df / --output=pcent | tr -dc '0-9') -gt 90 ]; then
    echo "Disk space critical" | mail -s "Disk Alert" admin@example.com
fi