When managing Linux servers, you might encounter a puzzling situation where df reports significantly more disk usage than du. Here's a typical example:
# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       270G  240G   17G  94% /
# du -hxs /
124G    /
This 116GB discrepancy indicates something is consuming space that du isn't accounting for.
Several factors can cause this difference:
- Deleted files held by running processes
- Filesystem journal or metadata overhead
- Reserved blocks (especially with ext3/4)
- Filesystem corruption
- Mount point overlaps (files hidden under a mount point; see the bind-mount check below)
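That last cause is easy to overlook: files written to a directory before another filesystem was mounted on top of it still occupy space, but du can no longer see them. A bind mount exposes the underlying directory tree so you can measure it. This is a generic sketch; /mnt/rootfs is just a temporary mount point and /var is only an example of a directory that might have a filesystem mounted over it:
# Bind-mount the root filesystem so directories hidden under other mounts become visible
mkdir -p /mnt/rootfs
mount --bind / /mnt/rootfs
# Compare what the root filesystem holds under /var with the size of whatever is mounted at /var
du -hxs /mnt/rootfs/var /var
# Clean up when done
umount /mnt/rootfs
rmdir /mnt/rootfs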
The most common culprit is deleted files still held open by processes. Check with:
# lsof | grep deleted
java 1234 user1 4u REG 8,3 4294967296 1234 /var/log/app.log (deleted)
To find the total size of such files, sum the SIZE/OFF column (field 7 in default lsof output); note that a file held open by more than one process will be counted multiple times:
# lsof -nP | awk '/deleted/ {sum += $7} END {print sum " bytes"}'
For ext3/4 filesystems, examine reserved blocks and journal:
# tune2fs -l /dev/sda3 | grep -E "Reserved block count|Journal"
Reserved block count: 5400000
Journal inode: 8
Calculate reserved space (reserved block count * 4096-byte block size):
# echo $((5400000 * 4096 / 1024 / 1024 / 1024))GB
20GB
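Reserved blocks also explain why df's Used and Avail columns don't add up to Size: the difference (Size - Used - Avail) is free space that only root may consume. You can print that gap directly; this sketch assumes GNU df, which supports --output:
# Size - Used - Avail = free space reserved for root, in GiB
df -B1 --output=size,used,avail / | awk 'NR==2 {printf "%.1f GiB\n", ($1-$2-$3)/2^30}'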
For deeper analysis, use these approaches:
# Find largest directories (excluding mounted filesystems):
du -hx --max-depth=1 / | sort -h
# Check for filesystem errors:
fsck -n /dev/sda3
# Compare inode usage:
df -i /
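If df -i shows inode pressure rather than block pressure, GNU du can also count inodes per directory, which helps locate trees containing millions of small files (requires GNU coreutils 8.22 or later):
# Rank top-level directories by inode (file) count instead of size
du --inodes -x --max-depth=1 / 2>/dev/null | sort -n | tail -15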
Depending on your findings:
- Restart processes holding deleted files
- Clear old log files in /var/log
- Adjust reserved blocks percentage:
tune2fs -m 1 /dev/sda3
- Consider filesystem maintenance or expansion
Remember that some discrepancy is normal due to filesystem overhead, but large differences warrant investigation.
Every Linux sysadmin has encountered this scenario: df reports your filesystem is nearly full, while du shows significantly less usage. Let's break down the technical investigation.
First, establish your baseline measurements:
# Get filesystem usage
df -h /
# Get actual disk usage
du -hxs /
# Check for deleted files still in use
lsof +L1 | grep '/.*deleted'
1. Open Deleted Files (most frequent cause):
# Find processes holding deleted files
sudo lsof -nP +L1
2. Filesystem Journal/Reserved Space (ext3/ext4):
# Check reserved blocks (typically 5%)
tune2fs -l /dev/sda3 | grep -i "block count"
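To turn those figures into a human-readable number without manual arithmetic, you can combine the reserved block count with the block size; a small sketch, adjust the device name to match your system:
# Reserved blocks * block size, reported in GiB
tune2fs -l /dev/sda3 | awk '/^Reserved block count/ {r=$4} /^Block size/ {b=$3} END {printf "%.1f GiB reserved\n", r*b/2^30}'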
3. LVM Snapshots or Thin Provisioning:
# Check for LVM volumes
lvdisplay
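Snapshots and thin pools consume space at the volume-manager layer that neither df nor du inside the filesystem will show. The lvs report fields below are standard LVM2 column names for snapshot and thin-pool fill levels:
# Fill level of snapshots and thin pools alongside each logical volume
lvs -o lv_name,vg_name,lv_size,data_percent,snap_percent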
For deeper analysis, try these approaches:
# Find largest files/directories (excluding mounted filesystems)
du -hx --max-depth=1 / 2>/dev/null | sort -hr | head -20
# Alternative disk usage analyzer
ncdu -x /
For containerized environments:
# Check Docker/container storage
docker system df
podman system df
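Container runtimes also tend to accumulate large json-file logs. If the docker/podman totals look suspicious, checking per-container log sizes is a quick follow-up; this sketch assumes the default /var/lib/docker data root and json-file logging driver:
# Largest per-container JSON log files under the default Docker data root
du -hs /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -hr | head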
Here's a real-world debugging session:
# Find the rogue process
sudo lsof -nP | grep deleted | grep apache2
# Output:
apache2 1234 www-data 4u REG 253,3 2147483648 123456 /var/log/apache2/access.log (deleted)
# Solution: gracefully restart Apache
sudo systemctl restart apache2
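If restarting the service isn't an option, a common alternative is to truncate the deleted file through the process's /proc file descriptor entry, which frees the space immediately without stopping the process. The PID (1234) and FD number (4, from the "4u" column) come from the lsof output above:
# Truncate the deleted-but-open file via its /proc entry
: > /proc/1234/fd/4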
For ext3/ext4 filesystems specifically:
# Check filesystem errors (read-only; a mounted filesystem may report spurious errors)
fsck -nv /dev/sda3
# View superblock info
dumpe2fs -h /dev/sda3
Create a cron job to alert on discrepancies:
#!/bin/bash
THRESHOLD=10  # Percentage difference to alert on

# Both values are reported in 1 KiB blocks, so they are directly comparable
DU_USAGE=$(du -sx / 2>/dev/null | awk '{print $1}')
DF_USAGE=$(df --output=used / | tail -1)

DIFF=$(( (DU_USAGE - DF_USAGE) * 100 / DF_USAGE ))
if [ "${DIFF#-}" -gt "$THRESHOLD" ]; then
    echo "WARNING: Disk usage discrepancy detected" | mail -s "Storage Alert" admin@example.com
fi
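To schedule it, save the script somewhere like /usr/local/bin/disk-discrepancy-check.sh (a hypothetical path), make it executable, and add a cron entry, for example a daily run:
chmod +x /usr/local/bin/disk-discrepancy-check.sh
# /etc/cron.d/disk-discrepancy -- run daily at 06:00 as root
0 6 * * * root /usr/local/bin/disk-discrepancy-check.sh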