Resolving “df Shows Full Disk but ncdu Reports Low Usage” Discrepancy in Linux Systems


Many Linux administrators run into this puzzling scenario: df reports nearly full storage while disk usage analyzers like ncdu show significantly lower consumption. It typically occurs on ext3/ext4 filesystems and cloud instances where deleted files remain allocated because running processes still hold them open.

The discrepancy stems from how Linux handles deleted files that are still held open by processes:

# Check for deleted but held files
sudo lsof | grep deleted

# Example output:
apache2  1234  www-data   4u   REG   8,1  1048576   1234 /var/log/apache2/access.log (deleted)
mysql    5678    mysql    5u   REG   8,1 52428800   5678 /var/lib/mysql/ibdata1 (deleted)
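
To see the mechanism in action, you can reproduce it with a throwaway file; a minimal sketch, assuming a scratch directory on the affected filesystem (note that /tmp is tmpfs on some distributions, so pick a path on the disk you are measuring):

# Create a file, hold it open, delete it, and watch df disagree with du
dd if=/dev/zero of=/var/tmp/bigfile bs=1M count=512
tail -f /var/tmp/bigfile > /dev/null &
HOLDER=$!
rm /var/tmp/bigfile           # the name is gone, but the blocks are not
df -h /var/tmp                # usage has not dropped
sudo lsof +L1 | grep bigfile  # the deleted-but-open file shows up here
kill "$HOLDER"                # once the holder exits...
df -h /var/tmp                # ...the space is finally released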

1. Clearing Held Files:

# Option A: Restart holding services
sudo systemctl restart apache2 mysql

# Option B: Kill the specific holder by PID (taken from the lsof output
# above; lsof -t <path> cannot resolve a file that has already been deleted)
sudo kill 1234            # escalate to -9 only if SIGTERM is ignored
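
If neither a restart nor a kill is acceptable, a third option (a standard trick, not part of the steps above) is to truncate the deleted file through the process's file-descriptor entry in /proc; the PID (1234) and FD number (4) below come from the example lsof output:

# Option C: Truncate the deleted file in place, freeing the space
# without stopping the process (FD number from lsof's FD column)
sudo sh -c ': > /proc/1234/fd/4'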

2. Checking for Reserved Blocks:

# View reserved blocks percentage
sudo tune2fs -l /dev/xvda1 | grep "Reserved block count"

# Reduce reserved blocks from the default 5% to 1%
# (safest on data volumes; be cautious on the root filesystem)
sudo tune2fs -m 1 /dev/xvda1
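
To see how much space the reservation actually represents, multiply the reserved block count by the block size; a small sketch using the same tune2fs output:

# Translate reserved blocks into megabytes
RESERVED=$(sudo tune2fs -l /dev/xvda1 | awk -F: '/Reserved block count/ {gsub(/ /, "", $2); print $2}')
BLKSIZE=$(sudo tune2fs -l /dev/xvda1 | awk -F: '/Block size/ {gsub(/ /, "", $2); print $2}')
echo "$(( RESERVED * BLKSIZE / 1024 / 1024 )) MiB reserved for root"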

For comprehensive analysis:

# Show all mounted filesystems with inode usage
df -iTh

# Find the largest open deleted files (+L1 already limits output to
# link count < 1, i.e. deleted; column 7 is SIZE/OFF)
sudo lsof +L1 | sort -n -k7 | tail -20

# Check for filesystem errors (read-only check; expect false positives
# on a mounted filesystem, so verify from rescue mode before repairing)
sudo fsck -nv /dev/xvda1

To prevent recurrence:

  • Implement log rotation for services like Apache and MySQL
  • Monitor disk usage regularly with tools like du -sh /*
  • Consider using LVM for easier disk management in cloud environments
  • Set up alerts when disk usage exceeds 80% (see the sketch below)
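
A minimal alert sketch for the last point; the 80% threshold, the root recipient, and the mail command are assumptions to adapt to your own monitoring stack:

#!/bin/bash
# Mail a warning for any real filesystem above the threshold
# (threshold and recipient are placeholders)
THRESHOLD=80
df -P -x tmpfs -x devtmpfs | awk -v t="$THRESHOLD" \
    'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 > t) print $6, $5 }' |
while read -r mountpoint usage; do
    echo "Disk usage on ${mountpoint} is ${usage}%" | mail -s "Disk alert on $(hostname)" root
done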

For AWS instances using EBS volumes:

# List your EBS snapshots (orphaned snapshots add cost, though not local disk usage)
aws ec2 describe-snapshots --owner-ids self

# Resize EBS volume if needed (after taking snapshot)
aws ec2 modify-volume --volume-id vol-123456 --size 20
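
Enlarging the volume alone does not grow the filesystem inside it; assuming an ext4 root partition at /dev/xvda1, you still need to expand the partition and the filesystem from within the instance:

# Grow the partition table entry, then the ext4 filesystem, online
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1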

To make this concrete: an Ubuntu server that reports 98% disk usage via df -Th while ncdu shows only 20% is facing one of Linux's classic storage mysteries. The typical culprits are walked through below, starting from the df output that raises the alarm:

# Sample df output showing the discrepancy
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/xvda1    ext4    7.9G  7.7G  172M  98% /
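
For the ncdu side of the comparison, scan the same filesystem without crossing mount points (ncdu's -x flag keeps the scan on one filesystem):

# Scan only the root filesystem; -x avoids counting other mounts
sudo ncdu -x /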

Deleted files held by processes: the most frequent cause, as covered above. Files deleted while a running process still holds them open continue to occupy blocks until the last file descriptor is closed.

# Check for such files using lsof
sudo lsof +L1 | grep deleted

# Sample output:
apache2   1234 www-data    4w   REG  254,1 2147483648     0   5678 /var/log/apache2/access.log (deleted)

LVM snapshots or thin provisioning: Especially relevant in cloud environments like AWS EC2.
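
When LVM is involved, snapshot and thin-pool fill levels are visible through the standard lvs reporting fields; a quick check, assuming the lvm2 tools are installed:

# Show logical volumes with data/metadata fill percentages
sudo lvs -a -o +data_percent,metadata_percent

# Show remaining free space in each volume group
sudo vgs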

Checking filesystem overhead:

sudo tune2fs -l /dev/xvda1 | grep -i 'block count'

Finding large hidden files:

# Find files larger than 100MB
sudo find / -xdev -type f -size +100M -exec ls -lh {} +

# Check for sparse files (%S prints the allocated/apparent size ratio;
# values below 1.0 indicate sparseness)
sudo find / -xdev -type f -printf '%S\t%p\n' | awk '$1 < 1.0'
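
To see why sparse files skew size reporting, create one and compare apparent size against allocated blocks (the file name here is just an example):

# A 1 GiB sparse file: ls reports the apparent size, du the allocated size
truncate -s 1G /var/tmp/sparse-demo.img
ls -lh /var/tmp/sparse-demo.img   # shows 1.0G
du -h /var/tmp/sparse-demo.img    # shows 0 (no blocks allocated yet)
rm /var/tmp/sparse-demo.img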

In AWS EC2 instances, remember to check:

  • Instance store volumes (if used)
  • EBS volume snapshots in progress
  • CloudWatch logs accumulation (see the check below)
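
For the CloudWatch item, the stored size of each log group is reported directly by the logs API; a quick check, assuming the AWS CLI is configured:

# List log groups with their stored bytes
aws logs describe-log-groups \
    --query 'logGroups[*].[logGroupName,storedBytes]' --output table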

For recurring issues, consider setting up a periodic cleanup script:

#!/bin/bash
# Clean up common space hogs (run as root, e.g. from a nightly cron job)
journalctl --vacuum-size=100M        # cap the systemd journal at 100 MB
apt-get clean                        # drop cached .deb packages
docker system prune -f 2>/dev/null   # remove unused Docker data, if Docker is installed
rm -rf /tmp/*                        # caution: removes files running apps may still expect
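
To run the script unattended, an entry along these lines in root's crontab would do (the installation path and schedule are placeholders):

# m h dom mon dow  command (edit with: sudo crontab -e)
30 3 * * * /usr/local/bin/disk-cleanup.sh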