Debugging “/tmp Full but Empty” Issue on CentOS: Hidden Files and Process Locks Explained



When your df -h shows 100% utilization of /tmp but ls reveals an apparently empty directory, you're facing one of Linux's classic head-scratchers. This commonly occurs for a handful of reasons; the following checks narrow them down:

# Check inode usage (often overlooked)
df -i /tmp

# Alternative directory listing showing hidden files
ls -la /tmp

# Find processes holding deleted files
lsof +L1 /tmp | grep deleted

Based on the provided system output where /tmp is a separate 97MB partition, these are the most likely causes:

  • Deleted-but-open files: applications wrote temp files that were since deleted, but the space remains occupied by running processes that still hold them open (reproduced in the sketch below)
  • Hidden cache directories: dot-files and dot-directories that a plain ls omits; the ls -la above reveals them
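
The first cause is easy to reproduce. A minimal sketch for a scratch system (the file name and size are arbitrary):

# Hold a deleted file open and watch df and ls disagree
exec 3>/tmp/demo.bin                # keep fd 3 open on the file
dd if=/dev/zero of=/tmp/demo.bin bs=1M count=50 2>/dev/null
rm /tmp/demo.bin                    # unlinked: invisible to ls...
df -h /tmp                          # ...yet the 50MB is still allocated
exec 3>&-                           # closing the fd finally frees the space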

    For immediate relief, try these commands in sequence:

    # Safe cleanup of old files
    find /tmp -type f -atime +1 -delete
    
    # Kill processes holding deleted files (be careful!)
    lsof +L1 /tmp | awk '/deleted/ {print $2}' | xargs -r kill -9
    
    # Alternative on systemd-based systems (CentOS 7+; not available on CentOS 6)
    systemd-tmpfiles --clean
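
    If kill -9 feels too blunt, here is a gentler sketch that tries SIGTERM first and only force-kills survivors (same lsof filter as above):

    pids=$(lsof +L1 /tmp | awk '/deleted/ {print $2}' | sort -u)
    [ -n "$pids" ] && kill $pids && sleep 5
    for p in $pids; do kill -0 "$p" 2>/dev/null && kill -9 "$p"; done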
    

    For a more surgical approach when dealing with specific applications:

    # MySQL example (frees binary-log space; MySQL's tmpdir temp tables are what usually land in /tmp)
    mysql -e "PURGE BINARY LOGS BEFORE NOW();"
    
    # PHP session cleanup
    find /tmp -name "sess_*" -mtime +7 -delete
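
    Before deleting sess_* files, it can be worth confirming where PHP actually stores sessions, since session.save_path may point elsewhere (an empty result means the built-in default, normally /tmp):

    php -r 'echo session_save_path(), PHP_EOL;'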
    

    Add this line to /etc/fstab to serve /tmp from tmpfs rather than the small disk partition:

    tmpfs /tmp tmpfs rw,nosuid,nodev,noexec,size=2G 0 0
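
    To apply the change without a reboot (this assumes nothing still has files open in /tmp, or the umount will fail):

    umount /tmp && mount /tmp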
    

    Or implement a daily cleanup cron job (the six-field line below includes a user column, so it belongs in /etc/crontab or a file under /etc/cron.d/):

    0 3 * * * root find /tmp -type f -atime +1 -delete
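
    On CentOS the stock tool for this job is tmpwatch, which ships with its own daily cron script; a roughly equivalent invocation (tmpwatch tests atime by default, and a bare number means hours):

    tmpwatch 24 /tmp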
    

    When standard tools don't reveal the issue, try these deeper inspection methods:

    # Check for mount namespace conflicts
    mount | grep tmp
    
    # Inspect file descriptors of running processes (run as root to see them all)
    for pid in /proc/[0-9]*; do
      ls -la "$pid"/fd 2>/dev/null | grep /tmp | sed "s|^|$pid: |"
    done
    
    # Audit filesystem events in real time (-p war = write, attribute-change, read)
    auditctl -w /tmp -p war -k tmp_monitor
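
    With the watch in place, matching events can be pulled back out with ausearch, from the same audit package:

    ausearch -k tmp_monitor -ts recent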
    

    
    To recap the diagnosis: when df -h shows 100% usage but ls reveals an empty directory on CentOS 6.3, the cause is typically one of:
    
    1. Deleted files held by running processes
    2. Mount point confusion
    3. Filesystem corruption
    
    
    First, verify actual disk usage with:
    
    lsof +L1 /tmp | grep deleted
    sudo du -shx /tmp/*   # note: this glob skips dot-files; 'sudo du -shx /tmp' totals everything
    
    
    For mount point verification:
    
    mount | grep /tmp
    findmnt /tmp   # findmnt may be missing on CentOS 6's util-linux-ng; the mount output above suffices
    
    
    
    When processes hold open handles to deleted files, they continue occupying space. Try this comprehensive cleanup:
    
    
    # List processes holding deleted files
    sudo lsof -nP +L1 | awk '/\/tmp/ && /deleted/ {print $2}' | sort -u
    
    # Alternative with file descriptors (the glob must expand as root, hence bash -c)
    sudo bash -c 'ls -la /proc/*/fd/* 2>/dev/null' | grep -E '/tmp/|deleted'
    
    # Safe cleanup procedure: stop writers first, then clear the directory
    sudo service httpd stop   # example service
    sudo rm -rvf /tmp/*       # -r also removes subdirectories; double-check the path first
    sudo service httpd start
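
    When a service cannot be restarted, the space can often be reclaimed without killing anything by truncating the deleted file through its /proc entry. A sketch; <PID> and <FD> are placeholders taken from the lsof output above (FD column, with any r/w/u suffix stripped):

    # The process keeps a valid, now zero-length, descriptor
    sudo sh -c ': > /proc/<PID>/fd/<FD>'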
    
    
    
    For stubborn cases, use lower-level tools:
    
    
    # Check inode usage
    df -i /tmp
    
    # Filesystem debug: open read-only; inside debugfs, paths are relative
    # to the /tmp filesystem's own root, so its contents live at /
    sudo debugfs /dev/mapper/vg0-lv_tmp
    debugfs: ls -l /
    debugfs: stat lost+found
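
    If the numbers still refuse to add up, filesystem corruption (cause 3 above) is worth ruling out. A sketch reusing the example device path; the filesystem must be unmounted first:

    umount /tmp
    fsck -f /dev/mapper/vg0-lv_tmp   # -f forces a full check even if marked clean
    mount /tmp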
    
    
    
    Implement these in your provisioning scripts:
    
    
    # /etc/fstab entry example
    /dev/mapper/vg0-lv_tmp /tmp ext4 rw,nosuid,nodev,noexec,relatime 0 0
    
    # Cron job for regular cleanup (system crontab format: /etc/crontab or /etc/cron.d/)
    0 3 * * * root find /tmp -type f -atime +7 -delete
    
    
    
    Consider these architectural changes:
    
    1. Mount /tmp as tmpfs:
    
    echo "tmpfs /tmp tmpfs defaults,size=2G 0 0" >> /etc/fstab
    
    
    2. Give each busy service its own temp directory via a bind mount:
    
    mkdir -p /var/tmp/service1
    mount -o bind /var/tmp/service1 /opt/service1/tmp
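
    To make the bind mount survive reboots, add a matching fstab entry (same example paths as above):

    /var/tmp/service1 /opt/service1/tmp none bind 0 0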