When monitoring systems with tools like Munin, noticing a daily 7-8% inode increase without corresponding disk space growth typically indicates a "small files epidemic". This occurs when processes create numerous tiny files that consume inodes disproportionately to their storage footprint.
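To see why the two graphs diverge, a throwaway demonstration helps (the scratch directory and file count below are purely illustrative): thousands of empty files consume thousands of inodes while reported disk usage barely moves.
# Illustrative only: create 10,000 empty files in a scratch directory
mkdir -p /tmp/inode-demo
for i in $(seq 1 10000); do touch "/tmp/inode-demo/file_$i"; done
# IUse% climbs noticeably while Use% stays essentially flat
df -i /tmp
df -h /tmp
rm -rf /tmp/inode-demo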
Start with these fundamental commands to analyze inode distribution:
# Check filesystem inode usage
df -ih
# Count files per directory (grouped by the first two path components), sorted by count
find /path/to/search -xdev -type f | cut -d "/" -f 2-3 | sort | uniq -c | sort -n
For deeper analysis, consider these approaches:
# Real-time inode monitoring
inotifywait -m /path -e create --format "%w%f" | while read FILE; do
    echo "$(date) - Created: $FILE"
    ls -i "$FILE"
done
# Find directories with most inodes
find / -xdev -printf "%h\n" | cut -d "/" -f 1,2 | sort | uniq -c | sort -n
Frequent offenders (a spot-check loop follows this list) include:
- PHP session files (check /var/lib/php/sessions)
- Mail queues (/var/spool/postfix)
- Docker container layers (/var/lib/docker)
- Log rotation fragments (/var/log)
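A quick way to spot-check these locations is a short loop that counts entries under each path; the paths are the ones listed above and may differ by distribution (the PHP session directory in particular varies), so adjust as needed:
# Count files under the usual suspects (paths that don't exist are skipped)
for dir in /var/lib/php/sessions /var/spool/postfix /var/lib/docker /var/log; do
    [ -d "$dir" ] && printf '%8d  %s\n' "$(find "$dir" -xdev | wc -l)" "$dir"
done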
For persistent issues, implement cleanup cron jobs:
# Example: Clean PHP sessions older than 24h
find /var/lib/php/sessions -type f -name "sess_*" -mtime +1 -delete
# Docker system prune
docker system prune -f --filter "until=24h"
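Both commands can be scheduled directly from /etc/crontab; the times below are only an example, so adjust them to your maintenance window:
# Example /etc/crontab entries (schedule is an assumption)
15 3 * * * root find /var/lib/php/sessions -type f -name "sess_*" -mtime +1 -delete
30 3 * * * root docker system prune -f --filter "until=24h"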
Configure alerts when inodes reach critical thresholds (80% used):
# Add to /etc/crontab (a cron entry must be a single line; literal % must be escaped as \%)
0 * * * * root [ $(df -ih / | awk 'NR==2 {print $5}' | tr -d '\%') -gt 80 ] && echo "Inode warning: $(df -ih /)" | mail -s "Inode Alert" admin@example.com
This pattern is exactly what I ran into while monitoring a development server with Munin: disk space remained stable, yet inode usage increased by 7-8% daily, pointing to large numbers of small files being created somewhere in the filesystem.
Each file, directory, or symbolic link consumes one inode. To check overall inode usage:
df -i
Filesystem      Inodes  IUsed    IFree IUse% Mounted on
/dev/sda1     52428800 183456 52245344    1% /
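To see inodes directly, stat prints the inode number; a hard link shares its target's inode, while a symbolic link gets its own (any scratch file will do for this illustration):
touch demo.txt
ln demo.txt demo-hardlink.txt      # hard link: same inode number as demo.txt
ln -s demo.txt demo-symlink.txt    # symlink: a separate inode
stat -c '%i %n' demo.txt demo-hardlink.txt demo-symlink.txt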
The most effective command to locate inode-heavy directories:
find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20
This command:
- Scans the entire filesystem (excluding other mounts with -xdev)
- Counts files per directory
- Sorts by count
- Shows top 20 offenders
On systems with millions of files, counting directory entries directly can be faster than sorting one line per file:
time find / -xdev -type d | while IFS= read -r dir; do
    echo "$(ls -A "$dir" 2>/dev/null | wc -l) $dir"
done | sort -rn | head -20
Case 1: Email servers with millions of small files in /var/spool
# Clean up old mail files
find /var/spool -type f -mtime +30 -delete
Case 2: PHP session files accumulation
# Check session.save_path in php.ini
# Temporary solution:
find /var/lib/php/sessions -name "sess_*" -mtime +1 -delete
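To confirm where sessions actually land, PHP can report the effective setting; note that the CLI and the web server SAPI may read different php.ini files, so treat this as a starting point:
# Ask PHP for the effective session directory
php -r 'echo session_save_path(), PHP_EOL;'
php -i | grep -i session.save_path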
Create a cron job to track inode usage trends:
#!/bin/bash
DATE=$(date +%Y-%m-%d)
INODES=$(df -i / | awk 'NR==2 {print $5}')
echo "$DATE,$INODES" >> /var/log/inode_usage.log
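Saved somewhere like /usr/local/bin/inode-trend.sh (a hypothetical path) and made executable, the script can then be scheduled once a day:
# Example /etc/crontab entry: log the trend daily at 06:00
0 6 * * * root /usr/local/bin/inode-trend.sh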
- Implement log rotation for applications (a minimal logrotate sketch follows this list)
- Configure tmpwatch for /tmp directories
- Set up monitoring alerts for inode thresholds
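As a sketch of the first point, a minimal logrotate snippet keeps seven compressed rotations instead of letting fragments accumulate; the path /var/log/myapp/*.log is a placeholder for your application's logs:
# /etc/logrotate.d/myapp (hypothetical application)
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}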
For an interactive, ncurses-based view, consider:
sudo apt install ncdu
ncdu -x /
This launches an interactive disk usage analyzer; pressing c toggles a per-directory item count column, a good proxy for inode consumption, and -x keeps the scan on a single filesystem.