When working with Linux systems, many administrators notice an interesting phenomenon: after performing file operations (like creating backups), the system shows increased memory usage in buffers/cache that doesn't immediately free up. This isn't a memory leak - it's actually an intelligent memory management feature.
# Typical output of free -m after file operations
              total        used        free      shared  buff/cache   available
Mem:            7982        1523        1024         123        5434        6120
Swap:           2047           0        2047
Linux uses unused RAM for disk caching (page cache and dentry/inode caches) to speed up future operations. When you:
- Create large backup files
- Read/write databases
- Process log files
The system caches these file operations in memory. This cached memory is marked as "available" and can be reclaimed instantly when applications need more memory.
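To see this in action, here is a minimal sketch (the path is only an example; any sufficiently large file will do):
# Read a large file and watch buff/cache grow; the data stays cached after the read
free -m | awk 'NR==2 {print "buff/cache before:", $6, "MiB"}'
cat /path/to/some/large/file > /dev/null
free -m | awk 'NR==2 {print "buff/cache after: ", $6, "MiB"}'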
The backup script in question:
#!/bin/bash
str=$(date +%Y-%m-%d-%H-%M-%S)
tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www
mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz
Creates two large files whose contents pass through the page cache. Even after the backup finishes, Linux keeps those pages cached in case they are read again.
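If the third-party vmtouch utility happens to be installed, you can check how much of a given backup file is actually resident in the page cache (the glob assumes the paths from the script above):
# vmtouch reports what fraction of each file's pages is currently cached
vmtouch /home/backups/sites/mysite-*.tar.gz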
For accurate memory monitoring, look at the 'available' column and /proc/meminfo rather than relying on the raw 'used' and 'free' figures:
# Show actual available memory (including reclaimable cache)
free -h --wide
# Detailed memory breakdown
cat /proc/meminfo
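For scripting, a simple sketch that pulls out just the figure that matters:
# MemAvailable (kernel 3.14+) already accounts for reclaimable page cache and slab memory
awk '/^MemAvailable:/ {printf "%.1f GiB available\n", $2/1024/1024}' /proc/meminfo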
# Tune dentry/inode cache reclaim (default 100; higher values reclaim more aggressively, lower values keep caches longer)
sysctl vm.vfs_cache_pressure=50
While generally unnecessary, you can clear cache in test environments with:
# Clear pagecache, dentries and inodes (requires root)
sync; echo 3 > /proc/sys/vm/drop_caches
# On production systems, prefer tuning reclaim behaviour (e.g. swappiness) over dropping caches
sysctl vm.swappiness=10
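Note that sysctl changes made this way are lost on reboot; a common way to persist them (the file name here is just a convention) is:
# Persist the setting under /etc/sysctl.d and reload all sysctl configuration
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-vm-tuning.conf
sudo sysctl --system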
A few practical guidelines:
- Monitor "available" memory rather than just "free" (a small alerting sketch follows this list)
- Adjust vm.swappiness for database servers (lower values)
- Consider using tmpfs for temporary backup operations
- Implement proper swap monitoring
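As an illustrative sketch of the first and last points (the thresholds are placeholders to adjust for your workload):
#!/bin/bash
# Warn when MemAvailable or free swap drops below a threshold (values in /proc/meminfo are in kB)
avail_mib=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
swap_free_mib=$(awk '/^SwapFree:/ {print int($2/1024)}' /proc/meminfo)
[ "$avail_mib" -lt 512 ] && echo "WARNING: only ${avail_mib} MiB available"
[ "$swap_free_mib" -lt 256 ] && echo "WARNING: only ${swap_free_mib} MiB of swap free"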
If you've ever run free -m after large file operations and wondered why so much of your memory appears to be consumed, you're not alone. This behavior is particularly noticeable after backup operations, database dumps, or any other substantial file I/O.
Linux aggressively uses unused RAM for disk caching. In /proc/meminfo this shows up mainly as:
- Buffers: in-memory copies of raw disk blocks and filesystem metadata
- Cached: page-cache pages holding file contents read from or written to disk
Together these account for most of the buff/cache column that free reports.
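In recent procps versions, buff/cache is roughly Buffers + Cached + SReclaimable, which a quick sketch can confirm:
# Sum the reclaimable cache figures from /proc/meminfo (values are in kB)
awk '/^(Buffers|Cached|SReclaimable):/ {sum += $2} END {printf "~%.0f MiB in buff/cache\n", sum/1024}' /proc/meminfo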
During backups, this becomes evident:
# Before backup
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.7G        1.2G        5.8G        123M        728M        6.1G
# During backup operation
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.7G        1.5G        1.2G        123M        5.0G        5.8G
The kernel manages these caches automatically, with the following priorities:
- Release cached pages immediately when applications need the memory
- Keep recently accessed file data cached
- Maintain buffers for in-flight disk operations
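To watch the first point happen, one option (stress-ng is a third-party package, and the sizes here are arbitrary) is to force a large allocation while monitoring free in another terminal:
# Allocate ~4 GiB for 30 seconds; buff/cache shrinks if free memory alone cannot satisfy the request
stress-ng --vm 1 --vm-bytes 4G --timeout 30s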
To confirm cached memory behavior:
# Monitor in real-time
watch -n1 "grep -E '^(Cached|Buffers|MemFree)' /proc/meminfo"
# Detailed breakdown
cat /proc/meminfo | grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapCached|Active|Inactive)'
# Manual cache dropping (for testing only!): echo 1 drops the page cache, 2 drops dentries/inodes, 3 drops both
echo 3 > /proc/sys/vm/drop_caches
# Better: adjust vm.swappiness
sysctl vm.swappiness=10
# Configure vfs_cache_pressure
sysctl vm.vfs_cache_pressure=50
Instead of basic free, use these alternatives:
# Shows actual available memory (Linux 3.14+)
free -h --wide
# Comprehensive view
vmstat -s -SM
# Per-process breakdown
top -o %MEM
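If the sysstat package is installed, sar can also record memory usage over time, which is handier than eyeballing free:
# Report memory utilization once per second, five samples
sar -r 1 5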
For systems running regular backups:
# Run the backup with low I/O priority (best-effort class, lowest priority)
ionice -c2 -n7 /root/sites_backup.sh
# rsync has no --drop-cache option in mainline releases (it exists only as an out-of-tree patch);
# the separate nocache wrapper achieves a similar effect by advising the kernel not to keep copied data cached
nocache rsync --archive /source /dest
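Another option for one-off copies, sketched here with placeholder paths, is to bypass the page cache entirely with O_DIRECT writes:
# oflag=direct opens the destination with O_DIRECT, so the written data skips the page cache
# (block size and paths are illustrative; O_DIRECT needs suitably aligned I/O)
dd if=/var/sites/mysite/dump.sql of=/home/backups/dump.sql bs=1M oflag=direct status=progress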
Remember: The kernel will automatically free cached memory when applications need it. Manual intervention is rarely required.