Sysadmins coming from Windows backgrounds often panic when they see low "free" memory in Linux. But Linux handles memory fundamentally differently: unused RAM is wasted RAM. The kernel aggressively caches files and data in spare memory to accelerate future operations through:
- Page cache (file contents)
- Inodes and directory entries
- Slab allocations (kernel objects)
```
# Typical memory breakdown
$ free -h
               total        used        free      shared  buff/cache   available
Mem:             62G         12G        800M        1.2G         49G         48G
Swap:           8.0G        1.3G        6.7G
```
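The relationship between those columns can be checked directly against `/proc/meminfo`. A minimal sketch (the field names are standard; the arithmetic is only the rough approximation, since `MemAvailable` is computed by the kernel with more nuance than `free + cache`):

```shell
# Rough sanity check: "available" is approximately free + reclaimable cache.
# All values are in kB, as reported by /proc/meminfo.
awk '/^MemFree:/      {free=$2}
     /^MemAvailable:/ {avail=$2}
     /^Cached:/       {cached=$2}
     /^Buffers:/      {buffers=$2}
     END {
       printf "free: %d kB, cache: %d kB, available: %d kB\n",
              free, cached + buffers, avail
     }' /proc/meminfo
```

On a healthy system, "available" is far larger than "free" precisely because most of the cache can be reclaimed on demand.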
While regular cache dropping isn't recommended, these scenarios justify it:
- Benchmarking storage performance: Clearing caches ensures fair tests
- Memory pressure investigations: Isolating true memory leaks
- Cleanup after massive file operations: e.g., after processing terabytes of logs
Example when testing raw disk I/O:

```
# 1. Drop caches (3 = free page cache plus dentries and inodes; requires root)
sync; echo 3 > /proc/sys/vm/drop_caches

# 2. Run benchmark
fio --name=test --ioengine=libaio --rw=read --bs=4k --numjobs=16 \
    --size=1G --runtime=60 --time_based --group_reporting
```
Forcing cache eviction has significant consequences:
| Impact | Description |
|---|---|
| Performance penalty | Subsequent reads go to disk instead of hitting the cache |
| CPU overhead | Increased major page faults while caches are rebuilt |
| I/O burst | Sudden disk activity spikes as working sets are re-read |
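The performance penalty is easy to demonstrate without root: read the same file twice and compare timings. The second read is served from the page cache (the paths and sizes here are arbitrary; note the file was just written, so even the "first" read may be partially cached unless you drop caches beforehand):

```shell
# Create a 64 MB scratch file, then time two back-to-back reads.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 status=none
sync

t0=$(date +%s%N)
dd if="$f" of=/dev/null bs=1M status=none   # first read
t1=$(date +%s%N)
dd if="$f" of=/dev/null bs=1M status=none   # second read, from page cache
t2=$(date +%s%N)

echo "first read:  $(( (t1 - t0) / 1000000 )) ms"
echo "second read: $(( (t2 - t1) / 1000000 )) ms"
rm -f "$f"
```

Run this once normally and once right after `echo 3 > /proc/sys/vm/drop_caches` to see the gap the table above describes.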
Instead of cron jobs, consider these approaches:
- Tune `vm.vfs_cache_pressure`:

```
# Values above the default of 100 make the kernel reclaim
# dentry and inode caches more aggressively
sysctl -w vm.vfs_cache_pressure=200
```

- Use cgroups v2 for memory limits:

```
# Create a memory-limited cgroup
mkdir /sys/fs/cgroup/testgroup
echo "4G" > /sys/fs/cgroup/testgroup/memory.max
echo $$ > /sys/fs/cgroup/testgroup/cgroup.procs   # move the current shell into it
```
These metrics reveal whether your system benefits from caching:

```
# Current cache footprint
grep -E '^(Cached|Buffers)' /proc/meminfo

# Cumulative pages read in from disk (rising quickly = cache misses)
vmstat -s | grep "pages paged in"

# Detailed slab info
sudo slabtop -o | head -20
```
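A derived number worth tracking is how much of RAM the cache currently occupies. A minimal sketch from `/proc/meminfo` (note this is cache footprint, not a true hit rate; measuring actual hit/miss ratios needs tracing tools such as `cachestat` from the bcc suite):

```shell
# Percentage of total RAM currently occupied by the page cache + buffers.
awk '/^MemTotal:/ {total=$2}
     /^Cached:/   {cached=$2}
     /^Buffers:/  {buffers=$2}
     END { printf "cache: %.1f%% of RAM\n", 100 * (cached + buffers) / total }' /proc/meminfo
```

A consistently high percentage on a long-running server is normal and healthy; it only becomes a concern if `available` memory is also low.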
Linux aggressively uses available RAM for disk caching (the page cache and dentry/inode caches) to improve performance. This is often misread as "memory being wasted" when tools like `free -m` report low "free" memory.
```
$ free -h
               total        used        free      shared  buff/cache   available
Mem:             32G        5.2G        500M        1.3G         26G         25G
```
The /proc/sys/vm/drop_caches
interface exists for specific debugging and performance testing scenarios:
- Measuring true disk performance without cache effects
- Benchmarking applications with cold caches
- Debugging memory pressure issues
In production environments, regularly dropping caches is generally unnecessary and can hurt performance. However, exceptions include:
```
# Good use case: before running benchmarks
sync
echo 3 > /proc/sys/vm/drop_caches
./run_benchmark.sh
```

```
# Problematic use case: scheduled daily cache drops (crontab entry)
0 0 * * * root /usr/bin/sync && echo 3 > /proc/sys/vm/drop_caches
```
Instead of blindly dropping caches, monitor cache hit rates:
```
# Check page cache effectiveness
$ sar -B 1
Linux 5.4.0-135-generic (...)  03/01/2023  _x86_64_  (32 CPU)

12:00:01 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
12:10:01 AM      0.00      0.00     38.71      0.00    127.42      0.00      0.00      0.00      0.00
```
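In that output, `majflt/s` is the column that matters: major faults are the memory accesses that actually had to go to disk. Per-process counts are also available via `ps`:

```shell
# Major vs. minor page faults for the current shell. A growing maj_flt
# after a cache drop means this process had to re-read data from disk.
ps -o pid,maj_flt,min_flt,comm -p $$
```

Comparing `maj_flt` for a workload before and after a cache drop is a quick way to quantify how much the cache was doing for it.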
For systems with genuine memory pressure, consider:
- Tuning swappiness:

```
vm.swappiness = 10
```
- Using cgroups v2 memory limits
- Implementing proper memory monitoring
```
# Better than dropping caches: adjust swappiness (takes effect immediately)
echo 10 > /proc/sys/vm/swappiness
```
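A value written to `/proc` only lasts until reboot. To persist it, drop a file under `/etc/sysctl.d` (the `99-` prefix and filename below are conventional, not required) and reload with `sysctl --system`:

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 10
```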
The Linux kernel's memory management is sophisticated enough to handle cache allocation without manual intervention. Regular cache dropping indicates either a misunderstanding of Linux memory management or a deeper system configuration issue that should be addressed properly.