On my Red Hat Enterprise Linux 8 server with 8GB RAM, I observed something puzzling:
$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.7Gi       3.2Gi       487Mi       1.0Gi       4.0Gi       3.2Gi
Despite processes only consuming ~3.2GB, free memory appears alarmingly low at 487MB. Let's decode what's really happening.
Modern Linux systems (including RHEL) employ sophisticated memory management techniques:
- Page Cache: Filesystem data cached in RAM
- Slab Allocator: Kernel object caching
- Buffers: Temporary storage for block I/O
This explains our earlier free output, where buff/cache consumed 4GB.
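As a quick sanity check, you can account for that buff/cache figure yourself. Recent procps versions of free report it as roughly Buffers + Cached + SReclaimable from /proc/meminfo; here is a minimal Python sketch under that assumption:

def read_meminfo():
    """Parse /proc/meminfo into a dict of {field: value in kB}."""
    fields = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            fields[key] = int(value.split()[0])  # first token is the numeric value
    return fields

mem = read_meminfo()
# Assumption: free's buff/cache ~= Buffers + Cached + SReclaimable (recent procps)
buff_cache_mb = (mem['Buffers'] + mem['Cached'] + mem.get('SReclaimable', 0)) / 1024
print(f"buff/cache ~= {buff_cache_mb:.0f} MB")
print(f"MemAvailable = {mem['MemAvailable'] / 1024:.0f} MB")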
For accurate memory analysis, use these commands:
# Detailed memory breakdown
$ cat /proc/meminfo
# Per-process memory usage
$ ps aux --sort=-%mem | head
# Slab allocation details
$ slabtop -o
# Page cache statistics
$ vmstat -s
The key metrics to watch:
MemFree: 499236 kB
MemAvailable: 3401296 kB
Buffers: 12452 kB
Cached: 3184944 kB
MemAvailable is the critical value - it estimates memory available for new processes without swapping. In this case, we actually have ~3.4GB available despite the low free memory.
For monitoring scripts, use this Python snippet to get accurate available memory:
import re

def get_available_memory():
    with open('/proc/meminfo') as f:
        meminfo = f.read()
    match = re.search(r'MemAvailable:\s+(\d+)\s+kB', meminfo)
    if match:
        return int(match.group(1)) / 1024  # Convert kB to MB
    return 0

print(f"Available memory: {get_available_memory():.2f} MB")
Real memory pressure indicators:
$ vmstat 1 5
procs -----------memory----------
 r  b    swpd    free   buff    cache
 1  0       0  487236  12452  3184944
Watch for:
- High si (swap in) or so (swap out) values
- Consistently low MemAvailable
- High buff/cache with active swapping
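To watch for that swap activity from a script, /proc/vmstat exposes cumulative pswpin and pswpout counters (pages swapped in and out). A minimal sketch that samples them over a short interval (the 5-second window is an arbitrary choice):

import time

def read_swap_counters():
    """Return cumulative pages swapped in/out from /proc/vmstat."""
    counters = {}
    with open('/proc/vmstat') as f:
        for line in f:
            key, value = line.split()
            if key in ('pswpin', 'pswpout'):
                counters[key] = int(value)
    return counters

before = read_swap_counters()
time.sleep(5)  # sample window; adjust as needed
after = read_swap_counters()

swapped_in = after['pswpin'] - before['pswpin']
swapped_out = after['pswpout'] - before['pswpout']
print(f"pages swapped in: {swapped_in}, out: {swapped_out} over 5s")
if swapped_in or swapped_out:
    print("Active swapping detected - investigate memory pressure")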
For specialized workloads, consider adjusting:
# Check current settings
$ sysctl vm.swappiness vm.vfs_cache_pressure
# Temporary adjustments
$ sudo sysctl -w vm.swappiness=10
$ sudo sysctl -w vm.vfs_cache_pressure=50
These values control how the kernel balances reclaiming cache against swapping: vm.swappiness sets its tendency to swap out process memory, while vm.vfs_cache_pressure sets how aggressively it reclaims directory and inode caches.
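The same tunables are exposed as files under /proc/sys/vm, so monitoring scripts can read them (and, with root, write them) without shelling out to sysctl. A small sketch for checking the current values:

from pathlib import Path

VM_SYSCTL = Path('/proc/sys/vm')

def read_vm_tunable(name):
    """Read a vm.* sysctl value from /proc/sys/vm/<name>."""
    return int((VM_SYSCTL / name).read_text().strip())

for tunable in ('swappiness', 'vfs_cache_pressure'):
    print(f"vm.{tunable} = {read_vm_tunable(tunable)}")

# Writing requires root, e.g.:
# (VM_SYSCTL / 'swappiness').write_text('10')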
What appears as "missing" memory is actually being efficiently utilized by the kernel. The free command's output is often misinterpreted - focus on available rather than free memory for an accurate assessment of your system's memory state.
Many Red Hat Linux administrators encounter this puzzling scenario: system monitoring tools report low free memory while process memory consumption doesn't account for all allocated RAM. Here's what's really happening under the hood.
Modern Linux kernels (including Red Hat's) aggressively use otherwise idle RAM for disk caching and buffers to improve performance. This cached memory shows up in the buff/cache column of tools like free -m (older versions of free folded it into "used"), but it is effectively available the moment applications need it.
Consider this typical free output:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7982        3052         243         125        4686        4251
Swap:          2047           0        2047
- Used: Memory actually held by applications and the kernel; in this output, used + free + buff/cache accounts for the 7982MB total (3052 + 243 + 4686 = 7981MB, the difference being rounding)
- Buff/cache: Disk cache and buffers that the kernel reclaims as soon as applications need the memory
- Available: The true headroom for new workloads - free memory plus reclaimable cache
To see memory allocation details, use:
$ cat /proc/meminfo
MemTotal:        8175204 kB
MemFree:          248932 kB
MemAvailable:    4352748 kB
Buffers:          142844 kB
Cached:          3927244 kB
SwapCached:            0 kB
...
Database systems like MySQL/PostgreSQL often exhibit this behavior because:
- They use large shared memory segments
- Linux caches frequently accessed database files
Check shared memory with:
$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x0052e2c1 327680     mysql      600        256000000  10
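To total those segments from a script, /proc/sysvipc/shm lists the same SysV shared memory segments that ipcs -m shows. A minimal sketch that sums segment sizes, locating the column by name from the file's own header line (assuming a recent kernel where the header includes a size column):

def total_sysv_shm_bytes():
    """Sum the sizes of all SysV shared memory segments listed in /proc/sysvipc/shm."""
    with open('/proc/sysvipc/shm') as f:
        header = f.readline().split()
        size_col = header.index('size')  # locate the size column by name
        return sum(int(line.split()[size_col]) for line in f if line.strip())

print(f"SysV shared memory in use: {total_sysv_shm_bytes() / (1024 ** 2):.1f} MB")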
For database servers, consider adjusting these kernel parameters in /etc/sysctl.conf (apply them with sudo sysctl -p, or at the next boot):
vm.swappiness = 10
vm.vfs_cache_pressure = 50
vm.dirty_ratio = 20
vm.dirty_background_ratio = 10
Instead of just checking free memory, use these more accurate commands:
$ vmstat -s
$ grep -i available /proc/meminfo
$ ps aux --sort=-%mem | head
For persistent monitoring, configure tools like:
$ sudo dnf install procps-ng sysstat
$ sar -r 1 3   # Sample memory usage every 1 second, 3 times
Actual memory pressure indicators include:
- A consistently low "available" value in free output
- High swap usage despite available RAM
- OOM killer activating frequently
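Those indicators are easy to fold into a script-level check. A hedged sketch based on /proc/meminfo - the 10% available and 25% swap-used thresholds are arbitrary examples, tune them for your workload:

def check_memory_pressure(min_available_pct=10, max_swap_used_pct=25):
    """Flag memory pressure: low MemAvailable combined with significant swap use."""
    mem = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            mem[key] = int(value.split()[0])  # values are in kB

    available_pct = 100 * mem['MemAvailable'] / mem['MemTotal']
    swap_total = mem.get('SwapTotal', 0)
    swap_used_pct = (100 * (swap_total - mem.get('SwapFree', 0)) / swap_total
                     if swap_total else 0)

    pressured = available_pct < min_available_pct and swap_used_pct > max_swap_used_pct
    return pressured, available_pct, swap_used_pct

pressured, avail_pct, swap_pct = check_memory_pressure()
print(f"available: {avail_pct:.1f}%  swap used: {swap_pct:.1f}%  pressure: {pressured}")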
For advanced analysis, use:
$ slabtop -o
$ pmap -x [pid]
$ numastat -m