In our Java-heavy infrastructure (4 Tomcat instances with JVM heaps ranging from 1.5GB to 3GB), we observed that Linux swap usage creeps steadily upward under load and never comes back down. The system has:
Total RAM: 8GB
Swap partition: 8GB
JVM configurations:
- Tomcat A: -Xms3000m -Xmx3000m
- Tomcat B/C/D: -Xms1500m -Xmx1500m
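A quick bit of arithmetic already explains most of the pressure: the four fixed heaps commit 7.5GB on an 8GB box, leaving little room for JVM native overhead (permgen/metaspace, thread stacks, code cache), other processes, and the page cache. A minimal sanity check:
# Committed heap (MB) vs. physical RAM
echo $((3000 + 3 * 1500))                        # 7500 MB of -Xms/-Xmx heap
free -m | awk '/^Mem:/{print $2 " MB total RAM"}'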
The sar -r output reveals:
kbmemfree kbmemused %memused kbbuffers kbcached kbswpfree kbswpused %swpused kbswpcad
48260 8125832 99.41 196440 2761852 7197688 1190912 14.20 316044
75504 8098588 99.08 198032 2399460 7197688 1190912 14.20 316032
And free -m shows:
total used free shared buffers cached
Mem: 7982 7937 45 0 32 2088
-/+ buffers/cache: 5816 2166
Swap: 8191 1163 7028
Through extensive testing, we identified three core mechanisms at play (covered in the swap-behavior discussion below). These commands establish the baseline:
# Check current swappiness (default 60)
cat /proc/sys/vm/swappiness
# View swap cache statistics
grep -i swap /proc/meminfo
# Track process-specific swap usage
for file in /proc/*/status; do
awk '/VmSwap|Name/{printf $2 " " $3}END{print ""}' $file
done | sort -k 2 -n -r | head
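To attribute the swapped pages to the Tomcat JVMs specifically, the same /proc data can be summed per java process; a small sketch (assumes pgrep is available):
# Total VmSwap across all java processes, in kB
for pid in $(pgrep java); do
    awk '/VmSwap/{print $2}' /proc/$pid/status
done | awk '{total += $1} END{print total " kB swapped by java processes"}'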
Even with more than 2GB of reclaimable RAM (the 2166MB shown on the -/+ buffers/cache line), the JVMs' allocation patterns trigger swap usage:
# Monitoring JVM heap occupancy and GC activity (pgrep java returns all four PIDs, so loop over them)
for pid in $(pgrep java); do jstat -gcutil "$pid" 1000 5; done
After experimenting with various approaches, these adjustments proved most effective:
# 1. Reduce swappiness for Java-heavy systems
echo 10 > /proc/sys/vm/swappiness
# 2. Drop clean page cache and slab to create headroom before reclaiming swap
sync && echo 3 > /proc/sys/vm/drop_caches
# 3. Periodic manual reclaim (emergency measure; only with enough free RAM to absorb the swapped pages, see the guarded sketch after this list)
swapoff -a && swapon -a
# 4. On JDK 10+ (or 8u191+), size the heap from the detected memory limit instead of a fixed -Xmx
-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0
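Because swapoff -a has to pull every swapped page back into physical memory, it is worth guarding; a minimal sketch, assuming the classic procps free output shown above (with the -/+ buffers/cache line):
# Only reclaim swap when the freeable memory exceeds the swap currently in use
used_swap_mb=$(free -m | awk '/^Swap:/{print $3}')
freeable_mb=$(free -m | awk '/buffers\/cache/{print $4}')
if [ "$freeable_mb" -gt "$used_swap_mb" ]; then
    swapoff -a && swapon -a
else
    echo "Refusing swapoff: ${used_swap_mb} MB in swap, only ${freeable_mb} MB freeable" >&2
fi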
For deeper analysis, these tools provide crucial insights:
# 1. Track page-in/page-out events
vmstat -SM 1
# 2. Identify memory pressure events
dmesg | grep -i oom
# 3. Monitor process memory changes
smem -s swap -k -c "name swap pss" | grep tomcat
# 4. Kernel slab cache diagnostics
slabtop -o | head -20
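To catch when the creep actually happens, rather than sampling steady-state snapshots, swap-in/swap-out activity can be logged continuously; a sketch using gawk (strftime and fflush are gawk features; si/so are in KB/s here, so small bursts are not rounded away):
# 5. Print a timestamped line whenever vmstat reports swap traffic (NR > 3 skips the headers and the since-boot average)
vmstat 60 | gawk 'NR > 3 && ($7 + 0 > 0 || $8 + 0 > 0) { print strftime("%Y-%m-%d %H:%M:%S"), "si=" $7 "kB so=" $8 "kB"; fflush() }'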
During day-to-day monitoring of this production server, the peculiar behavior stands out in the sar -r samples shown earlier.
The swap usage percentage consistently grows during peak loads but never decreases, even when memory becomes available.
The free -m breakdown shown earlier contains the important clue.
Despite roughly 2GB of reclaimable memory (the -/+ buffers/cache figure), the system will not reclaim swap space on its own. This suggests either:
- A kernel memory management policy issue
- JVM memory handling peculiarity
- Application-level memory fragmentation
Linux typically uses swap space in three scenarios:
- Direct swapping: When physical memory is exhausted
- Opportunistic swapping: Kernel proactively moves idle pages to swap
- Memory pressure: reclaim decisions biased toward swap by vm.swappiness
In our case, the JVM heaps (3GB for the primary instance) are anonymous memory: the kernel cannot simply drop those pages the way it drops clean, file-backed page cache, so once they are swapped out they stay out until the application touches them again.
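The split between anonymous (swap-backed) and file-backed (droppable) memory, plus the pages currently sitting in the swap cache, is visible directly in /proc/meminfo:
# Anonymous vs. file-backed memory and the swap cache
grep -E 'AnonPages|^Active|^Inactive|SwapCached' /proc/meminfo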
Here's how I approached the investigation:
# 1. Identify swapped processes
$ for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | sort -k 2 -n -r | head
# 2. Check swappiness value
$ cat /proc/sys/vm/swappiness
# 3. Monitor page faults
$ vmstat -SM 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 1163 45 32 2088 0 0 3 2 12 15 8 2 89 1 0
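Since sysstat is already collecting the data behind the sar -r output, sar -W adds the historical swap-in/swap-out rates, which pinpoints when pages were actually pushed out:
# 4. Historical swapping rates (pswpin/s, pswpout/s)
$ sar -W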
Java applications present unique challenges:
- Garbage collection patterns affect memory residency
- JVM's view of "free" memory differs from OS perspective
- Large heap allocations may trigger early swapping
Try adding these JVM flags for better diagnostics (JDK 8 and earlier; JDK 9+ replaced them with -Xlog:gc*):
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC
-Xloggc:/path/to/gc.log
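Once the log exists, full collections can be checked against the swap timeline. This assumes the gc.log path above and the classic pre-JDK-9 log format produced by these flags:
# Count full collections and show the most recent ones (timestamps are seconds since JVM start)
grep "Full GC" /path/to/gc.log | wc -l
grep "Full GC" /path/to/gc.log | tail -5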
Based on my testing, these approaches proved effective:
1. Adjust swappiness dynamically:
# Runtime change (takes effect immediately, lost on reboot)
echo 10 > /proc/sys/vm/swappiness
# Equivalent via sysctl (also runtime-only)
sysctl -w vm.swappiness=10
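Both commands above change only the running kernel; to survive a reboot the value has to go into /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/):
# Persist across reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf
sysctl -p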
2. Force swapped pages back into RAM (clears the swap cache as a side effect):
# Requires root and enough free RAM to hold everything currently in swap
swapoff -a && swapon -a
3. JVM optimization:
# Add these to JVM options
-XX:+UseConcMarkSweepGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70
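Wiring the flags into each instance is easiest through Tomcat's setenv.sh; a sketch for the 3GB instance, with an illustrative gc.log path:
# $CATALINA_BASE/bin/setenv.sh for Tomcat A (adjust -Xms/-Xmx for B/C/D)
CATALINA_OPTS="-Xms3000m -Xmx3000m"
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"
CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat-a/gc.log"
export CATALINA_OPTS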
For long-term stability:
- Implement memory monitoring with Nagios/Zabbix
- Set up OOM killer notifications
- Consider cgroups for memory isolation (see the sketch below)
- Evaluate newer JVM versions with better memory management
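For the cgroups point above, the cgroup v1 memory controller can cap each instance and give it its own swappiness; a rough sketch, assuming cgroup v1 is mounted at /sys/fs/cgroup/memory and TOMCAT_A_PID holds the instance's PID:
# Create a memory cgroup for Tomcat A, cap it above its heap, and stop it from swapping
mkdir /sys/fs/cgroup/memory/tomcat-a
echo 3500M > /sys/fs/cgroup/memory/tomcat-a/memory.limit_in_bytes
echo 0 > /sys/fs/cgroup/memory/tomcat-a/memory.swappiness
echo "$TOMCAT_A_PID" > /sys/fs/cgroup/memory/tomcat-a/cgroup.procs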