When your Java application suddenly fails with error='Cannot allocate memory' (errno=12), it's typically because the JVM hits a hard memory limit. From the /proc/meminfo output, we can see the server has:
MemTotal:  1027040 kB   (~1 GB RAM)
SwapTotal:       0 kB   (no swap space)
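You can confirm these values on any recent kernel straight from /proc/meminfo (MemAvailable, added in kernel 3.14, is usually more meaningful than MemFree):

grep -E 'MemTotal|MemFree|MemAvailable|SwapTotal' /proc/meminfo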
The "it worked yesterday" phenomenon occurs because Linux memory management is dynamic. Your system might have had:
- Other processes releasing memory overnight
- Kernel caches being flushed
- Memory fragmentation issues accumulating
1. Create swap space (temporary solution):
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
2. Reduce JVM heap size:
java -Xms256m -Xmx512m -jar your_application.jar
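If you are unsure what the JVM would pick by default, print the ergonomic maximum heap before overriding it; -XX:+PrintFlagsFinal is a standard HotSpot flag:

# Show the default MaxHeapSize the JVM would choose on this machine
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize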
Configuration for limited-memory servers:
# In application.conf or equivalent:
jvm-memory {
  initial   = 256m
  max       = 768m
  metaspace = 128m
  stack     = 512k
}
Docker memory limits example:
docker run -d \
  -m 1g \
  --memory-swap 1.5g \
  -e JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0" \
  your-java-image
Create a heap dump when memory gets critical:
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/heapdump.hprof \
     -XX:+ExitOnOutOfMemoryError \
     -jar your_app.jar
Analyze with jhat or VisualVM (note that jhat was removed in JDK 9; on newer JDKs use VisualVM or Eclipse MAT instead):
jhat /tmp/heapdump.hprof
# Then open http://localhost:7000
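If the application is still running and you do not want to wait for an OutOfMemoryError, jmap can take a dump on demand (replace <pid> with the Java process id from ps or jps):

# Dump only live (reachable) objects in binary format
jmap -dump:live,format=b,file=/tmp/heapdump.hprof <pid>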
For JVM internal memory usage:
java -XX:NativeMemoryTracking=detail \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+PrintNMTStatistics \
     -jar app.jar
Then check allocations:
jcmd <pid> VM.native_memory summary
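To see which category is growing over time rather than taking a single snapshot, NMT also supports a baseline/diff workflow against the same <pid>:

jcmd <pid> VM.native_memory baseline
# ...let the application run under load for a while...
jcmd <pid> VM.native_memory summary.diff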
- Set conservative JVM heap sizes (-Xmx)
- Enable swap space (minimum 1.5x RAM)
- Monitor memory usage with tools like Prometheus
- Consider switching to a more memory-efficient JVM like Eclipse OpenJ9
- Implement proper OOM handling in your application (a restart-wrapper sketch follows this list)
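A minimal sketch of the last point, assuming you launch from a shell script: pair -XX:+ExitOnOutOfMemoryError with a restart loop so an OOM produces a heap dump and a clean restart instead of a half-dead process. The jar name and paths below are placeholders, and on most distributions a systemd unit with Restart=on-failure is the cleaner equivalent.

#!/bin/bash
# Sketch: restart the JVM after an OutOfMemoryError, keeping each heap dump.
# your_app.jar and /tmp are placeholders - adjust for your deployment.
while true; do
    java -Xmx512m \
         -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/tmp/heapdump-$(date +%s).hprof \
         -XX:+ExitOnOutOfMemoryError \
         -jar your_app.jar
    echo "$(date): JVM exited (status $?), restarting in 10s" >> restart.log
    sleep 10
done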
When a Java application that previously ran smoothly suddenly fails with memory allocation errors, it's often a sign of underlying system resource issues. The error message:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d5550000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
indicates the JVM could not commit approximately 715 MB (715,849,728 bytes) of memory. This typically happens when one or more of the following is true:
- Physical memory is exhausted
- Swap space is disabled or full
- Process limits are too restrictive (see the quick check after this list)
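The third cause is quick to rule out. Check the limits the shell or service manager imposes before the JVM starts; a capped virtual address space produces errno=12 even when free shows plenty of memory (replace <pid> with the running process id):

ulimit -v                                   # max virtual memory in kB ("unlimited" is typical)
ulimit -m                                   # max resident set size
grep -i 'address space' /proc/<pid>/limits  # limits of an already-running process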
First, verify your current memory situation:
free -h                         # overall RAM and swap usage
cat /proc/meminfo               # detailed kernel memory statistics
ps aux --sort=-%mem | head -10  # ten biggest memory consumers
In the reported case, we see:
- Total memory: 1GB (1027040 kB)
- Free memory: 626MB
- No swap configured (SwapTotal: 0 kB)
For production systems facing this issue:
# Create swap if none exists (4GB example)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Add to /etc/fstab for persistence:
/swapfile none swap sw 0 0
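After enabling it, verify the swap is actually active; optionally lower vm.swappiness so the kernel treats swap as a safety net rather than a first resort (10 is a common starting value, not a rule):

swapon --show                      # should list /swapfile
free -h                            # the Swap: row should now be non-zero
sudo sysctl -w vm.swappiness=10    # optional: prefer RAM, swap only under pressure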
Modify your Java startup parameters:
# Reduce heap size (adjust based on available memory)
java -Xms256m -Xmx512m -jar yourapp.jar
# Alternatively, if using Play Framework:
activator -J-Xms256m -J-Xmx512m -Dhttp.port=80 start
Key JVM memory parameters to consider (a combined example follows this list):
- -XX:MaxDirectMemorySize=256m
- -XX:MaxMetaspaceSize=128m
- -Xss256k (reducing thread stack size)
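Combined, a launch line for a ~1 GB server might look like the sketch below; the numbers are illustrative starting points, not recommendations, and should be tuned against your own heap dumps and monitoring:

java -Xms256m -Xmx512m \
     -XX:MaxMetaspaceSize=128m \
     -XX:MaxDirectMemorySize=256m \
     -Xss256k \
     -jar yourapp.jar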
For persistent issues, enable JVM native memory tracking:
java -XX:NativeMemoryTracking=detail -XX:+UnlockDiagnosticVMOptions \
-XX:+PrintNMTStatistics -Xms512m -Xmx1g -jar yourapp.jar
After running, check memory usage:
jcmd <pid> VM.native_memory summary
Implement basic monitoring with a shell script:
#!/bin/bash
# Append a memory snapshot to memory.log every 60 seconds
while true; do
    date >> memory.log
    free -m >> memory.log
    ps aux --sort=-%mem | head -5 >> memory.log
    sleep 60
done
For production systems, consider tools like:
- Prometheus + Grafana
- New Relic
- Datadog