When managing shared compute resources, Java processes can be particularly greedy with CPU cores: by default the JVM sizes its GC threads, JIT compiler threads, and the common ForkJoinPool from the total number of cores it detects. On our 60-core Ubuntu server, we need to ensure fair resource distribution among multiple users running diverse Java applications.
The most reliable approach is using Linux's taskset command to set CPU affinity before launching the JVM:
taskset -c 0-9 java -jar application.jar
This restricts the process to cores 0 through 9. For dynamic allocation based on available cores:
#!/bin/bash
MAX_CORES=10
AVAILABLE_CORES=$(nproc)
ACTUAL_CORES=$((AVAILABLE_CORES > MAX_CORES ? MAX_CORES : AVAILABLE_CORES))
taskset -c 0-$((ACTUAL_CORES-1)) java -jar "$@"
While not as strict as OS-level controls, these JVM flags can influence threading behavior:
-XX:ActiveProcessorCount=10 -XX:ParallelGCThreads=5 -XX:ConcGCThreads=2
For applications that use the common ForkJoinPool (which also backs parallel streams), set its parallelism before the pool is first used:
System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "10");
In Docker environments, use CPU limits:
docker run --cpus=10 -it java-image
Or in Kubernetes:
resources:
  limits:
    cpu: "10"
Combine with cgroups for enforcement:
cgcreate -g cpu:/java-limited
cgset -r cpu.cfs_quota_us=1000000 java-limited   # 1,000,000 us per default 100,000 us period = 10 cores
cgexec -g cpu:java-limited java -jar app.jar
Here's a complete wrapper script for production use:
#!/bin/bash
# Limit to 10 cores max
MAX_CORES=10
USER_CORES=${1:-$MAX_CORES}

# Validate input: must be a positive integer no greater than MAX_CORES
if [[ ! "$USER_CORES" =~ ^[0-9]+$ ]] || [ "$USER_CORES" -lt 1 ] || [ "$USER_CORES" -gt "$MAX_CORES" ]; then
    echo "Error: Core count must be between 1 and $MAX_CORES" >&2
    exit 1
fi

# Core IDs are zero-indexed, so the last allowed core is USER_CORES - 1
LAST_CORE=$((USER_CORES - 1))

# Execute with core limitation
exec taskset -c 0-${LAST_CORE} java "${@:2}"
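For example, if the script is saved as limit-java.sh (the filename here is just a placeholder), a user can launch an application on at most 8 cores with:
./limit-java.sh 8 -jar application.jar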
On a shared 60-core Linux server, multiple users run various Java applications, some of which are third-party. The challenge is ensuring no single Java process exceeds 10 CPU cores to maintain fair resource allocation. This requires both JVM-level and OS-level controls.
The most effective approach is using Linux's cgroups (Control Groups) to enforce CPU limits system-wide:
# Create a cgroup for Java processes
sudo cgcreate -g cpu:/java_limit
# Set the CPU quota: quota / period = cores, so 1,000,000 us per 100,000 us period = 10 cores
echo 100000 | sudo tee /sys/fs/cgroup/cpu/java_limit/cpu.cfs_period_us
echo 1000000 | sudo tee /sys/fs/cgroup/cpu/java_limit/cpu.cfs_quota_us
# Move running Java processes into the group (note: processes in the same group share the quota)
cgclassify -g cpu:/java_limit $(pgrep -f java)
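To confirm the quota is actually biting, the CFS throttling counters can be inspected (cgroup v1 layout, matching the paths above); a nonzero nr_throttled means the group has hit its limit:
cat /sys/fs/cgroup/cpu/java_limit/cpu.stat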
For applications you control, use these JVM flags:
java -XX:ActiveProcessorCount=10 -XX:ParallelGCThreads=5 -XX:ConcGCThreads=3 -jar application.jar
Key parameters:
-XX:ActiveProcessorCount: limits the number of CPUs visible to the JVM
-XX:ParallelGCThreads: controls the parallel GC thread count
-XX:ConcGCThreads: limits the number of concurrent GC threads
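A quick way to confirm these flags are being honored is to print what the JVM actually sees; the sketch below (class name is illustrative) can be compiled and run with the flags above:
import java.util.concurrent.ForkJoinPool;

public class CpuCheck {
    public static void main(String[] args) {
        // Respects -XX:ActiveProcessorCount (and container CPU limits on modern JDKs)
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());
        // The common pool defaults to availableProcessors - 1 unless overridden
        System.out.println("commonPool parallelism = "
                + ForkJoinPool.commonPool().getParallelism());
    }
}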
For modern deployments, Docker provides cleaner isolation:
docker run --cpus=10 -it openjdk:11 java -jar app.jar
Combine with monitoring tools for enforcement:
# Check CPU usage per process
ps -eo pid,args,%cpu --sort=-%cpu | head -20
# Automated enforcement script
#!/bin/bash
MAX_CORES=10
for pid in $(pgrep -f java); do
    # %cpu= suppresses the header; strip the decimal part and any whitespace
    cpu_usage=$(ps -p "$pid" -o %cpu= | cut -d. -f1 | tr -d ' ')
    if [ "${cpu_usage:-0}" -gt $((MAX_CORES * 100)) ]; then
        renice -n 19 -p "$pid"
        cgclassify -g cpu:/java_limit "$pid"
    fi
done
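To apply this check continuously, one option is to schedule the script, for example once a minute via cron (the path below is hypothetical):
* * * * * /usr/local/bin/enforce-java-limit.sh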
For applications using the common ForkJoinPool or parallel streams, cap its parallelism (this must be set before the pool is first used):
System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "10");