Understanding CPU% Exceeding 100% in Linux ps -aux Output for Java/Tomcat Processes


What Does CPU Usage Over 100% Actually Mean?

When you see a process showing 106% CPU usage in ps -aux output like this:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     16228 106 24.0 2399428 1840576 ?     Sl   07:11 171:35 /usr/bin/java -Djava.util.logging.config.file=/opt/tomcat...

This doesn't indicate a problem - it's actually expected behavior for multi-threaded applications. The %CPU value represents the percentage of a single CPU core's utilization. On multi-core systems, a process can utilize more than 100% by spreading workload across multiple cores.

The value ps reports is computed over the lifetime of the process, not over a sampling window:

%CPU = (total CPU time consumed across all cores) / (elapsed wall-clock time since the process started) * 100

For example, if a Java process has accumulated 2 core-seconds of CPU time over 1 second of wall-clock time (say, two threads running flat out on a 4-core system):

(2 core-seconds / 1 second) * 100 = 200%
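
You can sanity-check the reported figure yourself. A minimal sketch, assuming a procps-ng ps (for the etimes field), bc, and the example PID of 16228:

# Re-derive ps's %CPU from /proc: cumulative CPU time / elapsed time
PID=16228
TICKS=$(awk '{print $14 + $15}' /proc/$PID/stat)    # utime + stime, in clock ticks
HZ=$(getconf CLK_TCK)                               # ticks per second (usually 100)
ELAPSED=$(ps -o etimes= -p "$PID")                  # wall-clock seconds since process start
echo "scale=1; $TICKS * 100 / $HZ / $ELAPSED" | bc  # should roughly match the %CPU column

If the result tracks the %CPU column, the number is simply accumulated CPU time spread across cores, not a measurement error.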

High CPU% becomes problematic when:

  • It consistently stays near the system's maximum (100% * cores)
  • Application response times increase significantly
  • Other processes get starved of CPU resources

Instead of just checking ps -aux, consider these alternatives:

# Per-core breakdown
mpstat -P ALL 1

# Thread-level monitoring
top -H -p [PID]

# Java-specific monitoring
jstack [PID] > thread_dump.txt
jstat -gcutil [PID] 1000

To find CPU-hungry threads in your Tomcat process:

# Get thread IDs sorted numerically by CPU usage
ps -eLo pid,lwp,pcpu | grep 16228 | sort -k3 -nr

# Convert the decimal thread ID (LWP) from ps/top to the hex nid that jstack prints
printf "0x%x\n" 14996   # -> 0x3a94

# Get Java stack trace for specific thread
jstack 16228 | grep -A 30 "nid=0x3a94"
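
Putting those three steps together, here is a rough wrapper script (hot-thread.sh is a hypothetical name; it assumes procps ps and a JDK with jstack on the PATH):

#!/bin/bash
# hot-thread.sh (hypothetical helper): print the stack of the busiest thread in a Java process
PID=${1:?usage: hot-thread.sh <java-pid>}
LWP=$(ps -L -o lwp=,pcpu= -p "$PID" | sort -k2 -nr | awk 'NR==1 {print $1}')  # hottest thread (decimal LWP)
NID=$(printf "0x%x" "$LWP")                                                   # convert to the hex nid jstack uses
jstack "$PID" | grep -A 30 "nid=$NID"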

Instead of simple CPU% thresholds, consider these metrics:

# CPU load average (compare against your core count - see the sketch after this block)
cat /proc/loadavg

# Process-specific CPU time (utime+stime, in clock ticks; divide by `getconf CLK_TCK` for seconds)
awk '{print $14+$15}' /proc/[PID]/stat

# Container-aware metrics (if running in Docker/Kubernetes)
docker stats --no-stream [CONTAINER_ID]
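
For the load-average check, "compare against your core count" simply means weighing the 1-minute figure against the number of cores. A small sketch of that comparison (assuming nproc and awk are available):

# Flag when the 1-minute load average exceeds the number of CPU cores
awk -v cores="$(nproc)" '$1 > cores+0 { printf "1-min load %.2f exceeds %d cores\n", $1, cores }' /proc/loadavg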

A common Tomcat scenario causing high CPU%:

import java.util.concurrent.*;

// Bad configuration - newFixedThreadPool backs a fixed pool with an unbounded queue,
// so work piles up invisibly while 200 threads churn
ExecutorService unbounded = Executors.newFixedThreadPool(200);

// Better approach - bounded queue with back-pressure when saturated
ThreadPoolExecutor bounded = new ThreadPoolExecutor(
    50,                                        // core pool size
    200,                                       // max pool size
    60, TimeUnit.SECONDS,                      // idle keep-alive for threads above core size
    new ArrayBlockingQueue<>(1000),            // bounded work queue
    new ThreadPoolExecutor.CallerRunsPolicy()  // caller runs the task when the queue is full
);

Remember that high CPU% isn't inherently bad - it often means your application is efficiently using available resources. Focus on response times and system stability rather than just the CPU percentage number.


When monitoring Java applications like Tomcat using ps -aux, seeing CPU percentages exceeding 100% can be confusing but is actually expected behavior. The %CPU column represents processor utilization as a percentage of a single CPU core. On multi-core systems, the total can exceed 100% when a process utilizes multiple cores.

# Example output showing 106% CPU usage:
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     16228 106 24.0 2399428 1840576 ?     Sl   07:11 171:35 /usr/bin/java...

Your Tomcat process showing 106% CPU indicates it's using slightly more than one core's worth of processing power. This is typical for:

  • Multi-threaded Java applications in general
  • Request processing peaks
  • GC cycles
  • Application startup

For more accurate monitoring of Java processes, consider these alternatives:

# 1. Use top with thread view:
top -H -p [PID]

# 2. Get per-core utilization:
mpstat -P ALL 1

# 3. Java-specific tools:
jstat -gcutil [PID] 1000
jstack [PID] > thread_dump.txt

Instead of simple CPU% thresholds, implement these checks:

#!/bin/bash
# Alert when a process exceeds a per-core threshold scaled by the core count
PID=${1:?PID required}
CORES=$(nproc)
THRESHOLD=70 # % per core
MAX_USAGE=$((CORES * THRESHOLD))

CPU_USAGE=$(ps -p "$PID" -o %cpu= | awk '{print int($1)}')

if [ "$CPU_USAGE" -gt "$MAX_USAGE" ]; then
  echo "Alert: Process $PID using $CPU_USAGE% CPU (Threshold: $MAX_USAGE%)"
  # Add notification logic here
fi

When investigating sustained high CPU usage:

  1. Capture thread dumps during high load: kill -3 [PID] (a capture loop is sketched after this list)
  2. Profile CPU usage with jvisualvm or async-profiler
  3. Check for thread contention in logs
  4. Monitor GC activity with -XX:+PrintGCDetails (Java 8; on Java 9+ use -Xlog:gc*)
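
For step 1, a minimal capture loop might look like the sketch below (it assumes jstack is on the PATH and reuses the example PID; with kill -3 instead, the dump goes to the JVM's stdout, typically catalina.out for Tomcat):

#!/bin/bash
# Capture a short series of thread dumps during a spike for later comparison
PID=16228
for i in 1 2 3; do
  jstack "$PID" > "threaddump_${i}_$(date +%H%M%S).txt"
  sleep 10
done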

A common Tomcat configuration issue causing high CPU:

<!-- Before (problematic): -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="500"
           minSpareThreads="50"/>

<!-- After (optimized): -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           minSpareThreads="10"
           acceptCount="100"/>

Reducing maxThreads to match actual server capabilities often resolves CPU spikes while maintaining throughput.
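
To verify the effect, you can compare the JVM's live thread count against the configured maxThreads before and after the change - a quick sketch using the example PID:

# Count live threads in the Tomcat JVM (compare against maxThreads)
ps -L -o lwp= -p 16228 | wc -l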