How to Display Per-Core CPU Utilization in Linux Top Batch Mode (-b) for Python Scripting

When scripting system monitoring in Python, a plain top -b reports only aggregate CPU statistics, which isn't sufficient when you need per-core breakdowns. Here's how to solve this.

Combine -b with -1 (the numeral one), which forces the same per-core display you get by pressing '1' in interactive top:

top -b -n 1 -1 | grep "^%Cpu"
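
On a four-core machine, the matched lines look roughly like this (values illustrative):

%Cpu0  :  1.7 us,  0.7 sy,  0.0 ni, 97.3 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  2.0 us,  0.3 sy,  0.0 ni, 97.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
...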

Here's a Python function to parse per-core stats:

import subprocess

def get_per_core_usage():
    result = subprocess.run(
        ["top", "-b", "-n", "1", "-1"],
        capture_output=True,
        text=True
    )

    cores = []
    for line in result.stdout.splitlines():
        # Per-core lines look like "%Cpu0  :  1.7 us,  0.7 sy, ..."
        if line.startswith("%Cpu") and ":" in line:
            stats = {}
            for field in line.split(":", 1)[1].split(","):
                value, label = field.split()  # e.g. "1.7", "us"
                stats[label] = float(value)
            cores.append({
                'user': stats['us'],
                'system': stats['sy'],
                'idle': stats['id']
            })
    return cores
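
As a quick sanity check, print each core's busy percentage:

for i, core in enumerate(get_per_core_usage()):
    print(f"core {i}: {100 - core['idle']:.1f}% busy")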

For lower overhead, parse /proc/stat directly. Note that its counters are cumulative jiffies since boot, so a single read gives the average utilization since boot rather than the current load (a delta-based sampler follows the list below):

def proc_stat_cores():
    cores = []
    with open("/proc/stat") as f:
        for line in f:
            # Keep "cpu0", "cpu1", ... rows; skip the aggregate "cpu " row
            if line.startswith("cpu") and not line.startswith("cpu "):
                parts = line.split()
                # Fields: user nice system idle iowait irq softirq steal ...
                total = sum(map(int, parts[1:9]))
                idle = int(parts[4]) + int(parts[5])  # idle + iowait
                cores.append(100 * (total - idle) / total)
    return cores

Some practical guidelines:

  • Use /proc/stat for high-frequency polling (>1 Hz)
  • Use top -b when you need full process context
  • Cache results when monitoring multiple metrics
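
Because the counters are cumulative, a high-frequency poller should take two snapshots and derive utilization from the difference. A minimal sketch (the helper names here are mine, not a standard API):

import time

def sample_jiffies():
    """Return {core: (busy, total)} cumulative jiffie counts."""
    counts = {}
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu") and not line.startswith("cpu "):
                parts = line.split()
                total = sum(map(int, parts[1:9]))
                idle = int(parts[4]) + int(parts[5])  # idle + iowait
                counts[int(parts[0][3:])] = (total - idle, total)
    return counts

def per_core_delta(interval=0.5):
    """Per-core busy percentage over `interval` seconds."""
    before = sample_jiffies()
    time.sleep(interval)
    after = sample_jiffies()
    usage = {}
    for core, (busy, total) in after.items():
        busy0, total0 = before[core]
        dt = total - total0
        usage[core] = 100 * (busy - busy0) / dt if dt else 0.0
    return usage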

Detect uneven load distribution across cores:

cores = get_per_core_usage()
max_load = max(c['user'] for c in cores)
min_load = min(c['user'] for c in cores)
if (max_load - min_load) > 30:  # 30% threshold
    print("Warning: Significant core imbalance detected!")

When optimizing system performance or debugging multi-threaded applications, developers often need more than a single snapshot of per-core CPU usage. While the interactive top command displays this information when pressing '1', obtaining repeated samples in batch mode for programmatic processing requires specific flags.

The solution combines top's batch mode (-b) with a delay (-d) and an iteration count (-n):

top -b -d 2 -n 3 -1 | grep "^%Cpu"

Key parameters:

  • -b: batch mode operation
  • -d 2: 2-second delay between updates
  • -n 3: run 3 iterations
  • -1: force individual core display
  • grep "^%Cpu": keeps only the per-core summary lines from each snapshot
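
The command above emits three snapshots, so each core appears three times in the filtered output. One way to group the readings by snapshot (a sketch; snapshot_series is a hypothetical helper name):

import subprocess

def snapshot_series(delay=2, count=3):
    """Return a list of snapshots, each mapping core index -> %user."""
    out = subprocess.check_output(
        ["top", "-b", "-d", str(delay), "-n", str(count), "-1"], text=True
    )
    snapshots = []
    for line in out.splitlines():
        if line.startswith("%Cpu0"):  # core 0 marks the start of a snapshot
            snapshots.append({})
        if line.startswith("%Cpu") and snapshots:
            prefix, rest = line.split(":", 1)
            core = int(prefix[4:])                       # strip "%Cpu"
            user = float(rest.split(",")[0].split()[0])  # first field is "x.y us"
            snapshots[-1][core] = user
    return snapshots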

For programmatic access from Python, use this class that parses the output:

import subprocess
import re

class CoreMonitor:
    def __init__(self, interval=2):
        self.interval = interval
        self.core_count = self._get_core_count()

    def _get_core_count(self):
        # One "processor" line per logical core
        with open('/proc/cpuinfo') as f:
            return sum(1 for line in f if line.startswith('processor'))

    def get_usage(self):
        # -n 1 takes a single snapshot; top's delay only matters
        # when requesting multiple iterations
        output = subprocess.check_output(
            ["top", "-b", "-n", "1", "-1"], text=True
        )

        cores = {}
        for line in output.splitlines():
            if match := re.match(r'%Cpu(\d+).*?(\d+\.\d+) us', line):
                core, usage = match.groups()
                cores[int(core)] = float(usage)

        return cores

# Usage example
monitor = CoreMonitor()
print(monitor.get_usage())  # Returns {0: 12.3, 1: 45.6, ...}
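
To poll continuously, reuse the stored interval between snapshots:

import time

monitor = CoreMonitor(interval=2)
while True:
    print(monitor.get_usage())
    time.sleep(monitor.interval)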

For systems where top isn't available, consider these options:

Using mpstat (sysstat package):

mpstat -P ALL 1 1 | awk '$1 != "Average:" && $2 ~ /^[0-9]+$/ {print "Core", $2, ":", 100-$NF"% busy"}'
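
The same output can be parsed from Python without awk. A sketch (mpstat_cores is a hypothetical name; it assumes %idle is the last column, which holds for recent sysstat versions):

import os
import subprocess

def mpstat_cores(interval=1):
    # S_TIME_FORMAT=ISO keeps the timestamp a single field
    # regardless of locale (avoids a separate AM/PM field)
    env = {**os.environ, "S_TIME_FORMAT": "ISO"}
    out = subprocess.check_output(
        ["mpstat", "-P", "ALL", str(interval), "1"], text=True, env=env
    )
    usage = {}
    for line in out.splitlines():
        fields = line.split()
        # Per-core rows: timestamp, core number, ..., %idle (last field)
        if len(fields) > 3 and fields[0] != "Average:" and fields[1].isdigit():
            usage[int(fields[1])] = 100.0 - float(fields[-1])
    return usage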

Reading /proc/stat directly:

with open('/proc/stat') as f:
    # Skip the aggregate "cpu " row; keep cpu0, cpu1, ... rows
    lines = [line.split() for line in f
             if line.startswith('cpu') and not line.startswith('cpu ')]
    # Each row holds raw cumulative jiffie counts:
    # [name, user, nice, system, idle, iowait, irq, softirq, steal, ...]
    # (see the delta-based sampler earlier for turning deltas into percentages)

When implementing continuous monitoring:

  • Batch mode adds roughly 5-15 ms of overhead per snapshot
  • For high-frequency sampling (>1 Hz), /proc/stat parsing is more efficient
  • Consider using taskset (or os.sched_setaffinity) to pin your monitoring process to specific cores, as sketched below
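
For the last point, Python can set affinity directly instead of shelling out to taskset; a minimal standard-library sketch (core 3 is an arbitrary example):

import os

# Pin this process (pid 0 = self) to core 3 so the monitor's own
# CPU use doesn't perturb the cores being measured
os.sched_setaffinity(0, {3})
print("now restricted to cores:", os.sched_getaffinity(0))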