Understanding KVM CPU Allocation: Virtual vs Physical Core Mapping for Optimal Resource Utilization


When working with KVM virtualization on a 12-core host, it's crucial to understand that virtual CPUs (vCPUs) don't automatically map 1:1 to physical cores. The Linux scheduler implements CPU time sharing, allowing multiple vCPUs to share physical cores through time slicing.

In your scenario with 10 VMs running non-CPU-intensive tasks, you can safely overcommit vCPUs. Here's how the mapping works:


<!-- Example libvirt domain XML showing vCPU allocation -->
<domain type='kvm'>
  <vcpu placement='static'>2</vcpu>
  <cpu mode='host-passthrough' check='none'/>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
</domain>
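
One way to apply such a change is to edit the guest's persistent definition and restart it (a sketch; domain_name is a placeholder for your guest's name):


# Edit the domain XML, then restart so the new vCPU settings take effect
virsh edit domain_name
virsh shutdown domain_name
virsh start domain_name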

Each vCPU is an ordinary thread on the host, scheduled by the Linux Completely Fair Scheduler (CFS). Assigning 1 vCPU to a guest doesn't grant exclusive access to a physical core, but rather:

  • Access to a share of overall CPU resources
  • Time-sliced execution across available cores
  • Dynamically balanced by the host scheduler
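
You can see this on the host itself: each vCPU is just a QEMU thread scheduled like any other process. A quick check, assuming a running guest named domain_name:


# Map vCPUs to host thread IDs via the QEMU monitor
virsh qemu-monitor-command domain_name --hmp "info cpus"

# Show which physical CPU (psr column) each QEMU thread last ran on
ps -L -o pid,tid,psr,pcpu,comm -p "$(pgrep -f "qemu.*domain_name" | head -n1)"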

For your 10 VMs on a 12-core host:


# Monitoring actual CPU usage to determine optimal allocation
sudo virsh vcpuinfo domain_name
sudo virsh cpu-stats domain_name

Consider starting with 1-2 vCPUs per VM and adjusting based on actual workload patterns. The Linux scheduler will efficiently share physical cores among all vCPUs when workloads aren't CPU-bound.
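
The count can be changed later without rebuilding the guest; a minimal sketch with virsh setvcpus (domain_name is a placeholder, and live increases require the guest to support vCPU hotplug):


# Raise the configured maximum, then adjust the active vCPU count
virsh setvcpus domain_name 4 --maximum --config
virsh setvcpus domain_name 2 --config --live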

For performance-sensitive workloads, explore these tuning parameters:


<!-- Example of CPU pinning and NUMA awareness -->
<cputune>
  <vcpupin vcpu='0' cpuset='0-3'/>
  <emulatorpin cpuset='4-7'/>
</cputune>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
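
After defining the pinning, it's worth confirming it actually took effect:


# Query the current vCPU and emulator thread pinning
virsh vcpupin domain_name
virsh emulatorpin domain_name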

Remember that the optimal configuration depends on your specific workload characteristics and performance requirements.


When working with KVM virtualization, it's crucial to understand that virtual CPUs (vCPUs) don't directly map 1:1 to physical CPU cores by default. In your 12-core server scenario, KVM guests can indeed access all available cores through the Linux scheduler's time-sharing mechanism.

KVM allows vCPU overcommitment, meaning you can allocate more vCPUs across your guests than there are physical cores. For example:


# Example: creating a VM with 4 vCPUs on a 12-core host
# (an install source is required; adjust the ISO path for your environment)
virt-install \
--name vm1 \
--memory 4096 \
--vcpus 4 \
--disk size=20 \
--cdrom /path/to/centos7.iso \
--os-variant centos7.0

This works because most workloads don't utilize CPUs at 100% continuously. The hypervisor schedules vCPU time slices across physical cores.
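
A rough way to check your overcommit ratio, assuming all guests are managed through libvirt:


# Physical CPUs on the host
virsh nodeinfo | grep 'CPU(s)'

# vCPUs allocated to each running guest
for d in $(virsh list --name); do
  printf '%s: ' "$d"; virsh dominfo "$d" | grep 'CPU(s)'
done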

For tasks requiring dedicated resources, use CPU pinning to bind vCPUs to specific physical cores:


# Pin vCPUs 0-3 of vm1 to physical cores 4-7
virsh vcpupin vm1 0 4
virsh vcpupin vm1 1 5
virsh vcpupin vm1 2 6
virsh vcpupin vm1 3 7
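
These pinnings are live-only by default and are lost on shutdown; add flags to persist them:


# Apply the pinning now and also write it to the persistent config
virsh vcpupin vm1 0 4 --live --config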

Use these commands to monitor actual CPU usage:


# Check host CPU load
mpstat -P ALL 1

# Check per-VM CPU usage
virsh cpu-stats vm1

On NUMA systems, ensure vCPUs and memory are allocated from the same NUMA node:


# Check NUMA topology
numactl --hardware

# Launch a VM with NUMA affinity (add to the full virt-install command above)
virt-install --numatune 0,mode=strict ...
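
For an existing guest, virsh numatune can query or change the policy:


# Show the current NUMA policy for vm1
virsh numatune vm1

# Pin vm1's memory to node 0 in the persistent config
virsh numatune vm1 --mode strict --nodeset 0 --config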

For your 10 isolated tasks:

  • Start with 1-2 vCPUs per VM
  • Monitor performance before considering pinning
  • Use CPU shares (virsh schedinfo) for priority control, as shown below
  • Consider cgroups for advanced resource control
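
As referenced above, CPU shares are a relative weight applied only under contention (the libvirt default is 1024); a minimal sketch for a guest named vm1:


# Give vm1 twice the default CPU weight when cores are contended
virsh schedinfo vm1 --set cpu_shares=2048 --live --config

# Verify the scheduler parameters
virsh schedinfo vm1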