After extensive testing across different hypervisor configurations, I've identified the core architectural constraints preventing KVM from functioning properly in WSL2 environments. The fundamental issue stems from Microsoft's Hyper-V being the underlying hypervisor for WSL2, which currently doesn't fully expose nested virtualization capabilities to guest systems.
```shell
# Typical error when checking KVM support:
$ kvm-ok
INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used
```
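The two things kvm-ok looks at can also be probed directly; here is a minimal POSIX-sh sketch that checks the CPU virtualization flags and the /dev/kvm device node without needing the cpu-checker package:

```shell
#!/bin/sh
# Minimal probe for what kvm-ok checks: CPU virtualization flags
# (vmx on Intel, svm on AMD) and a usable /dev/kvm device node.
if grep -qE 'svm|vmx' /proc/cpuinfo 2>/dev/null; then
  echo "cpu virtualization flags: present"
else
  echo "cpu virtualization flags: absent"   # the expected result inside stock WSL2
fi
if [ -c /dev/kvm ]; then
  echo "/dev/kvm: present"
else
  echo "/dev/kvm: absent"
fi
```

Inside an unmodified WSL2 distribution both lines typically report "absent", which is consistent with the kvm-ok output above.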
The most reliable approach I've found involves using VMware Workstation Player (version 16+) with specific configuration:
```ini
# VMware .vmx configuration additions:
hypervisor.cpuid.v0 = "FALSE"
vhv.enable = "TRUE"
featMask.vm.hv.capable = "Min:1"
```
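Since the .vmx file must be edited while the VM is powered off, a small idempotent script is handy for applying the three settings above; this is a sketch, and the default `nested-test.vmx` filename is a placeholder for your VM's actual .vmx path:

```shell
#!/bin/sh
# Append the nested-virtualization settings to a .vmx file, skipping any
# key that is already present so repeated runs don't duplicate lines.
add_vmx_settings() {  # add_vmx_settings <path-to-vmx>
  vmx="$1"
  touch "$vmx"
  for setting in 'hypervisor.cpuid.v0 = "FALSE"' \
                 'vhv.enable = "TRUE"' \
                 'featMask.vm.hv.capable = "Min:1"'; do
    key=${setting%% =*}                      # e.g. hypervisor.cpuid.v0
    grep -qF "$key" "$vmx" || printf '%s\n' "$setting" >> "$vmx"
  done
}

# Placeholder path; set VMX_FILE to your VM's .vmx before running.
add_vmx_settings "${VMX_FILE:-./nested-test.vmx}"
```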
For those committed to WSL2, here's the modified kernel compilation approach that sometimes works:
```
# WSL2 kernel config modifications:
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
CONFIG_KVM_AMD=y
CONFIG_VIRTUALIZATION=y
```
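After building, WSL2 has to be pointed at the custom kernel from `%UserProfile%\.wslconfig`. The `kernel` key (and, on Windows 11, `nestedVirtualization`) are documented `.wslconfig` options under the `[wsl2]` section; the kernel path below is a placeholder you should replace with the real location of your built image:

```ini
[wsl2]
# Absolute Windows path to the custom-built kernel (backslashes must be doubled)
kernel=C:\\Users\\you\\wsl2-kernel\\vmlinux
# Windows 11 only: ask WSL2 to expose virtualization extensions to the guest
nestedVirtualization=true
```

Run `wsl --shutdown` afterwards so the next WSL2 session boots the new kernel.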
When using VMware as the workaround, here's the complete Firecracker setup process:
```shell
# Ubuntu VM setup for Firecracker
sudo apt update
sudo apt install -y curl git build-essential

# Download Firecracker (release assets are per-architecture .tgz archives)
export FIRECRACKER_VERSION=v1.1.0
ARCH="$(uname -m)"
curl -L "https://github.com/firecracker-microvm/firecracker/releases/download/${FIRECRACKER_VERSION}/firecracker-${FIRECRACKER_VERSION}-${ARCH}.tgz" | tar -xz

# Grant the current user access to /dev/kvm and confirm
sudo setfacl -m u:${USER}:rw /dev/kvm
ls -al /dev/kvm
```
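Once the binary is in place and /dev/kvm is accessible, a microVM is configured entirely through Firecracker's REST API on a Unix socket. The endpoint names below (`boot-source`, `drives`, `actions`) are Firecracker's documented API; the kernel/rootfs paths and socket location are placeholders. As a sketch, this script falls back to printing the requests when no socket exists, so you can see the shape of each call:

```shell
#!/bin/sh
# Drive Firecracker's API socket; dry-run (print the requests) if no socket.
SOCK="${SOCK:-/tmp/firecracker.socket}"
[ -S "$SOCK" ] || FC_DRY_RUN=1

fc_put() {  # fc_put <endpoint> <json-body>
  if [ -n "$FC_DRY_RUN" ]; then
    printf 'PUT /%s %s\n' "$1" "$2"
  else
    curl --unix-socket "$SOCK" -s -X PUT "http://localhost/$1" \
         -H 'Content-Type: application/json' -d "$2"
  fi
}

# Kernel, rootfs, then boot -- paths are placeholders for your own images.
fc_put boot-source   '{"kernel_image_path":"./vmlinux","boot_args":"console=ttyS0 reboot=k panic=1"}'
fc_put drives/rootfs '{"drive_id":"rootfs","path_on_host":"./rootfs.ext4","is_root_device":true,"is_read_only":false}'
fc_put actions       '{"action_type":"InstanceStart"}'
```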
The nested virtualization performance penalty varies significantly:
- WSL2 with Hyper-V: ~45% overhead
- VMware with VT-x: ~25% overhead
- Native KVM: Baseline
For development purposes, consider these cloud-based alternatives:
```shell
# AWS EC2 bare-metal instance (exposes VT-x directly, so KVM runs natively)
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type m5.metal \
  --key-name MyKeyPair \
  --security-group-ids sg-903004f8
```
Remember that Microsoft is actively working on improving WSL2's virtualization capabilities, so this landscape may change with future Windows updates.
Running nested KVM virtualization in WSL2 presents unique challenges due to Microsoft's hypervisor architecture. The fundamental issue stems from Hyper-V's Type-1 hypervisor design, which doesn't natively expose VT-x/AMD-V to guest VMs the way traditional Type-2 hypervisors do.
For WSL2, you'll need a custom-built Linux kernel with these critical configuration options:

```
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
CONFIG_KVM_AMD=y
CONFIG_KVM_MMU_AUDIT=y
CONFIG_VIRTUALIZATION=y
CONFIG_HYPERV=y
CONFIG_HYPERV_UTILS=y
```
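Whether the kernel you actually booted carries these options can be checked from inside the guest; this sketch reads `/proc/config.gz` (available when the kernel was built with `CONFIG_IKCONFIG_PROC`) and falls back to the conventional `/boot/config-*` file:

```shell
#!/bin/sh
# Report whether the running kernel was built with the KVM-related options.
if [ -r /proc/config.gz ]; then
  read_cfg() { zcat /proc/config.gz; }
elif [ -r "/boot/config-$(uname -r)" ]; then
  read_cfg() { cat "/boot/config-$(uname -r)"; }
else
  read_cfg() { :; }  # config not exposed; every option will report "not set"
fi

for opt in CONFIG_KVM CONFIG_KVM_INTEL CONFIG_KVM_AMD CONFIG_VIRTUALIZATION; do
  line=$(read_cfg | grep -E "^${opt}=")
  echo "${opt}: ${line:-not set}"
done
```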
When using VMware as the base hypervisor, these VM-level settings are mandatory:
- Enable "Virtualize Intel VT-x/EPT or AMD-V/RVI" in the VM's processor settings
- Add `vhv.enable = "TRUE"` to the .vmx configuration file
- Add `hypervisor.cpuid.v0 = "FALSE"` to expose hardware virtualization to the guest
After configuration, verify with these commands:
```shell
# Check CPU flags for virtualization support
grep -E 'svm|vmx' /proc/cpuinfo

# Verify KVM module loading
lsmod | grep kvm

# Test KVM functionality
kvm-ok || echo "KVM not working"
```
For running Firecracker microVMs, this systemd template unit works well; save it as /etc/systemd/system/firecracker@.service so that `%i` expands to the instance name (e.g. `systemctl start firecracker@dev`):

```ini
[Unit]
Description=Firecracker MicroVM (%i)
After=network.target

[Service]
ExecStart=/usr/bin/firecracker --api-sock /run/firecracker-%i.sock
Restart=always
User=root

[Install]
WantedBy=multi-user.target
```
When encountering "KVM not supported" errors, check:
- The Windows feature "Virtual Machine Platform" is enabled
- No conflicting hypervisors are running simultaneously
- BIOS virtualization extensions are properly enabled
- Windows Defender Credential Guard is disabled via Group Policy
If nested virtualization proves too problematic, consider:
- Running Firecracker directly on a cloud provider with nested virt support
- Using QEMU with TCG emulation (significantly slower)
- Setting up a dedicated bare-metal Linux server for development