Configuring KVM Guests on Same Subnet as Host: Bridge Networking Without Dedicated NIC


When working with KVM virtualization, the default NAT configuration creates an isolated network for guests, which breaks applications that rely on broadcast communication. The requirement here is to place VMs on the same L2 network segment as the host, using only the host's primary NIC.

The most effective approach is to create a Linux bridge that includes the host's physical interface. This allows virtual machines to appear as regular hosts on your network.

# Install bridge utilities
sudo apt-get install bridge-utils   # Ubuntu/Debian
sudo yum install bridge-utils       # RHEL/CentOS

# Create bridge configuration
sudo nano /etc/network/interfaces   # Ubuntu/Debian
sudo nano /etc/sysconfig/network-scripts/ifcfg-br0  # RHEL/CentOS

For Ubuntu/Debian:

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
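
The physical NIC itself should not request an address, or it will race the bridge for the DHCP lease; a minimal stanza, assuming eth0 is the uplink:

auto eth0
iface eth0 inet manual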

For RHEL/CentOS:

DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
STP=off

Once the bridge is up, point the VM's network interface at it by editing the domain XML with virsh:

<interface type='bridge'>
  <mac address='52:54:00:71:b1:b6'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
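
To apply the change, edit the guest definition and restart it ("myvm" is a placeholder name):

# Opens the domain XML in $EDITOR; replace or add the <interface> block
virsh edit myvm

# Restart the guest so the new interface is picked up
virsh shutdown myvm && virsh start myvm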

If a dedicated host bridge isn't feasible, macvtap in bridge mode offers another solution. Be aware of one limitation: in this mode the host and its guests cannot reach each other over the shared NIC, because most switches will not reflect frames back out of the port they arrived on:

<interface type='direct'>
  <mac address='52:54:00:71:b1:b6'/>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>

If the guest still has no connectivity, work through this checklist (example commands follow the list):

  • Ensure bridge-utils and virtio drivers are installed
  • Check for NetworkManager conflicts (disable it for the bridged interface if necessary)
  • Verify iptables/nftables aren't blocking traffic
  • Test connectivity between host and guests using ping/arp
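
As a quick sanity check (the IP and names are examples):

# eth0 and the VM's vnetX tap device should both appear as bridge members
brctl show br0

# Ping an example guest address, then confirm its MAC shows up in the
# neighbor table
ping -c 3 192.168.1.101
ip neigh show dev br0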

When placing VMs on the same subnet as the host, take these precautions (libvirt's clean-traffic filter, shown after the list, covers the last two points):

  • Implement proper firewall rules
  • Consider VLAN segregation if possible
  • Use MAC address filtering where appropriate
  • Monitor ARP traffic for potential spoofing
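
libvirt ships a predefined nwfilter, clean-traffic, that covers the MAC filtering and ARP anti-spoofing points; attaching it to a tap-based (bridge) interface is a one-line addition (it does not apply to macvtap interfaces):

<interface type='bridge'>
  <source bridge='br0'/>
  <filterref filter='clean-traffic'/>
  <model type='virtio'/>
</interface>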

When using KVM/QEMU's default NAT-based virtual network, guest VMs are isolated on a separate 192.168.122.0/24 subnet. While this works for basic connectivity, applications relying on broadcast communication (like clustering solutions or service discovery protocols) fail because broadcasts don't traverse the NAT boundary.
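
The isolation is visible in libvirt's default network definition:

# Look for <forward mode='nat'/> and the 192.168.122.0/24 address range
virsh net-dumpxml default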

The traditional approach involves creating a Linux bridge connected to the physical NIC:


# Create persistent bridge configuration (RHEL/CentOS; adjust addresses to your LAN)
nmcli connection add type bridge con-name br0 ifname br0
nmcli connection modify br0 ipv4.method manual ipv4.addresses 192.168.1.100/24 ipv4.gateway 192.168.1.1
nmcli connection add type bridge-slave con-name br0-port1 ifname eth0 master br0
# "eth0" below is the NAME of the existing connection profile, which may
# differ (e.g. "Wired connection 1"); list profiles with: nmcli connection show
nmcli connection down eth0; nmcli connection up br0
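
Once the bridge is active, confirm that eth0 was actually enslaved:

# The br0 profile should be active, and eth0 should be listed as a bridge port
nmcli -f NAME,TYPE,DEVICE connection show --active
bridge link show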

However, this requires either:

  • Dedicating a physical NIC to the bridge (not feasible with single-NIC hosts)
  • Temporarily losing host connectivity during bridge setup

For hosts with a single NIC, macvtap in bridge mode provides a cleaner solution:


# XML snippet for VM network interface
<interface type='direct'>
  <mac address='52:54:00:4a:12:35'/>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
  <!-- The PCI <address> element is optional; omit it and libvirt assigns one -->
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Key characteristics:

  • Guests appear as separate hosts on the same physical network
  • Broadcast packets are properly forwarded
  • No IP address conflicts, as long as each VM has a unique MAC and IP
  • Host-to-guest traffic over the shared NIC does not work in bridge mode, so test reachability from another machine on the LAN

If you're experiencing connectivity problems:


# Check that the macvlan/macvtap kernel modules are loaded
lsmod | grep -E 'macvlan|macvtap'

# Verify interface creation (libvirt names the devices macvtapN)
ip link show type macvtap

# Probe the guest's address from ANOTHER machine on the LAN; in bridge
# mode the host cannot reach its own macvtap guests
arping -I eth0 192.168.1.101

Common pitfalls (the packet capture shown below helps diagnose all three):

  • Switch port security blocking multiple MACs
  • Missing VLAN tags when using tagged networks
  • Firewall rules blocking bridge traffic

For more complex scenarios, Open vSwitch provides additional flexibility:


# Install OVS (Ubuntu example)
apt install openvswitch-switch

# Create OVS bridge and move the host's IP from eth0 to br0
# (run from a console: connectivity drops while the address moves)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ip addr flush dev eth0
ip addr add 192.168.1.100/24 dev br0
ip link set br0 up
ip route add default via 192.168.1.1 dev br0
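
Guests attach to an OVS bridge with an extra virtualport element in their libvirt interface definition:

<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>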

OVS advantages:

  • Better performance for many VMs
  • Advanced features like QoS and flow control (see the rate-limit example below)
  • Support for VXLAN and other overlay networks
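
As a concrete illustration of the QoS point, OVS can rate-limit a guest's port directly (vnet0 and the rates are examples):

# Cap traffic coming from the guest attached to vnet0 at roughly 10 Mbps
ovs-vsctl set interface vnet0 ingress_policing_rate=10000   # kbps
ovs-vsctl set interface vnet0 ingress_policing_burst=1000   # kbps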