Troubleshooting Jumbo Frame MTU Issues Between KVM Guests and Host Using Linux Bridge



When implementing 9000-byte jumbo frames for storage communication in a KVM virtualization environment, the setup appears correct at first glance:

# Host bridge configuration
host# ip link show br1
8: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
    link/ether fe:54:00:50:f3:55 brd ff:ff:ff:ff:ff:ff

# Guest interface configuration  
guest# ip addr show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:50:f3:55 brd ff:ff:ff:ff:ff:ff

While a regular ping works fine, large packets fail silently. The critical missing piece is often the MTU of the tap (vnetX) interface that connects the guest to the bridge. Here's how to verify:

# Check vnet interface MTU
host# ip link show vnet2
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UNKNOWN
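
A quick, non-persistent workaround is to raise the tap MTU directly. Keep in mind that vnetX devices are recreated every time the VM starts, so this change is lost on the next restart:

host# ip link set dev vnet2 mtu 9000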

To properly enable end-to-end jumbo frame support:

  1. Set the bridge MTU:

     host# ip link set dev br1 mtu 9000

  2. Configure libvirt to use the correct MTU:

     <interface type="bridge">
       <source bridge="br1"/>
       <model type="virtio"/>
       <mtu size="9000"/>
     </interface>

  3. Verify all components in the path:

     # Check all interface MTUs in the path
     host# for intf in br1 vnet2; do ip link show $intf | grep mtu; done
     8: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
     11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master br1 state UNKNOWN
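
Steps 1 and 2 cover the running system and the domain definition. To make the bridge MTU survive host reboots, set it in whatever persistent network configuration your distribution uses. As a minimal sketch for systemd-networkd (the file name is an assumption; the bridge name matches the examples above):

# /etc/systemd/network/br1.network
[Match]
Name=br1

[Link]
MTUBytes=9000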

After configuration, test with large ICMP packets:

host# ping -M do -s 8972 -c 4 172.16.64.10
PING 172.16.64.10 (172.16.64.10) 8972(9000) bytes of data.
8980 bytes from 172.16.64.10: icmp_seq=1 ttl=64 time=0.543 ms
8980 bytes from 172.16.64.10: icmp_seq=2 ttl=64 time=0.511 ms

--- 172.16.64.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss
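
For comparison, before the tap MTU is raised the same test fails silently: the bridge drops any frame larger than the egress port's MTU, and no ICMP error comes back, so the only symptom is 100% loss (illustrative output):

host# ping -M do -s 8972 -c 4 172.16.64.10
PING 172.16.64.10 (172.16.64.10) 8972(9000) bytes of data.

--- 172.16.64.10 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss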

If issues persist:

  • Check for VLAN tagging with ip -d link show
  • Verify firewall isn't blocking large packets: iptables -L -v -n
  • Inspect kernel messages: dmesg | grep -i drop
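
tracepath (from the iputils package) is also useful here: it discovers the path MTU hop by hop without requiring root, showing exactly where the MTU falls below 9000 (illustrative output against the same target):

host# tracepath -n 172.16.64.10
 1?: [LOCALHOST]                      pmtu 9000
 1:  172.16.64.10                     0.521ms reached
     Resume: pmtu 9000 hops 1 back 1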

Stepping back: when implementing a 9000-byte MTU for storage traffic in KVM environments, many administrators hit unexpected packet loss despite apparently correct interface configurations. The telltale symptom is that ICMP packets larger than the standard 1500-byte Ethernet frame fail to traverse the virtual network path. Re-verify each layer:

# Verify bridge MTU setting
ip link show br1

# Check guest interface MTU
virsh domiflist vm_name
virsh dumpxml vm_name | grep -A5 "interface"

# Inspect vnet device MTU on host
ip link show vnet2

The Linux bridge-tap combination often introduces MTU constraints that aren't immediately visible:

  • Guest virtio-net drivers from kernels older than 4.8 may silently drop jumbo frames (a quick check follows this list)
  • Network namespaces can override interface MTU settings
  • QEMU process may need explicit MTU parameters
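
A quick way to check the first point is to confirm, inside the guest, that the interface really uses virtio_net and which guest kernel is running (illustrative output; interface name as in the earlier examples):

guest# ethtool -i eth2 | head -3
driver: virtio_net
version: 1.0.0
firmware-version:
guest# uname -r
4.4.0-210-generic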

For reliable jumbo frame operation, implement this configuration sequence:

# On the host:
ip link set dev br1 mtu 9000
ip link set dev vnet2 mtu 9000

# In the guest VM XML configuration:
<interface type='bridge'>
  <source bridge='br1'/>
  <model type='virtio'/>
  <mtu size='9000'/>
  <driver name='vhost' queues='4'/>
</interface>

# After VM startup, set the MTU inside the guest and confirm it:
ip link set dev eth2 mtu 9000
ip link show eth2 | grep mtu
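
Because the vnetX taps are recreated on every VM start, hand-set MTUs do not persist; the <mtu> element above (available since libvirt 3.1) handles this automatically. On older libvirt versions, a small loop over the bridge's ports, run from a libvirt hook or after VM startup, can patch things up. A minimal sketch, assuming the bridge is named br1:

# Raise the MTU on every port currently attached to br1
for port in /sys/class/net/br1/brif/*; do
    ip link set dev "$(basename "$port")" mtu 9000
done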

When standard checks don't reveal the issue, use these deeper inspection methods:

# Capture full frames on the bridge (snaplen 0 avoids truncating jumbo packets)
tcpdump -i br1 -s 0 -w bridge_capture.pcap

# Check kernel ring buffer for MTU-related errors
dmesg | grep -i mtu

# Verify vhost-net module parameters
cat /sys/module/vhost_net/parameters/experimental_zcopytx
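
To watch jumbo traffic live instead of writing a capture file, a pcap size filter helps; the filter "greater 1500" matches any frame longer than a standard Ethernet payload:

# Show only frames larger than 1500 bytes crossing the bridge
tcpdump -i br1 -nn -e greater 1500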

For storage-heavy workloads, consider these additional tweaks (example commands follow the list):

  • Enable multiqueue virtio with queues matching vCPU count
  • Set txqueuelen to 10000 on virtual interfaces
  • Disable bridge STP unless absolutely required
  • Consider using macvtap in bridge mode for reduced overhead
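
Illustrative commands for the first three tweaks, assuming a 4-vCPU guest and the interface names used throughout this article:

# Inside the guest: enable 4 virtio queues (matches queues='4' in the XML)
ethtool -L eth2 combined 4

# On the host: deepen the tap's transmit queue
ip link set dev vnet2 txqueuelen 10000

# On the host: disable STP on the bridge if nothing requires it
ip link set dev br1 type bridge stp_state 0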