Troubleshooting Hyper-V Guest Network Bottleneck: Fixing 100Mbit Limit on 1Gbps Host Connection


When dealing with Hyper-V virtualization on Windows Server 2016, a common performance issue surfaces where guests experience network throughput capped at approximately 100Mbit (10MB/s) despite the host having full 1Gbps capability. This manifests specifically when:

  • Host-to-host transfers achieve ~90MB/s
  • Guest local disk operations show 100MB/s+
  • Guest transfers over network shares are capped at ~10MB/s

The HP ProLiant ML350 G6's Broadcom NetXtreme NICs should theoretically support line-rate gigabit throughput. Key observations from the configuration:

# Sample PowerShell to verify host NIC status
Get-NetAdapter | Where-Object {$_.InterfaceDescription -like "*Broadcom*"} | 
Select-Object Name, InterfaceDescription, Status, LinkSpeed

Typical output for healthy 1Gbps connection:

Name       InterfaceDescription          Status LinkSpeed
----       --------------------          ------ ---------
Ethernet 2 Broadcom NetXtreme Gigabit... Up     1 Gbps

The virtual switch manager screenshot reveals common misconfiguration points:

  • Virtual Switch Type: External vs Internal vs Private
  • Bandwidth Management: Default vs Minimum/Maximum settings
  • VLAN Identification: Potential tagging conflicts

Critical PowerShell verification:

Get-VMSwitch | Select-Object Name, SwitchType, BandwidthReservationMode
Get-VMNetworkAdapter -VMName "YourVMName" | Select-Object Name, Status, IPAddresses
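
As a complement to the switch-level checks, the per-VM adapter bandwidth and VLAN configuration can be inspected directly. A sketch along these lines (the VM name is a placeholder) should expose any hidden cap or stray VLAN tag:

# Show any configured bandwidth limit and VLAN tag for one VM's adapter(s)
Get-VMNetworkAdapter -VMName "YourVMName" |
    Select-Object Name, @{Name="MaxBandwidth"; Expression={$_.BandwidthSetting.MaximumBandwidth}}
Get-VMNetworkAdapterVlan -VMName "YourVMName"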

The synthetic network adapter in Hyper-V guests often benefits from offload tuning, and the host's physical NIC may need VMQ adjustments:

# In the guest: disable TCP checksum offload and receive segment coalescing
# (both are frequent causes of poor VM network throughput)
Disable-NetAdapterChecksumOffload -Name "Ethernet" -TcpIPv4
Disable-NetAdapterRsc -Name "Ethernet"

# On the host: tune VMQ on the physical NIC bound to the virtual switch
# (substitute the physical adapter name, e.g. "Ethernet 2"; relevant for Broadcom NICs)
Set-NetAdapterVmq -Name "Ethernet" -BaseProcessorNumber 0 -MaxProcessors 4
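
Broadcom NetXtreme adapters of this generation are widely reported to have troublesome VMQ behavior under Hyper-V, so if tuning does not help, temporarily disabling VMQ on the host's physical NIC is a reasonable diagnostic step (the adapter name below is a placeholder for the NIC bound to the virtual switch):

# On the host: inspect VMQ state, then disable it as a test
Get-NetAdapterVmq
Disable-NetAdapterVmq -Name "Ethernet 2"
# Re-enable afterwards if it made no difference:
# Enable-NetAdapterVmq -Name "Ethernet 2"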

When basic configuration checks don't resolve the bottleneck:

  1. Driver Verification:
    pnputil /enum-drivers | findstr /i "netxtreme"
    

    Ensure you are using the latest Broadcom BCM57xx drivers (v17.x or later)

  2. MTU Consistency Check:
    ping -f -l 1472 target_ip  # Verify no fragmentation
    
  3. Performance Monitor Counters:
    • \Hyper-V Virtual Network Adapter(*)\Bytes Received/sec
    • \Hyper-V Virtual Switch(*)\Dropped Packets Incoming/sec and Outgoing/sec (see the sampling example below)
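
These counters can be sampled straight from PowerShell rather than through Performance Monitor; a rough sketch (exact counter names can differ slightly between Windows Server builds):

# Sample the virtual adapter throughput counter ten times at 2-second intervals
Get-Counter -Counter "\Hyper-V Virtual Network Adapter(*)\Bytes Received/sec" -SampleInterval 2 -MaxSamples 10

# List every counter in the virtual switch set to find the exact dropped-packet names
(Get-Counter -ListSet "Hyper-V Virtual Switch").Counter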

In a similar case on Server 2016, applying these registry tweaks resolved the issue:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VMSMP\Parameters]
"MaxRxBuffers"=dword:00004000
"MaxTxBuffers"=dword:00004000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NDIS\Parameters]
"NumRxBuffers"=dword:00004000

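If scripting is preferred over importing a .reg file, the same values can be written with PowerShell; this sketch simply mirrors the keys above (the Parameters subkeys may need to be created first, and a reboot is typically required before the change takes effect):

# Apply the buffer tweaks from an elevated PowerShell prompt on the host
$vmsmp = "HKLM:\SYSTEM\CurrentControlSet\Services\VMSMP\Parameters"
New-ItemProperty -Path $vmsmp -Name "MaxRxBuffers" -PropertyType DWord -Value 0x4000 -Force
New-ItemProperty -Path $vmsmp -Name "MaxTxBuffers" -PropertyType DWord -Value 0x4000 -Force
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" -Name "NumRxBuffers" -PropertyType DWord -Value 0x4000 -Force
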
Use iPerf3 for objective throughput testing between host and guest:

# On host (server mode):
iperf3 -s

# On guest (client mode):
iperf3 -c host_ip -t 60 -P 4

Expected results should show 800Mbps+ in both directions for healthy 1Gbps links.
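
Running the same test in reverse from the guest quickly shows whether the bottleneck is asymmetric (iperf3's -R flag makes the server side transmit):

# Reverse mode: host sends, guest receives
iperf3 -c host_ip -t 60 -P 4 -R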

When persistent issues occur with synthetic adapters, consider:

# Add a legacy network adapter as a temporary diagnostic (Generation 1 VMs only)
Add-VMNetworkAdapter -VMName "YourVM" -IsLegacy $true

# Configure SR-IOV if supported
Set-VMNetworkAdapter -VMName "YourVM" -IovWeight 100
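
SR-IOV only helps if it was enabled when the external switch was created and the hardware exposes it, so it is worth verifying support before assigning an IovWeight; a quick check might look like this:

# Check whether the virtual switch and host NIC actually support SR-IOV
Get-VMSwitch | Select-Object Name, IovEnabled, IovSupport, IovSupportReasons

# SR-IOV can only be enabled at switch creation time, e.g.:
# New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true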

To put this in context, here is the specific case I ran into with Hyper-V on Windows Server 2016. While the host delivered full gigabit speed (~90MB/s file transfers), the guest was throttled to almost exactly 100Mbit (10MB/s), the classic signature of a Fast Ethernet limit.

Host System:
- HP ProLiant ML350 G6
- Dual Broadcom NetXtreme Gigabit NICs
- Windows Server 2016 (fully updated)
- Hyper-V role enabled

Virtual Network Configuration:
- External virtual switch bound to physical NIC
- Synthetic network adapter for guest
- No bandwidth limitations set
- VLAN identification disabled

Before diving deeper, let's eliminate common suspects:

  1. Confirmed disk I/O performance exceeds network throughput (100MB/s internal transfers)
  2. Verified network cables and switches support Gigabit
  3. Checked for driver updates on both host and guest
  4. Tested with both synthetic and legacy network adapters

The breakthrough came when examining the VM's virtual network adapter settings in PowerShell:

Get-VMNetworkAdapter -VMName "YourVMName" | Select-Object *

The output revealed that the MaximumBandwidth parameter was unexpectedly set to 100Mbps, even though no explicit QoS policy had been configured.
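
Because Select-Object * is noisy, a narrower query (VM name is a placeholder) makes the cap easy to spot; MaximumBandwidth is stored in bits per second, so a 100Mbit limit shows up as 100000000:

# Show only the bandwidth-related settings for the VM's adapter
Get-VMNetworkAdapter -VMName "YourVMName" |
    Select-Object -ExpandProperty BandwidthSetting |
    Select-Object MinimumBandwidthAbsolute, MinimumBandwidthWeight, MaximumBandwidth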

To fix this, we need to modify the VM's network adapter settings:

# Remove any bandwidth limitation
Set-VMNetworkAdapter -VMName "YourVMName" -MaximumBandwidth 0

Alternatively, via Hyper-V Manager:

  1. Right-click the VM → Settings
  2. Select the Network Adapter
  3. Under Bandwidth Management, ensure "Enable bandwidth management" is unchecked
  4. Click OK to apply
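
Before re-testing, it is worth confirming the cap is actually gone; with the limit removed, MaximumBandwidth should report 0 (or the BandwidthSetting may be empty):

# Verify the adapter no longer carries a bandwidth cap
(Get-VMNetworkAdapter -VMName "YourVMName").BandwidthSetting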

After applying these changes, perform throughput validation:

# On guest OS:
iperf3 -c your.host.ip -t 30

# Expected output:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-30.00  sec   3.15 GBytes   902 Mbits/sec

For environments requiring precise control, consider these registry tweaks on the host:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization]
"MaxNicBandwidth"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VMSMP\Parameters]
"NicMaxQueuePairs"=dword:00000008

The root cause appears to be a Hyper-V default setting that inexplicably caps new VM network adapters. This behavior persists across:

  • Windows Server 2016 (1607) through 2022
  • Both generation 1 and 2 VMs
  • Various network adapter types (synthetic/legacy)

For reference, the final working configuration:

Component            Setting            Value
---------            -------            -----
Virtual Switch       Type               External
VM Network Adapter   Bandwidth Limit    Unlimited (0)
Host NIC             Jumbo Packets      9014 bytes
Guest OS             RSS Queues         4 (for a 4-core VM)
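
For completeness, the jumbo frame and RSS values from the table can also be applied from PowerShell; the adapter names below are placeholders, and jumbo frames only pay off when every device in the path (NICs, switch, peers) accepts 9014-byte frames:

# Host: enable 9014-byte jumbo packets on the physical NIC backing the virtual switch
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Guest: align RSS with the four vCPUs assigned to the VM
Set-NetAdapterRss -Name "Ethernet" -MaxProcessors 4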