When implementing network bonding with two 1Gbps adapters, many engineers expect to see a combined 2Gbps of throughput. However, iperf often reports only ~930Mbps, essentially the speed of a single NIC. This occurs even with a correct LACP (802.3ad) configuration on both the servers and the switch.
Different bonding modes yield different throughput characteristics:
# Common bonding modes and their typical throughput:
balance-rr (Round Robin) → ~940Mbps+ (the only mode that can stripe a single flow across NICs; packet reordering limits real gains)
802.3ad (LACP) → ~520-940Mbps (asymmetric)
balance-xor → ~940Mbps
active-backup → ~940Mbps
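To confirm which mode a bond is actually running (bond0 assumed as the bond name), query the kernel's bonding status file:
# Show the active mode and per-slave link status:
grep -E 'Bonding Mode|Slave Interface|MII Status' /proc/net/bonding/bond0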
The fundamental constraint comes from per-flow load balancing rather than TCP itself. Even with bonded interfaces (a quick way to observe this follows the list):
- Each TCP connection gets hashed to a single physical NIC
- iperf's default single-threaded test (-P 1) can't utilize multiple paths
- LACP load balancing works per-flow, not per-packet
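A quick way to observe per-flow pinning, assuming slaves named eth0 and eth1: run a single-stream iperf and watch the per-slave TX counters; only one of them should climb.
# Watch per-slave TX byte counters during a single-stream test:
watch -n1 'for s in eth0 eth1; do
  printf "%s: %s TX bytes\n" "$s" "$(cat /sys/class/net/$s/statistics/tx_bytes)"
done'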
To properly test bonded throughput, use parallel streams:
# Server:
iperf -s
# Client (10 parallel streams):
iperf -c server_ip -P 10 -t 60 -i 10
This should show aggregated throughput approaching 2Gbps if:
- Switch LACP configuration is correct
- Traffic uses multiple TCP/UDP flows
- No other bottlenecks exist (CPU, disk, etc.; quick checks below)
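Quick sanity checks for these conditions, assuming slaves eth0/eth1 and the sysstat package for mpstat:
# Verify negotiated speed/duplex on each slave:
ethtool eth0 | grep -E 'Speed|Duplex'
ethtool eth1 | grep -E 'Speed|Duplex'
# Watch for a single saturated core while the test runs:
mpstat -P ALL 1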
For the Netgear GS728TS, verify these LACP settings:
# Illustrative snippet (the GS728TS is normally configured through its web UI; CLI syntax varies by firmware):
configure lacp ports 1,2 actor-key 1
configure lacp ports 1,2 admin-key 1
enable lacp ports 1,2
Critical parameters to check (then confirm negotiation from the Linux side, as shown after the list):
- LAG hashing algorithm (should be layer2+3)
- Flow control settings (IEEE 802.3x enabled)
- Port speed/duplex auto-negotiation
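Whatever the switch UI reports, the Linux side shows whether LACP actually negotiated; a non-zero Partner Mac Address and the same Aggregator ID on both slaves indicate a healthy LAG (exact field names vary slightly by kernel version):
# Inspect 802.3ad negotiation state:
grep -A8 '802.3ad info' /proc/net/bonding/bond0
grep -iE 'aggregator id|partner mac' /proc/net/bonding/bond0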
To identify where packets are actually flowing:
# Check interface statistics:
cat /proc/net/bonding/bond0
# Real-time traffic monitoring:
iftop -i bond0
nload -m bond0 eth0 eth1   # -m shows the bond and both slaves side by side
For deeper analysis:
# Capture traffic on physical interfaces:
tcpdump -i eth0 -w eth0.pcap
tcpdump -i eth1 -w eth1.pcap
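Comparing the two captures then shows how the hash spread the flows. capinfos ships with Wireshark; a plain tcpdump packet count works as a fallback:
# Packet counts per physical interface:
capinfos -c eth0.pcap eth1.pcap
# Or, without Wireshark's CLI tools:
tcpdump -r eth0.pcap 2>/dev/null | wc -l
tcpdump -r eth1.pcap 2>/dev/null | wc -l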
Consider these tools for more comprehensive testing:
# Multi-threaded UDP testing:
iperf3 -c server_ip -u -b 250M -P 8   # -b is per stream: 8 × 250Mbps ≈ 2Gbps aggregate
# Sockperf for latency measurement:
sockperf ping-pong -i server_ip --tcp
Here's a complete bonding configuration for Ubuntu (ifupdown; the slave stanzas are required so both NICs are enslaved at boot, and the option key is bond-slaves, not slaves):
# /etc/network/interfaces
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-xmit-hash-policy layer2+3
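After bringing the bond up (ifup bond0 or a reboot), it's worth confirming that the mode took effect and both slaves were enslaved:
# Confirm the bond is in 802.3ad mode with both slaves attached:
ip -d link show bond0
grep -E 'Slave Interface|MII Status' /proc/net/bonding/bond0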
Linux bonding offers several modes with different characteristics:
# Common bonding modes:
modprobe bonding mode=0 # balance-rr (Round Robin)
modprobe bonding mode=4 # 802.3ad (LACP)
modprobe bonding mode=6 # balance-alb (Adaptive Load Balancing)
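Module parameters apply to every bond created at load time. The sysfs interface sets the mode per bond instead; note the mode can only be changed while the bond is down and has no slaves:
# Set the mode on an existing bond via sysfs:
ip link set bond0 down          # detach slaves first if any are enslaved
echo 802.3ad > /sys/class/net/bond0/bonding/mode
ip link set bond0 up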
iperf measures single TCP stream performance by default. Since most bonding modes (including LACP) maintain flow consistency (same NIC for a given connection), you'll never exceed single NIC bandwidth for a single stream. This explains your 930-945Mbps results.
Your multi-connection test (-P 10) should theoretically aggregate bandwidth, but several factors can prevent this:
# Better multi-stream test command:
iperf3 -c 192.168.1.2 -P 8 -t 30 -O 3 -i 1
The RX/TX imbalance you observe in 802.3ad mode is actually expected behavior. LACP uses hashing algorithms (typically layer2+3 or layer3+4) to determine path selection:
# Check your hash policy:
cat /sys/class/net/bond0/bonding/xmit_hash_policy
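If parallel streams between the same two hosts keep hashing onto one slave, layer3+4 adds TCP/UDP ports to the hash input so they can spread out. The kernel docs note layer3+4 is not strictly 802.3ad-compliant (fragmented traffic can be split across links), but it is widely used:
# Switch the transmit hash to layer3+4 (IP + port based); some kernels
# require the bond to be down for this change:
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
cat /sys/class/net/bond0/bonding/xmit_hash_policy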
Your Netgear switch needs proper LACP configuration:
# Sample LACP config for Cisco IOS (the concepts map to Netgear's LAG settings);
# note channel-group belongs on the physical member ports, not the Port-channel:
interface range GigabitEthernet0/1 - 2
 channel-protocol lacp
 channel-group 1 mode active
!
interface Port-channel1
 description LACP to ServerA
 switchport mode trunk
For comprehensive testing, consider these tools:
# Use nuttcp for parallel TCP testing (8 streams, 4MB windows, 30 seconds):
nuttcp -T30 -N8 -w4m 192.168.1.2
# Or run multiple iperf3 instances in parallel (servers below must be started first):
for i in {1..4}; do
  iperf3 -c 192.168.1.2 -p 520$i -t 20 &
done
wait
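Each client instance needs a matching listener, so start the servers on 192.168.1.2 first:
# One daemonized iperf3 server per port:
for i in {1..4}; do
  iperf3 -s -p 520$i -D
done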
Several factors affect aggregated throughput:
- PCIe bus saturation
- CPU affinity/interrupt balancing
- TCP window scaling
- NIC queue configuration
Check these settings for optimal performance:
# Check interrupt balancing:
cat /proc/interrupts | grep eth
# Pin IRQ 24 (an example IRQ number taken from /proc/interrupts) to CPU1 (bitmask 0x2):
echo 2 > /proc/irq/24/smp_affinity
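One caveat: the irqbalance daemon will periodically rewrite manual affinity masks, so stop it before pinning IRQs by hand (a systemd-based distro is assumed):
# Stop irqbalance so manual IRQ pinning sticks:
systemctl stop irqbalance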