When configuring Linux network bonding with mode=0 (balance-rr), packets are transmitted sequentially through each interface in the bond. This round-robin approach differs fundamentally from LACP (802.3ad) which requires switch support.
# Example /etc/network/interfaces configuration:
auto bond0
iface bond0 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.1
bond-mode 0
bond-slaves eth0 eth1
bond-miimon 100
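If you prefer configuring at runtime rather than editing /etc/network/interfaces, the equivalent can be sketched with iproute2 (requires root; interface names and addresses simply mirror the static example above):

```
# Create the bond and enslave the NICs at runtime (root required).
# eth0/eth1 and the addresses mirror the static example; adjust for your hardware.
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip addr add 192.168.1.100/24 dev bond0
ip link set bond0 up
ip route add default via 192.168.1.1
```

Slaves must be down before enslaving; the bond inherits the first slave's MAC address by default.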
Balance-rr will technically work with any Ethernet switch, including:
- Unmanaged/dumb switches
- Basic managed switches without LACP
- Enterprise switches (though not recommended as primary aggregation)
A critical limitation exists: balance-rr rarely scales single-session TCP throughput with the number of links. Unlike hash-based modes, round-robin does stripe one flow's packets across all slaves, but the resulting out-of-order arrivals look like loss to TCP:
- A single TCP flow is striped across interfaces, yet reordering caps its realized throughput well below the aggregate
- Multiple concurrent connections distribute across interfaces and aggregate bandwidth reliably
- UDP streams, which tolerate reordering, may see improved throughput for large packet bursts
Testing with iperf3 demonstrates the behavior:
# Single TCP stream (limited to single interface speed)
iperf3 -c server -P 1
# Multiple streams (utilizes full bond bandwidth)
iperf3 -c server -P 4
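Since UDP tolerates reordering, it is worth measuring separately; this assumes an iperf3 server is already listening on `server`:

```
# UDP test pushing toward the aggregate rate of a 2 x 1 GbE bond
iperf3 -c server -u -b 2G
```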
The key metric is packets-per-second (PPS) capacity rather than raw bandwidth for single connections.
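That PPS figure can be sampled straight from the kernel's sysfs counters. A minimal sketch (it defaults to `lo` only so it runs anywhere; pass `bond0` on a bonded host):

```shell
#!/bin/sh
# Rough TX packets-per-second probe for one interface.
# Usage: pps.sh [interface]   (defaults to lo; use bond0 on a bonded host)
IFACE="${1:-lo}"
STAT="/sys/class/net/$IFACE/statistics/tx_packets"
before=$(cat "$STAT")
sleep 1
after=$(cat "$STAT")
echo "$IFACE tx pps: $((after - before))"
```

Run it against each slave as well as bond0 to see how evenly round-robin is spreading the transmit load.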
For optimal balance-rr performance:
# Recommended sysctl settings:
# raise TCP's reordering tolerance well above the default of 3 so striped
# packets arriving out of order are not mistaken for loss
net.ipv4.tcp_reordering=127
net.core.netdev_max_backlog=5000
net.core.dev_weight=512
# Ethtool settings for each slave (shown for eth0; repeat for eth1):
ethtool -K eth0 tx-udp_tnl-segmentation off
ethtool -K eth0 tx-checksum-ip-generic on
Because balance-rr operates at Layer 2 without requiring switch participation, it is fundamentally different from LACP-based modes: the round-robin algorithm transmits each successive packet on the next slave interface without any frame modification or coordination with network equipment.
However, consider some practical limitations. First, confirm the mode actually in use:
# Check current bonding status
$ cat /proc/net/bonding/bond0
Bonding Mode: load balancing (round-robin)
Despite striping across multiple active paths, balance-rr does not reliably increase single TCP session throughput, due to:
- TCP's in-order packet delivery requirement
- Potential out-of-order packet issues when using different paths
- Switch MAC address learning behavior
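The reordering tolerance TCP applies is visible directly in procfs (a standard Linux path, no root needed); once segments arrive further out of order than this value, TCP treats them as lost and retransmits:

```shell
# Kernel's tolerated out-of-order distance before fast retransmit (default 3)
reordering=$(cat /proc/sys/net/ipv4/tcp_reordering)
echo "tcp_reordering = $reordering"
```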
Balance-rr shines in these scenarios:
- Multiple parallel TCP connections (HTTP servers, NFS)
- UDP-based applications (VoIP, video streaming)
- Load distribution across multiple client connections
A repeated multi-stream run makes the aggregate gain visible:
#!/bin/bash
# Performance comparison script example: five 30-second runs, 10 parallel streams
for i in {1..5}; do
    iperf3 -c server -t 30 -P 10 | grep "receiver"
done
For optimal performance with balance-rr:
# sysctl tuning for better performance
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
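Before applying these, it can help to record the current values as a baseline (reads straight from procfs; no root needed):

```shell
# Print current socket-buffer limits before tuning, for comparison later
for key in net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem; do
    path="/proc/sys/$(echo "$key" | tr '.' '/')"
    printf '%s = %s\n' "$key" "$(cat "$path")"
done
```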
Remember to test with ethtool to verify that interface speeds and duplex settings match on all bonded interfaces.