When running iPerf between interfaces on the same machine, you might encounter misleadingly high throughput results (like 29Gb/s) because the traffic is being routed through the loopback interface instead of the physical NICs. Here's how to force proper interface binding:
# On first terminal (server):
iperf3 -s -B 192.168.1.100 -p 5201
# On second terminal (client):
iperf3 -c 192.168.1.100 -B 192.168.1.101 -p 5201 -t 30
Key parameters to note:
- -B binds to specific IP/interface
- Use actual IPs, not interface names (eth0/eth1)
- Different ports help avoid conflicts
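To double-check that the client really bound the address you asked for, inspect the live connection while the test runs. A quick sketch, assuming the iproute2 `ss` tool is available; the addresses are the ones from the example above:

```shell
# While the 30-second test is running, list TCP flows on the iperf3 port.
# The local-address column should show 192.168.1.101, not 127.0.0.1.
ss -tn 'dport = :5201 or sport = :5201'
```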
Always monitor interface counters during tests:
# Linux:
watch -n 1 'cat /proc/net/dev | grep eth'
# Windows (Ethernet statistics, refreshed every 5 seconds):
netstat -e 5
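On Linux you can also compute the per-second byte rate directly from /proc/net/dev instead of eyeballing raw counters. A small sketch; the interface name is an argument, and "lo" in the demonstration call is just a placeholder for one of your test NICs:

```shell
# Print the RX/TX byte rate of one interface over a 1-second window.
ifrate() {
  ifc="$1"
  # sed splits the name from the counters; fields 2 and 10 are RX/TX bytes.
  set -- $(sed 's/:/ /' /proc/net/dev | awk -v i="$ifc" '$1==i {print $2, $10}')
  [ -n "$1" ] || { echo "$ifc: not found" >&2; return 1; }
  rx1=$1; tx1=$2
  sleep 1
  set -- $(sed 's/:/ /' /proc/net/dev | awk -v i="$ifc" '$1==i {print $2, $10}')
  echo "$ifc: rx $(( $1 - rx1 )) B/s, tx $(( $2 - tx1 )) B/s"
}
ifrate lo
```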
When iPerf proves problematic, consider:
# ntttcp (Windows/Linux) - the -m mapping names the receiver's address on both sides:
ntttcp -r -m 4,*,192.168.1.100 -t 30
ntttcp -s -m 4,*,192.168.1.100 -t 30
For more precise measurements between local interfaces:
# Using specific sockets and window sizes
iperf3 -s -B 192.168.1.100 -p 5201 -w 128K
iperf3 -c 192.168.1.100 -B 192.168.1.101 -p 5201 -w 128K -O 2 -P 4
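If you script these runs, iperf3's JSON output (-J) is easier to parse than the human-readable table. A sketch assuming jq is installed:

```shell
# Same client invocation, but emit JSON and extract the receiver-side rate.
iperf3 -c 192.168.1.100 -B 192.168.1.101 -p 5201 -w 128K -O 2 -P 4 -J \
  | jq '.end.sum_received.bits_per_second'
```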
If results still look wrong, check for:
- Firewall rules blocking test traffic
- Interface MTU mismatches
- Kernel bypass optimizations
- Virtual interface quirks (VLANs, bridges)
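One caveat worth knowing: on Linux, even with -B and separate subnets, the kernel short-circuits traffic between two locally owned addresses through the loopback path. The reliable way to push packets across the physical NICs is to cable the two ports together and move one of them into a network namespace, so the kernel treats it as a remote host. A sketch under those assumptions (interface names and addresses are examples; requires root):

```shell
# Move eth1 into its own namespace so eth0 -> eth1 traffic must leave the host.
ip netns add test
ip link set eth1 netns test
ip netns exec test ip addr add 192.168.1.101/24 dev eth1
ip netns exec test ip link set eth1 up
ip netns exec test ip link set lo up

# On first terminal (server inside the namespace):
ip netns exec test iperf3 -s -B 192.168.1.101
# On second terminal (client in the default namespace):
iperf3 -c 192.168.1.101 -B 192.168.1.100 -t 30
```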
When measuring network throughput between two interfaces on the same machine with iPerf, many administrators hit a common pitfall: by default the traffic bypasses the physical network interfaces entirely, so the misleadingly high numbers actually represent internal loopback transfers.
The correct approach requires careful interface binding on both client and server instances. Here's the proper syntax:
# On first terminal (server on eth0):
iperf3 -s -B 192.168.1.100
# On second terminal (client on eth1):
iperf3 -c 192.168.1.100 -B 192.168.1.101
To confirm traffic is actually traversing your physical interfaces:
# Linux:
sudo tcpdump -i eth0 -nn -vv tcp port 5201
sudo tcpdump -i eth1 -nn -vv tcp port 5201
# Windows (tshark ships with Wireshark, which captures via Npcap):
tshark -i "Ethernet 1" -f "tcp port 5201"
When iPerf proves problematic, consider these alternatives:
# Using netcat for basic testing:
# Server:
nc -l -p 5000 > /dev/null    # BSD/OpenBSD netcat drops -p here: nc -l 5000
# Client:
dd if=/dev/zero bs=1M count=100 | nc server_ip 5000
# Using sockperf for latency-focused tests:
sockperf server -i 192.168.1.100 -p 5201
sockperf ping-pong -i 192.168.1.100 -p 5201 --tcp -m 1400 -t 60
- Put the interfaces in separate subnets, but note that Linux still delivers traffic between two locally owned addresses over loopback; network namespaces or an external cable loop are the reliable fix
- Disable interface offloading features that might distort results:
ethtool -K eth0 gro off gso off tso off
- For virtual interfaces, confirm they aren't bridged internally
- Check MTU settings match on both interfaces
Realistic throughput expectations should consider:
- Interface speed (1Gbps, 10Gbps, etc.)
- PCIe bus limitations
- CPU overhead for packet processing
- Switch capabilities if traffic traverses network equipment
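These factors can be turned into a quick sanity bound. With standard Ethernet framing, TCP goodput tops out around 94% of line rate at MTU 1500; a back-of-envelope sketch, assuming IPv4 with no IP options and TCP timestamps enabled:

```shell
# Estimate maximum TCP goodput (bits/s) for a given line rate and MTU.
goodput() {
  rate=$1; mtu=$2
  payload=$(( mtu - 20 - 20 - 12 ))   # minus IP header, TCP header, timestamps
  frame=$(( mtu + 14 + 4 + 8 + 12 ))  # plus Ethernet header, FCS, preamble, IFG
  echo $(( rate * payload / frame ))
}
goodput 1000000000 1500   # roughly 941 Mbit/s on gigabit Ethernet
```

Anything well above this figure on a 1 Gbps link is a strong hint that the traffic never left the loopback path.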