When benchmarking high-speed networks, you'll need specialized tools capable of saturating 10Gbps links. RHEL 5.x systems require careful tool selection due to their older kernel networking stacks.
Both tools can handle 10Gbps traffic generation, but with different approaches:
# iperf (recommended for raw throughput testing)
iperf -s # On receiver
iperf -c receiver_ip -t 60 -i 1 -P 8 # On sender (8 parallel threads)
# netperf (better for protocol-specific testing)
netserver # On receiver
netperf -H receiver_ip -l 60 -t TCP_STREAM -- -m 64K -s 256K -S 256K
To achieve full 10Gbps throughput:
- Use multiple parallel streams (-P flag in iperf)
- Increase socket buffer sizes (-w in iperf, -s/-S in netperf)
- Consider CPU affinity (taskset) for multi-core systems, as sketched below
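A minimal sketch of pinning the sender with taskset (the core list and PID here are illustrative; pick cores local to the NIC's NUMA node):
# Pin the 8 parallel iperf client threads to cores 0-7
taskset -c 0-7 iperf -c receiver_ip -t 60 -i 1 -P 8
# Inspect the affinity of an already-running process (PID is hypothetical)
taskset -p 12345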
This iperf3 configuration achieves near-line-rate on modern hardware:
# Receiver:
iperf3 -s -p 5201
# Sender:
iperf3 -c receiver_ip -p 5201 -t 300 -b 10G \
-P 16 -w 512K -O 2 -T "10G Test" --logfile iperf.log
If you're not reaching 10Gbps:
- Verify NIC driver settings (ethtool -k)
- Check for CPU saturation (top/htop)
- Test with different packet sizes (64B to 9000B)
- Ensure proper TCP window scaling (see the checks below)
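A quick way to run those checks, assuming eth0 is the 10GbE interface:
# Show NIC offload settings (TSO/SG/checksum offload)
ethtool -k eth0
# Confirm TCP window scaling is enabled (should print 1)
sysctl net.ipv4.tcp_window_scaling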
For more advanced scenarios:
# Packet-level generation (pktgen); pgset is a shell helper, not a binary
modprobe pktgen
pgset() { echo "$1" > $PGDEV; }
PGDEV=/proc/net/pktgen/kpktgend_0
pgset "rem_device_all"; pgset "add_device eth0"
PGDEV=/proc/net/pktgen/eth0
pgset "count 10000000"
pgset "delay 0"
pgset "dst dst_ip"
pgset "dst_mac dst_mac"
PGDEV=/proc/net/pktgen/pgctrl; pgset "start"
On RHEL 5.x specifically, proper tool selection and configuration are crucial: for 10Gbps throughput testing you need tools that can handle the bandwidth while still providing accurate metrics.
Both iperf and netperf can be used for 10Gbps benchmarking, but with important caveats:
# Install iperf 2.0.5 on RHEL 5.x (built from source)
wget https://downloads.sourceforge.net/project/iperf/iperf/2.0.5/iperf-2.0.5.tar.gz
tar -xzf iperf-2.0.5.tar.gz
cd iperf-2.0.5
./configure
make
make install
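A quick sanity check that the freshly built binary is the one on the PATH:
# Print the installed iperf version
iperf -v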
For reliable 10Gbps measurements, use these parameters:
# On server node:
iperf -s -w 256K -i 1
# On client node:
iperf -c server-ip -w 256K -t 60 -i 1 -P 8
Key flags explained:
- -w 256K: sets the socket buffer (TCP window) size for better throughput
- -P 8: runs 8 parallel client threads
For more detailed TCP stack analysis:
# On server node:
netserver -p 12865
# On client node:
netperf -H server-ip -p 12865 -t TCP_STREAM -l 60 -- -m 64K -s 256K -S 256K
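Bulk throughput is only half the picture; netperf's request/response test measures per-transaction latency against the same netserver (it defaults to 1-byte requests and responses):
# 60-second TCP request/response (latency) test
netperf -H server-ip -p 12865 -t TCP_RR -l 60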
Essential RHEL 5.x optimizations:
# Increase TCP buffer sizes
echo "net.core.rmem_max=16777216" >> /etc/sysctl.conf
echo "net.core.wmem_max=16777216" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem=4096 87380 16777216" >> /etc/sysctl.conf
echo "net.ipv4.tcp_wmem=4096 65536 16777216" >> /etc/sysctl.conf
# Apply changes
sysctl -p
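To confirm the new limits are in effect after sysctl -p:
# Read back the effective maximums
sysctl net.core.rmem_max net.core.wmem_max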
Before testing, confirm your NICs support 10Gbps:
ethtool eth0 | grep Speed
lspci | grep -i ethernet
If you're not reaching 10Gbps:
- Check for CPU bottlenecks (top/htop)
- Verify interrupt coalescing settings (ethtool -c)
- Test with jumbo frames (if supported)
- Consider NUMA affinity for multi-socket systems (a combined sketch of these checks follows below)
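A sketch covering the last three items, assuming eth0 and an end-to-end path that passes 9000-byte frames (the NUMA node number is illustrative):
# Show current interrupt coalescing settings
ethtool -c eth0
# Enable jumbo frames (the switch and far end must match)
ifconfig eth0 mtu 9000
# Bind the test to the CPUs and memory of node 0 (requires the numactl package)
numactl --cpunodebind=0 --membind=0 iperf -c server-ip -w 256K -t 60 -P 8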
For deep inspection during tests:
tcpdump -i eth0 -s 96 -w test.pcap tcp port 5001
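After the run, the capture can be replayed offline; for example, to summarize the first packets:
# Print the first 20 packets without name resolution
tcpdump -r test.pcap -c 20 -nn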