When testing website performance, it's crucial to simulate real-world network conditions. Many developers face challenges when trying to replicate specific bandwidth and latency scenarios on Linux servers. This guide provides a comprehensive solution for controlling both incoming and outgoing traffic.
For outbound traffic shaping, we'll use the Token Bucket Filter (TBF) with tc. This limits bandwidth while keeping the queuing delay bounded:
# Limit outbound traffic to 4Mbps with 50ms latency
tc qdisc add dev eth0 root tbf \
    rate 4.0mbit \
    latency 50ms \
    burst 50kb \
    mtu 10000
The parameters work as follows:
- rate: the maximum bandwidth (4Mbps in this case)
- latency: the maximum time a packet may sit in the queue before being dropped (50ms)
- burst: how much data can be sent in a single burst (the bucket size)
- mtu: the maximum transmission unit size used in rate calculations
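To adjust or remove the limit later, the same qdisc can be modified in place or deleted; a quick sketch, assuming the interface is eth0:
# Change the existing TBF qdisc to a different rate (e.g. 2Mbps)
tc qdisc change dev eth0 root tbf rate 2mbit latency 50ms burst 50kb mtu 10000
# Remove the shaping entirely, restoring the default qdisc
tc qdisc del dev eth0 root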
Inbound traffic is harder to shape because packets have already arrived by the time the kernel sees them, so they can be dropped but not delayed. We'll use an ingress qdisc combined with policing:
# Create ingress queue discipline
tc qdisc add dev eth0 handle ffff: ingress
# Apply rate limiting to incoming traffic
tc filter add dev eth0 parent ffff: \
    protocol ip \
    u32 match ip src 0.0.0.0/0 \
    police rate 512kbit \
    burst 10k \
    mtu 10000 \
    drop
Key considerations for inbound limiting:
- The police action is used instead of a shaping rate on ingress, because incoming packets can only be accepted or dropped, not queued
- burst should be adjusted based on your specific needs (see the rule of thumb below)
- Lower mtu values can help with more precise control
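As a rough rule of thumb from the tbf documentation, the burst should be at least rate / HZ bytes so the bucket can absorb the tokens added at each timer tick; anything smaller makes the configured rate unreachable. A back-of-the-envelope check, assuming a 1000 Hz kernel timer:
# Minimum burst (bytes) ~ rate in bytes per second / kernel timer frequency (HZ)
RATE_BPS=4000000    # 4 Mbit/s egress limit
HZ=1000             # check yours with: grep 'CONFIG_HZ=' /boot/config-$(uname -r)
echo $(( RATE_BPS / 8 / HZ ))    # => 500 bytes, so burst 50kb leaves plenty of headroom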
After applying these settings, verify them with:
tc qdisc show dev eth0
tc -s qdisc show dev eth0
tc filter show dev eth0
To make these changes persistent across reboots, add them to your network configuration. For Debian/Ubuntu systems:
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    post-up tc qdisc add dev eth0 root tbf rate 4.0mbit latency 50ms burst 50kb mtu 10000
    post-up tc qdisc add dev eth0 handle ffff: ingress
    post-up tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 police rate 512kbit burst 10k mtu 10000 drop
If you experience unstable bandwidth limiting, try adjusting these parameters:
# More stable inbound limiting example
tc filter change dev eth0 parent ffff: \
    protocol ip \
    u32 match ip src 0.0.0.0/0 \
    police rate 512kbit \
    burst 20k \
    mtu 1500 \
    drop
Remember that network throttling affects all traffic on the interface. For more granular control, consider using classful qdiscs or network namespaces.
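For instance, a classful HTB setup can throttle only part of the traffic. The sketch below is illustrative (the port 8080 match and the class rates are assumptions), limiting outbound traffic to one destination port while leaving everything else effectively uncapped:
# Root HTB qdisc; unclassified traffic falls into class 1:20
tc qdisc add dev eth0 root handle 1: htb default 20
# Class 1:10: the 4Mbps cap for traffic we want to throttle
tc class add dev eth0 parent 1: classid 1:10 htb rate 4mbit ceil 4mbit
# Class 1:20: catch-all class with a high ceiling
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit ceil 1000mbit
# Steer traffic destined for TCP port 8080 into the limited class
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dport 8080 0xffff flowid 1:10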
When testing web application performance, accurately simulating real-world network conditions is crucial. Many developers struggle with properly throttling both incoming (ingress) and outgoing (egress) traffic on Linux systems. The key requirements typically include:
- Precise latency control (50ms in this case)
- Asymmetric bandwidth limits (512kbps ingress, 4096kbps egress)
- Consistent behavior during sustained transfers
Linux's traffic control system consists of several important concepts:
- qdisc: queuing discipline (scheduler)
- class: traffic classification within a classful qdisc
- filter: rule that assigns traffic to a class
- netem: network emulation module (delay, jitter, loss)
- tbf: Token Bucket Filter (for bandwidth limiting)
For outbound traffic shaping, we'll combine a Token Bucket Filter (TBF) for the bandwidth cap with netem for the fixed 50ms delay:
# Clear existing rules
tc qdisc del dev eth0 root 2>/dev/null
# Set up egress shaping (4Mbps with 50ms latency)
tc qdisc add dev eth0 root handle 1: tbf \
    rate 4.0mbit \
    burst 50kb \
    latency 50ms \
    mtu 10000

# Add netem for latency (50ms) on top
tc qdisc add dev eth0 parent 1:1 handle 10: netem \
    delay 50ms
Ingress traffic control requires an ingress qdisc and policing:
# Clear existing ingress rules
tc qdisc del dev eth0 ingress 2>/dev/null
# Set up ingress qdisc
tc qdisc add dev eth0 handle ffff: ingress
# Apply ingress policing (512kbps)
tc filter add dev eth0 parent ffff: protocol ip \
    u32 match u32 0 0 \
    police rate 512kbit \
    burst 10k \
    mtu 10000 \
    drop flowid :1
After applying these settings, verify with:
tc -s qdisc show dev eth0
tc -s class show dev eth0
tc -s filter show dev eth0
To make these changes persist across reboots, add them to your network configuration or create a startup script. For systemd systems, create a service file:
[Unit]
Description=Traffic Shaping
After=network.target

[Service]
Type=oneshot
ExecStart=/path/to/your/tc-script.sh

[Install]
WantedBy=multi-user.target
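A minimal sketch of what the referenced script might contain (the path comes from the unit above; the unit name traffic-shaping.service and the interface name are assumptions), followed by the commands to enable it:
#!/bin/sh
# /path/to/your/tc-script.sh -- reapply shaping rules at boot
tc qdisc del dev eth0 root 2>/dev/null
tc qdisc del dev eth0 ingress 2>/dev/null
tc qdisc add dev eth0 root handle 1: tbf rate 4.0mbit burst 50kb latency 50ms mtu 10000
tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 50ms
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 police rate 512kbit burst 10k mtu 10000 drop flowid :1

chmod +x /path/to/your/tc-script.sh
systemctl daemon-reload
systemctl enable --now traffic-shaping.service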
For simpler cases, consider the wondershaper utility. Note that the classic Debian packaging takes the arguments as interface, downlink, uplink (both in kbps), so 512kbps in / 4096kbps out would be:
wondershaper eth0 512 4096
However, this won't handle latency settings and provides less fine-grained control than direct tc commands.
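To remove a limit applied this way (again, classic wondershaper syntax; newer forks use flags instead):
wondershaper clear eth0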
If you experience unexpected behavior:
- Check your network interface name (eth0 might be something like ens3 or enp0s3 under predictable interface naming)
- Verify kernel modules are loaded (sch_netem, sch_tbf)
- Test with simple tools like ping and iperf3 (see the example below)
- Clear existing rules before applying new ones
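For example, running these from the shaped machine against a remote host (the hostname is a placeholder; iperf3 must be listening there, started with iperf3 -s) gives a quick sanity check:
# Round-trip time should grow by roughly the added 50ms
ping -c 10 remote-host
# Upload (egress) throughput: should settle near 4Mbps
iperf3 -c remote-host
# Reverse mode pulls data from the server: download (ingress) should settle near 512kbps
iperf3 -c remote-host -R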