When working with transparent proxy setups, conventional interface-based traffic shaping (using tools like tc
on eth0/eth1) often falls short. What we really need is granular control at the IP level to enforce bandwidth limits for specific clients or IP ranges.
The Linux kernel's traffic control subsystem provides the perfect tools for this job. Here's how to implement IP-based shaping using HTB:
# Basic HTB setup
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
# "default 30" needs a matching class, or unclassified traffic bypasses shaping entirely
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 100mbit ceil 100mbit
The magic happens when we combine HTB with filter rules. Here's how to limit bandwidth for specific IPs:
# Create a class for our limited bandwidth
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 1mbit
# Add filter for specific IP
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip dst 192.168.1.100 flowid 1:10
For CIDR ranges, no extra tricks are needed: the u32 match accepts a prefix directly:
# Limit entire /24 subnet to 10Mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 10mbit ceil 10mbit
# Filter for 192.168.2.0/24
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip dst 192.168.2.0/24 flowid 1:20
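Under the hood, that CIDR match compiles to a value/mask comparison on the 32-bit destination field at byte offset 16 of the IPv4 header. This snippet (pure shell arithmetic, no root needed) derives the equivalent raw hex match for the filter above:

```shell
# Derive the raw u32 hex match for 192.168.2.0/24 (dst field sits at offset 16)
addr=192.168.2.0
prefix=24
IFS=. read -r a b c d <<EOF
$addr
EOF
value=$(( (a << 24) | (b << 16) | (c << 8) | d ))
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
printf 'match u32 0x%08x 0x%08x at 16\n' "$value" "$mask"
# -> match u32 0xc0a80200 0xffffff00 at 16
```

tc accepts this hex form verbatim in place of the dotted notation, e.g. `u32 match u32 0xc0a80200 0xffffff00 at 16 flowid 1:20`.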
For more complex scenarios, we can leverage iptables to mark packets:
# Mark packets in iptables
iptables -t mangle -A PREROUTING -s 192.168.3.0/24 -j MARK --set-mark 3
# Create a class for the marked traffic, then filter on the mark
tc class add dev eth0 parent 1:1 classid 1:25 htb rate 2mbit ceil 2mbit # example rate
tc filter add dev eth0 parent 1:0 protocol ip handle 3 fw flowid 1:25
Always verify your setup with these commands:
# Show classes
tc -s class show dev eth0
# Show filter stats
tc -s filter show dev eth0
- Ensure your kernel has CONFIG_NET_SCH_HTB enabled
- Remember that shaping applies to egress traffic (outgoing on the interface)
- For ingress shaping, consider ifb (Intermediate Functional Block) devices
- Test with iperf before deploying to production
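The ifb approach mentioned above can be sketched roughly as follows (assumes the ifb kernel module is available; device names are illustrative):

```shell
# Create an ifb device and redirect eth0's ingress traffic to it
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 action mirred egress redirect dev ifb0
# Inbound traffic now appears as egress on ifb0, so normal HTB shaping applies
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
```

Unlike plain ingress policing (which drops packets over the limit), this lets HTB queue and schedule inbound traffic.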
Here's a complete script to limit three IP ranges differently:
#!/bin/bash
DEV=eth0
RATE=100mbit # Total available bandwidth
# Clear existing rules
tc qdisc del dev $DEV root 2>/dev/null
# Main HTB setup
tc qdisc add dev $DEV root handle 1: htb default 40
tc class add dev $DEV parent 1: classid 1:1 htb rate $RATE ceil $RATE
# Classes for different IP ranges
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 5mbit ceil 5mbit # strict limit
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 20mbit ceil 30mbit # burstable
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 10mbit ceil 15mbit # premium
tc class add dev $DEV parent 1:1 classid 1:40 htb rate $RATE ceil $RATE # default class for unmatched traffic
# Filters
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 \
match ip dst 10.0.1.0/24 flowid 1:10
tc filter add dev $DEV protocol ip parent 1:0 prio 2 u32 \
match ip dst 10.0.2.0/24 flowid 1:20
tc filter add dev $DEV protocol ip parent 1:0 prio 3 u32 \
match ip dst 10.0.3.0/24 flowid 1:30
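As a variation on the script above, tc can also read rules from a batch file, which applies them in one pass and, by default, stops at the first error (a sketch; the file path is illustrative):

```shell
# Write the rules without the leading "tc", then apply them in one pass
cat > /tmp/shaping.batch <<'EOF'
qdisc del dev eth0 root
qdisc add dev eth0 root handle 1: htb default 40
class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit
class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 5mbit
EOF
tc -force -batch /tmp/shaping.batch  # -force continues past errors (e.g. the initial del failing)
```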
For IP ranges, CIDR blocks work directly in u32 filters:
# For a /24 subnet
tc filter add dev eth0 parent 1:0 protocol ip prio 10 u32 \
match ip src 192.168.1.0/24 flowid 1:10
Arbitrary ranges such as 192.168.1.100-192.168.1.200 cannot be expressed in a single u32 rule, because u32 matches prefix/mask pairs only. Either decompose the range into CIDR blocks, or mark it with iptables' iprange match and classify on the mark:
# For a specific range, mark with iptables and classify on the mark
iptables -t mangle -A PREROUTING -m iprange --src-range 192.168.1.100-192.168.1.200 -j MARK --set-mark 5
tc filter add dev eth0 parent 1:0 protocol ip handle 5 fw flowid 1:10
When working with transparent proxies like Squid, ensure your rules account for both directions. The HTB rules above shape egress only; for the inbound side, attach an ingress qdisc and police:
# Inbound policing via the ingress qdisc
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
match ip src 192.168.1.0/24 police rate 1mbit burst 10k drop
To make these rules survive reboots, consider options like:
- Adding them to /etc/rc.local
- Creating a systemd unit that runs the script at boot
- Using if-up.d scripts
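For the if-up.d option, a minimal hook might look like this (the shaping script path is illustrative; ifupdown exports $IFACE to these hooks):

```shell
#!/bin/sh
# /etc/network/if-up.d/shaping -- must be executable
# Re-apply shaping whenever eth0 comes up (ifupdown sets $IFACE)
[ "$IFACE" = "eth0" ] || exit 0
exec /usr/local/sbin/shaping.sh
```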
Use these commands to verify your setup:
tc -s qdisc ls dev eth0
tc class show dev eth0
tc filter show dev eth0
For real-time monitoring, iftop -f 'src host 192.168.1.100' or nload -t 1000 -i 10000 -o 10000 can be helpful.
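The raw counters from tc -s can also be parsed directly, e.g. to pull per-class byte and drop counts. In this sketch, the embedded sample stands in for live `tc -s class show` output so the parsing can be tried without root:

```shell
# Parse "tc -s class show" style output into per-class byte/drop counts
sample='class htb 1:10 parent 1:1 prio 0 rate 5Mbit ceil 5Mbit burst 1600b cburst 1600b
 Sent 123456 bytes 789 pkt (dropped 12, overlimits 3 requeues 0)'
echo "$sample" | awk '
/^class htb/ { cls = $3 }                  # remember the class id
/Sent/      { gsub(/[(,]/, "");            # strip punctuation so fields line up
              print cls, "bytes=" $2, "drops=" $7 }'
# -> 1:10 bytes=123456 drops=12
```

For live use, replace the sample with `tc -s class show dev eth0 | awk ...`.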
If shaping doesn't work:
- Verify your interface is correctly identified (especially with bridges)
- Check that filters are properly matching traffic (tcpdump helps)
- Ensure no conflicting qdiscs exist (tc qdisc del dev eth0 root to clean)