The fundamental difference lies in their scope and persistence:
- mark: Applies to individual packets (transient, packet-level)
- connmark: Operates at connection level (persistent, connection tracking)
```bash
# Setting a packet mark (MARK target)
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1

# Saving the packet mark to the connection (CONNMARK target)
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark

# Restoring the mark from the connection onto packets
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
```
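When one 32-bit mark has to carry more than one signal, CONNMARK's `--mask` option limits which bits are saved or restored. A hedged sketch (the low-byte layout here is an arbitrary choice for illustration, not a convention):

```bash
# Hypothetical layout: low byte carries the QoS class, other bits carry other signals
# Save only the low byte of the packet mark into the connection mark
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark --mask 0xff

# Restore only that byte, leaving the packet mark's remaining bits untouched
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark --mask 0xff
```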
QoS Example:
```bash
# Mark SSH traffic (packet level)
iptables -t mangle -A FORWARD -p tcp --dport 22 -j MARK --set-mark 2
# Persist the marking for the entire SSH connection
iptables -t mangle -A FORWARD -j CONNMARK --save-mark
# Apply QoS based on the mark via a tc fw filter
tc filter add dev eth0 protocol ip parent 1:0 prio 1 handle 2 fw flowid 1:1
```
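Note that the tc filter above assumes a classful qdisc with handle 1: and a class 1:1 already exist; on the default qdisc it matches nothing. A minimal HTB hierarchy it could attach to (rates are placeholder values, adjust for your link):

```bash
# Root HTB qdisc with handle 1: (assumed by 'parent 1:0' in the filter above)
tc qdisc add dev eth0 root handle 1: htb default 30
# Class 1:1 that the fw filter directs marked traffic into
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit ceil 100mbit
# Default class for unmarked traffic
tc class add dev eth0 parent 1: classid 1:30 htb rate 1mbit ceil 100mbit
```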
The kernel handles these differently:
- mark: Stored in sk_buff structure (per-packet)
- connmark: Stored in conntrack structure (per-connection)
Common pitfalls:
1. Forgetting to --restore-mark when the packet mark is needed later
2. Assuming packet marks persist across packets without connmark
3. Not clearing stale marks (or masking properly) when reusing mark values
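A common way to sidestep pitfalls 1 and 2 is the restore-first pattern: restore any saved connection mark, classify only packets that are still unmarked (i.e. the first packet of a connection), then save. A sketch, using SSH as the example class:

```bash
# Restore any previously saved connection mark first
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
# Classify only packets that are still unmarked after the restore
iptables -t mangle -A PREROUTING -m mark --mark 0 -p tcp --dport 22 -j MARK --set-mark 2
# Persist whatever mark the packet ended up with
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j CONNMARK --save-mark
```

This way classification rules run once per connection rather than once per packet.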
Policy routing example:
```bash
# Initial packet marking
iptables -t mangle -A PREROUTING -s 192.168.1.0/24 -j MARK --set-mark 100
# Save to connection tracking
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark
# Restore mark for reply packets
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
# Policy routing based on the restored mark
ip rule add fwmark 100 table 100
ip route add default via 10.0.0.1 table 100
```
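One caveat worth checking on your system: strict reverse-path filtering often drops replies that come back through an alternate routing table like this. Loosening it (and verifying the rule and route) is a common companion step:

```bash
# Loose reverse-path filtering so fwmark-routed replies are not dropped
sysctl -w net.ipv4.conf.all.rp_filter=2
# Confirm the policy rule and the alternate table took effect
ip rule show
ip route show table 100
```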
Connection marking adds little overhead in practice because it piggybacks on the conntrack entries the kernel already maintains; the mark is just one extra field per connection. Packet marking itself is virtually free, but without connmark you need additional rules to re-establish the mark on every packet.
```bash
# View connection marks (modern kernels expose nf_conntrack, not ip_conntrack)
grep -E 'mark=[0-9]+' /proc/net/nf_conntrack
# Check hit counters on your mark rules
iptables -t mangle -L -v -n --line-numbers
```
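If the conntrack-tools package is installed, the conntrack CLI can filter by mark directly, which is more convenient than grepping procfs:

```bash
# List only connections carrying mark 100 (a /mask suffix is also accepted)
conntrack -L --mark 100
# Watch conntrack events for that mark in real time
conntrack -E --mark 100
```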
In iptables, both mark and connmark deal with packet labeling, but at fundamentally different layers:
```bash
# Packet mark (applies to individual packets)
iptables -t mangle -A PREROUTING -j MARK --set-mark 0x1
# Connection mark (operates at connection level)
iptables -t mangle -A PREROUTING -j CONNMARK --set-mark 0x2
```
The Linux kernel stores these marks differently:
- mark: stored in the sk_buff structure (volatile, packet-specific)
- connmark: stored in the nf_conn structure (persistent for the connection lifetime)
Here's a common QoS scenario using both:
```bash
# Mark new SSH connections (set the connection mark once, on the first packet)
iptables -t mangle -A PREROUTING -p tcp --dport 22 -m conntrack --ctstate NEW -j CONNMARK --set-mark 0x22
# Restore the mark to packets in established connections
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
# Apply QoS based on the restored packet mark
iptables -t mangle -A POSTROUTING -m mark --mark 0x22 -j CLASSIFY --set-class 1:10
```
| Aspect | mark | connmark |
|---|---|---|
| Scope | Single packet | Entire connection |
| Persistence | Lost after packet processing | Maintained for connection duration |
| Storage location | Packet metadata (sk_buff) | Connection tracking table (nf_conn) |
| Typical use case | Immediate packet handling | Stateful connection policies |
Combining both for complex routing:
```bash
# Mark incoming VPN traffic (PREROUTING sees both forwarded and locally-bound
# packets; INPUT would miss forwarded traffic)
iptables -t mangle -A PREROUTING -i tun0 -j CONNMARK --set-mark 0x50
# Restore marks on subsequent packets
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
# Apply packet-mark-based policies to forwarded traffic
iptables -t mangle -A FORWARD -m mark --mark 0x50 -j MARK --set-mark 0x51
```
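Rewriting 0x50 to 0x51 only flips the low bit; more generally, mark values are often partitioned into bit fields so one mark carries several signals. The shell arithmetic below illustrates one hypothetical layout (low byte = traffic class, second byte = routing policy; the split is an assumption for illustration):

```bash
mark=0x0251                       # hypothetical combined mark value
class=$(( mark & 0xff ))          # low byte: traffic class
policy=$(( (mark >> 8) & 0xff ))  # second byte: routing policy
echo "class=$class policy=$policy"
# prints: class=81 policy=2
```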
To inspect marks during troubleshooting:
```bash
# View connection marks (nf_conntrack on modern kernels)
grep -o 'mark=[^ ]*' /proc/net/nf_conntrack
# Packet marks are kernel-internal and invisible to tcpdump;
# watch mangle-table rule counters instead
watch -n1 'iptables -t mangle -L -v -n'
```
Connection marking has higher overhead due to:
- Connection tracking table lookups
- Additional conntrack record updates
- Memory allocation for mark storage
Informal benchmarks suggest roughly a 15-20% throughput difference in mark-intensive rulesets, though the exact figure depends heavily on ruleset size and traffic profile.