Process-Based IP Traffic Routing: How to Route Specific Application Traffic Through Dedicated Network Interfaces in Linux



Traditional network routing in Linux operates at the packet level without process awareness. When we need to route traffic from specific applications through dedicated interfaces - regardless of which user runs them or what ports they use - we require more sophisticated solutions than standard routing tables.

We'll implement this using three key components:

1. cgroups (control groups) for process identification
2. Netfilter (nftables) for packet marking
3. Policy routing (iproute2) for interface selection

First, create a cgroup for our target application:

# Requires the cgroup v1 net_cls controller to be mounted
sudo mkdir /sys/fs/cgroup/net_cls/testapp_group
echo 0x00100001 | sudo tee /sys/fs/cgroup/net_cls/testapp_group/net_cls.classid
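The classid is a packed tc handle: the upper 16 bits are the major number, the lower 16 bits the minor, so this cgroup's traffic can later be matched as class 10:1. A quick way to decode any classid:

```shell
# Decode a net_cls classid into its tc major:minor handle.
# Layout: 0xAAAABBBB -> major 0xAAAA, minor 0xBBBB
classid=$((0x00100001))
printf '%x:%x\n' $((classid >> 16)) $((classid & 0xFFFF))   # -> 10:1
```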

Configure nftables to mark packets from our cgroup:

nft add table ip mangle
nft add chain ip mangle OUTPUT { type route hook output priority mangle \; }
nft add rule ip mangle OUTPUT meta cgroup 0x00100001 meta mark set 0x1
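The same table can be kept in a file and loaded atomically with `nft -f`, which avoids half-applied state if one command fails. A minimal sketch (the file path is an example):

```shell
# Write the ruleset to a file; nft -f replaces it in one atomic step
cat > /tmp/mark-testapp.nft <<'EOF'
table ip mangle {
    chain OUTPUT {
        type route hook output priority mangle;
        meta cgroup 0x00100001 meta mark set 0x1
    }
}
EOF
# Load it (as root): nft -f /tmp/mark-testapp.nft
```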

Set up policy routing for marked packets:

ip rule add fwmark 0x1 lookup 100
ip route add default via 192.168.2.1 dev eth1 table 100
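Rules are evaluated in priority order (the defaults sit at 0, 32766, and 32767), so it can help to pin an explicit priority and guard against duplicate insertion; a hedged sketch (priority 100 is an arbitrary choice):

```shell
# Add the fwmark rule only if absent, with an explicit priority so its
# position relative to the default rules is predictable (requires root)
ip rule show | grep -q "fwmark 0x1" || ip rule add priority 100 fwmark 0x1 lookup 100
```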

For dynamic process assignment, create a wrapper script:

#!/bin/bash
# /usr/local/bin/testapp_wrapper

cgcreate -g net_cls:testapp_group
echo 0x00100001 > /sys/fs/cgroup/net_cls/testapp_group/net_cls.classid
cgexec -g net_cls:testapp_group /usr/bin/testapp "$@"
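To confirm that a process launched through the wrapper actually landed in the cgroup, inspect its `/proc/<pid>/cgroup` entry; each line has the form `hierarchy-ID:controller:path`. A sketch using a sample line (the hierarchy ID shown is illustrative):

```shell
# Sample line as it would appear via: grep net_cls /proc/$(pidof testapp)/cgroup
line="3:net_cls:/testapp_group"

# The cgroup path is everything after the last colon
group=${line##*:}
[ "$group" = "/testapp_group" ] && echo "testapp is in testapp_group"
# -> testapp is in testapp_group
```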

Check packet marking with:

nft list ruleset
conntrack -E -o extended | grep 'mark='   # conntrack shows the connection mark, not the per-packet fwmark

Verify routing decisions using:

ip route show table 100
tcpdump -i eth1 -n

For containerized environments, you'll need to modify the approach:

# Kubernetes example (image name is a placeholder; the wrapper
# manipulates host routing and cgroups, hence hostNetwork)
apiVersion: v1
kind: Pod
metadata:
  name: network-policy-demo
spec:
  hostNetwork: true
  containers:
  - name: testapp
    image: testapp:latest
    command: ["/usr/local/bin/testapp_wrapper"]
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]

Modern Linux systems often need to route traffic from specific applications through dedicated network interfaces while maintaining default routing for other processes. The core challenge lies in identifying application traffic regardless of:

  • User context (UID/GID)
  • Port numbers (dynamic or configurable)
  • Protocol types (TCP/UDP/ICMP)

For kernel 4.0+ systems, we can combine network namespaces with cgroup-based traffic classification:

# Create new network namespace
ip netns add testapp-ns

# Create veth pair
ip link add veth0 type veth peer name veth1
ip link set veth1 netns testapp-ns

# Configure interfaces
ip netns exec testapp-ns ip addr add 192.168.100.2/24 dev veth1
ip netns exec testapp-ns ip link set veth1 up
ip netns exec testapp-ns ip route add default via 192.168.100.1
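The namespace side alone is not enough: the host end of the veth pair needs an address, and the host must forward and NAT the namespace's traffic out of the dedicated interface. A sketch of the host-side steps, assuming 192.168.100.1 as the namespace's gateway and eth1 as the uplink:

```shell
# Configure the host end of the veth pair as the namespace's gateway
ip addr add 192.168.100.1/24 dev veth0
ip link set veth0 up

# Forward and NAT the namespace's traffic out through eth1
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth1 -j MASQUERADE
```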

# Create cgroup (net_cls is a cgroup v1 controller; there is no "network"
# controller in cgroup v2, so mount net_cls if it is not already present)
mkdir -p /sys/fs/cgroup/net_cls
mountpoint -q /sys/fs/cgroup/net_cls || mount -t cgroup -o net_cls net_cls /sys/fs/cgroup/net_cls
mkdir /sys/fs/cgroup/net_cls/testapp
echo 0x00010001 > /sys/fs/cgroup/net_cls/testapp/net_cls.classid

# Classify the cgroup's traffic with tc (the filter needs a classful qdisc)
tc qdisc add dev eth1 root handle 1: htb
tc filter add dev eth1 parent 1: protocol ip handle 1: cgroup

For legacy compatibility, we can use an iptables + process wrapper solution:

#!/bin/bash
# testapp-wrapper
MARK=0x1000
TABLE=100

# Set up routing if not exists
ip rule show | grep -q "fwmark $MARK" || {
    ip rule add fwmark $MARK table $TABLE
    ip route add default via 192.168.1.1 dev eth1 table $TABLE
}

# Execute with network control. iptables' owner match dropped per-PID
# matching long ago, so match the net_cls classid set by cgexec instead
iptables -t mangle -A OUTPUT -m cgroup --cgroup 0x00100001 -j MARK --set-mark $MARK
exec cgexec -g net_cls:testapp_group /usr/bin/testapp "$@"

For maximum flexibility (kernel 4.15+):

// A kprobe on sock_sendmsg cannot set packet marks (no such helper exists
// in that context); a cgroup/sock program attached to the application's
// cgroup can instead mark every socket the application creates.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("cgroup/sock_create")
int mark_testapp_sockets(struct bpf_sock *sk)
{
    /* Every process in the attached cgroup gets this socket mark */
    sk->mark = 0x1000;
    return 1; /* non-zero allows the socket to be created */
}

char _license[] SEC("license") = "GPL";
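Sketch of build and load steps for a BPF object like the one above (file names and pin paths are examples; requires clang with BPF target support and bpftool):

```shell
# Compile the program to a BPF object and pin it in the BPF filesystem
clang -O2 -g -target bpf -c testapp_mark.c -o testapp_mark.o
sudo bpftool prog load testapp_mark.o /sys/fs/bpf/testapp_mark
sudo bpftool prog show pinned /sys/fs/bpf/testapp_mark
```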

Confirm routing works as expected:

# Monitor traffic on the dedicated host interface
tcpdump -i eth1 -n

# Or, if testapp runs in its own network namespace, capture there instead
nsenter -t $(pidof testapp) -n tcpdump -i veth1 -n

# Check route selection
ip route get 8.8.8.8 mark 0x1000