When working with high-performance network applications (like 40GbE or 100GbE interfaces) in Docker containers, the standard bridged networking approach introduces significant overhead: the veth pair and iptables NAT add latency and per-packet CPU cost that can bottleneck your network throughput.
The most efficient approach is to assign a physical network interface directly to your container, similar to LXC's "phys" mode. This makes the interface disappear from the host system while becoming available inside the container.
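For comparison, this is roughly what LXC's phys mode looks like in a container config (key names from LXC 3.x; the interface name matches the one used throughout this post):
# LXC container config equivalent (shown for comparison only)
lxc.net.0.type = phys
lxc.net.0.link = enp129s0f0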
# First, identify your target physical interface and confirm its link speed
ip link show
ethtool enp129s0f0 | grep Speed
# Example output of ip link show:
# 3: enp129s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
#     link/ether 00:1b:21:ab:cd:ef brd ff:ff:ff:ff:ff:ff
We'll leverage Linux network namespaces to achieve this configuration:
# Create a new network namespace
sudo ip netns add ns1
# Move the physical interface to the new namespace
sudo ip link set enp129s0f0 netns ns1
# Configure IP address inside the namespace
sudo ip netns exec ns1 ip addr add 192.168.1.100/24 dev enp129s0f0
sudo ip netns exec ns1 ip link set enp129s0f0 up
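At this point the interface has vanished from the host and is usable only inside ns1. A quick sanity check (the gateway address is an assumption for this example subnet):
# Confirm the interface is present and reachable from inside the namespace
sudo ip netns exec ns1 ip link show enp129s0f0
sudo ip netns exec ns1 ping -c 3 192.168.1.1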
To make this work with Docker, we use the --network=container: approach. A helper container holds the network namespace, the physical interface is moved into it, and the application container later joins that same namespace (shown after the commands below):
# First launch a helper container with the network namespace
sudo docker run -itd --name network-container --net=none busybox
# Get the container's PID
PID=$(sudo docker inspect -f '{{.State.Pid}}' network-container)
# Expose the container's network namespace to the ip netns tooling
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$PID/ns/net /var/run/netns/container-$PID
# Move the physical interface to the container's namespace
sudo ip link set enp129s0f0 netns container-$PID
# Configure the interface inside the container's namespace
sudo ip netns exec container-$PID ip addr add 192.168.1.100/24 dev enp129s0f0
sudo ip netns exec container-$PID ip link set enp129s0f0 up
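With the interface now living in the helper container's namespace, start the actual workload attached to that same namespace (the image name is a placeholder):
# The application container shares network-container's namespace and sees
# enp129s0f0 as a local interface
sudo docker run -it --network=container:network-container your-application-image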
For some use cases, macvlan might be a simpler alternative:
# Create a macvlan network
sudo docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=enp129s0f0 \
  macvlan40g
# Run your container attached to this network
sudo docker run --network=macvlan40g your-application-image
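One macvlan caveat worth knowing: the host cannot reach macvlan containers over the parent interface itself. If host-to-container traffic matters, a common workaround is a macvlan shim interface on the host (the 192.168.1.200 address is an example):
# Host-side macvlan interface in bridge mode, routed to the container subnet
sudo ip link add macvlan-shim link enp129s0f0 type macvlan mode bridge
sudo ip addr add 192.168.1.200/32 dev macvlan-shim
sudo ip link set macvlan-shim up
sudo ip route add 192.168.1.0/24 dev macvlan-shim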
- Direct interface assignment provides near-native performance (99%+ of bare metal)
- No extra per-packet hop through the host bridge and veth pair
- Ideal for packet processing applications like DPDK, PF_RING, or custom network stacks
- CPU utilization drops significantly compared to bridged networking
If you encounter issues:
# Verify interface status in container
sudo docker exec -it network-container ip link show
# Check network namespace mapping
ls -l /var/run/netns/
# Verify routing inside container namespace
sudo ip netns exec container-$PID ip route
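To undo the assignment without tearing anything down, move the interface back into the host's (PID 1) network namespace; stopping the helper container has the same effect, since physical interfaces return to the initial namespace when their namespace is destroyed:
# Hand the interface back to the host
sudo ip netns exec container-$PID ip link set enp129s0f0 netns 1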
When working with high-performance networking applications in Docker, the standard virtual networking stack can introduce significant overhead. The typical veth
pairs and bridge configurations may not be suitable for scenarios requiring:
- Low-latency 40GbE/100GbE network traffic
- Bypassing kernel networking stack
- Direct hardware access for specialized NIC features
There are several methods to achieve direct physical interface assignment:
Method 1: Using Linux Network Namespaces
This approach moves the physical interface directly into the container's network namespace:
# Find container's PID
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' container_name)
# Create a persistent namespace directory
mkdir -p /var/run/netns
ln -sf /proc/$CONTAINER_PID/ns/net /var/run/netns/$CONTAINER_PID
# Move physical interface to container's namespace
ip link set eth1 netns $CONTAINER_PID
# Inside container namespace (use nsenter if needed)
ip netns exec $CONTAINER_PID ip link set eth1 up
ip netns exec $CONTAINER_PID ip addr add 192.168.1.100/24 dev eth1
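Because this container was not created with ip netns, nsenter is the other convenient way to run verification commands inside its network namespace:
# Equivalent checks via nsenter against the container's PID
nsenter -t $CONTAINER_PID -n ip addr show eth1
nsenter -t $CONTAINER_PID -n ip route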
Method 2: Using PCI Passthrough
For ultimate performance, consider PCI passthrough:
# First, unbind the NIC from its host driver
echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# Then expose the userspace I/O device node (via device cgroups) and the PCI
# sysfs entry to the container
docker run --device=/dev/uio0 \
  -v /sys/bus/pci/devices/0000:01:00.0:/sys/bus/pci/devices/0000:01:00.0 \
  --privileged -it my_container
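Note that /dev/uio0 only exists once the device is bound to a userspace I/O driver. A minimal sketch assuming the in-tree uio_pci_generic module (DPDK setups more commonly use vfio-pci):
# Bind the NIC to a userspace I/O driver so /dev/uio0 appears
modprobe uio_pci_generic
echo "uio_pci_generic" > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo "0000:01:00.0" > /sys/bus/pci/drivers/uio_pci_generic/bind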
When implementing direct interface assignment:
- The interface will disappear from the host system
- You'll need to handle interface configuration entirely within the container
- Some NIC features (like SR-IOV) may require additional host-side configuration (see the sketch after this list)
- Security implications of privileged containers should be evaluated
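For the SR-IOV point above, a minimal host-side sketch for creating virtual functions (interface name and VF count are assumptions):
# Check how many VFs the NIC supports, then create four of them
cat /sys/class/net/eth1/device/sriov_totalvfs
echo 4 > /sys/class/net/eth1/device/sriov_numvfs
# Each VF appears as its own netdev/PCI device and can be moved into a
# container namespace just like the physical interface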
Here's a quick benchmark comparing the different methods (lower latency and higher throughput are better):
Method | Latency (μs) | Throughput (Gbps) |
---|---|---|
Standard Docker bridge | 120 | 9.8 |
Macvlan | 85 | 38.2 |
Direct assignment | 12 | 39.8 |
PCI Passthrough | 8 | 39.9 |
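These numbers will vary with hardware and tuning. To run a comparable measurement yourself, a simple throughput and latency check between the container and a peer on the same network (addresses assumed) looks like:
# On the remote peer
iperf3 -s
# From inside the container: multi-stream throughput, then round-trip latency
iperf3 -c 192.168.1.1 -P 4 -t 30
ping -c 100 192.168.1.1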
If the interface doesn't appear in your container:
# Check if interface exists in the namespace
ip netns exec $CONTAINER_PID ip link show
# Verify driver binding
lspci -k -s 0000:01:00.0
# Check kernel messages
dmesg | grep eth1
Remember that containers share the host kernel, so NIC kernel drivers must be loaded on the host rather than inside the container; the container image only needs the matching userspace tooling (for example iproute2, ethtool, or DPDK libraries), which may still mean building a custom Docker image.
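A quick host-side check that the kernel driver is in place (the i40e driver name is an assumption for an Intel 40GbE NIC; substitute your NIC's driver):
# On the host: confirm the NIC driver module is loaded
lsmod | grep i40e
modprobe i40e
# lspci -k (shown above) reports which driver is actually bound to the device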