In Linux networking, port exhaustion becomes a concern when dealing with high-volume connections. The fundamental rule is that each TCP connection must be uniquely identified by a 4-tuple:
(source IP, source port, destination IP, destination port)
This means:
- You can have up to ~64k connections from 127.0.0.1:(ephemeral) to 127.0.0.1:80
- Another ~64k connections from 127.0.0.1:(ephemeral) to 127.0.0.1:443
- But only that many total connections from a single source IP to a specific (destination IP, destination port) pair, because the source port is the only field left to vary; the practical ceiling is the size of the ephemeral port range, not the full 65,535
The Linux kernel's ephemeral port range is controlled by the net.ipv4.ip_local_port_range sysctl (the default on modern kernels is 32768 60999):
sysctl net.ipv4.ip_local_port_range
To check current settings:
cat /proc/sys/net/ipv4/ip_local_port_range
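The arithmetic behind the per-destination ceiling can be sketched in Python; the helper name `ephemeral_capacity` is mine, and the parser simply mirrors the "low high" format the kernel prints:

```python
# Sketch: compute the per-destination connection ceiling from the
# ip_local_port_range setting (helper name is mine, not a kernel API).
def ephemeral_capacity(range_text: str) -> int:
    """Parse 'low high' as printed by the sysctl; return the port count."""
    low, high = map(int, range_text.split())
    # Each outgoing connection to one (dst IP, dst port) pair consumes
    # one source port, so this is the per-destination ceiling.
    return high - low + 1

# On a live system you would feed it the proc file:
#   with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
#       print(ephemeral_capacity(f.read()))
print(ephemeral_capacity("32768 60999"))  # -> 28232
print(ephemeral_capacity("1024 65535"))   # -> 64512
```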
When a service intermittently fails to bind its port at startup (like MySQL on 3306 when the ephemeral range starts at 1024), it's because:
- Binding requires the (local IP, local port) pair to be free
- If the ranges overlap, an outgoing connection may already be holding the service's port as its ephemeral source port
Solution: Either keep service ports below the ephemeral range, adjust the range, or exclude individual ports via net.ipv4.ip_local_reserved_ports:
# Reserve ports below 10000 for services
sysctl -w net.ipv4.ip_local_port_range="10000 65535"
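Before settling on a range, a short script can flag service ports that fall inside a candidate ephemeral range; `conflicting_ports` is a hypothetical helper of mine, not a system API:

```python
# Sketch: flag service ports that fall inside a configured ephemeral
# range and could therefore collide with outgoing connections.
def conflicting_ports(range_text: str, service_ports) -> list:
    low, high = map(int, range_text.split())
    return [p for p in service_ports if low <= p <= high]

print(conflicting_ports("1024 65535", [3306, 5432, 80]))   # -> [3306, 5432]
print(conflicting_ports("10000 65535", [3306, 5432, 80]))  # -> []
```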
To monitor current connections and port usage:
ss -tulnp
netstat -tulnp # legacy; ss is the modern replacement
awk 'NR>1' /proc/net/tcp | wc -l # Count IPv4 TCP sockets (skip the header line)
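Raw counting of /proc/net/tcp becomes more useful when sockets are tallied per TCP state, since TIME_WAIT buildup is the usual symptom of exhaustion. A sketch that parses the file's hex state column (the state table mirrors the kernel's TCP state numbering; `count_states` is my own helper name):

```python
# Sketch: tally TCP socket states by parsing /proc/net/tcp text,
# useful for spotting TIME_WAIT buildup without shelling out to ss.
from collections import Counter

TCP_STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
    "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
    "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
    "0A": "LISTEN", "0B": "CLOSING",
}

def count_states(proc_net_tcp_text: str) -> Counter:
    counts = Counter()
    for line in proc_net_tcp_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) > 3:  # field 3 is the state in hex
            counts[TCP_STATES.get(fields[3], fields[3])] += 1
    return counts

# On a live system:
#   with open("/proc/net/tcp") as f:
#       print(count_states(f.read()))
```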
Additional kernel parameters to manage port exhaustion:
# Allow reusing TIME_WAIT sockets for new outgoing connections
sysctl -w net.ipv4.tcp_tw_reuse=1
# Shorten the FIN_WAIT_2 timeout (note: the TIME_WAIT period itself is
# fixed at 60s in the kernel and is not tunable via sysctl)
sysctl -w net.ipv4.tcp_fin_timeout=30
# Raise the cap on listen() backlogs (not a limit on total connections)
sysctl -w net.core.somaxconn=32768
For a web server handling many clients:
# Typical configuration for high-traffic server
sysctl -w net.ipv4.ip_local_port_range="32768 60999"
sysctl -w net.ipv4.tcp_tw_reuse=1
# net.ipv4.tcp_tw_recycle is best avoided: it broke clients behind NAT and was removed entirely in Linux 4.12
sysctl -w net.ipv4.tcp_max_tw_buckets=180000
For applications making many outgoing connections, implement connection pooling:
# Python example using requests' built-in connection pooling
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retries = Retry(total=3, backoff_factor=0.1)
adapter = HTTPAdapter(pool_connections=100, pool_maxsize=100, max_retries=retries)
session = requests.Session()
session.mount('http://', adapter)
session.mount('https://', adapter)
When testing connection limits, be aware of these constraints:
- Each source IP is limited to ~64k connections per (destination IP, destination port) pair
- Multiple IPs can scale beyond this limit
- Consider using multiple source IPs for testing
After years of debugging networking issues and poring through kernel documentation, I want to clarify the real mechanics behind Linux port exhaustion. The key insight lies in understanding connection tuples:
// The magic 4-tuple that defines connection uniqueness
struct connection_tuple {
    __be32 src_ip;
    __be16 src_port;
    __be32 dst_ip;
    __be16 dst_port;
};
Your understanding is correct - a single machine can theoretically maintain:
- 65,535 connections from 192.168.1.100:random_port → 10.0.0.1:80
- Another 65,535 connections to 10.0.0.1:443
- And another 65,535 to 10.0.0.2:80
Here's a quick test using netcat to demonstrate:
# Terminal 1: Start a listener that accepts multiple connections
# (-k is OpenBSD netcat; plain "nc -l" exits after the first client)
nc -lk 8080
# Terminal 2: Generate connections
for i in {1..1000}; do
nc localhost 8080 &
done
# Check connection count
ss -tan | grep 8080 | wc -l
Where many get confused is the difference between:
- Binding: (local_ip, local_port) must be unique
- Connecting: The full 4-tuple must be unique
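The distinction can be demonstrated directly: with SO_REUSEPORT (Linux 3.9+), two client sockets can bind the same (local IP, local port) and still both connect, because their 4-tuples differ in the destination. A minimal sketch, assuming a Linux host where loopback listeners can be created:

```python
# Sketch: two clients share ONE local port but form distinct 4-tuples
# by connecting to different destination ports (requires Linux 3.9+).
import socket

def make_listener():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # kernel picks a free port
    srv.listen(8)
    return srv

srv_a, srv_b = make_listener(), make_listener()

def connect_from(local_port, target):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Both clients must set SO_REUSEPORT before bind to share a port.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", local_port))
    s.connect(target)
    return s

c1 = connect_from(0, srv_a.getsockname())
shared_port = c1.getsockname()[1]
c2 = connect_from(shared_port, srv_b.getsockname())  # same local port, different dst

print("both clients use local port", shared_port)
```

Binding the second socket to an already-used (local IP, local port) would fail without SO_REUSEPORT; connecting succeeds either way as long as the full 4-tuple is unique.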
This explains why MySQL can fail to bind 3306 when the ephemeral range overlaps it:
# Bad practice (conflict possible)
net.ipv4.ip_local_port_range = 1024 65535
# Better configuration
net.ipv4.ip_local_port_range = 32768 60999
For high-connection services, implement these patterns:
# 1. Multiple IP addresses
ip addr add 192.168.1.101/24 dev eth0
# 2. Port range tuning
sysctl -w net.ipv4.ip_local_port_range="32768 60999"
sysctl -w net.ipv4.tcp_tw_reuse=1
# 3. Connection pooling example (Python): pre-open a fixed set of sockets
import socket
from concurrent.futures import ThreadPoolExecutor

def create_connection(target):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(target)
    return s

with ThreadPoolExecutor(max_workers=100) as executor:
    connections = list(executor.map(
        create_connection,
        [('10.0.0.1', 80)] * 1000
    ))
Dive deeper with these kernel parameters:
- net.ipv4.tcp_max_tw_buckets: caps how many TIME_WAIT sockets the system holds at once
- net.ipv4.tcp_fin_timeout: controls how long orphaned sockets linger in FIN_WAIT_2
- net.core.somaxconn: caps the listen() backlog queue size
For extreme cases, consider SO_REUSEPORT:
int optval = 1;
setsockopt(sockfd, SOL_SOCKET, SO_REUSEPORT, &optval, sizeof(optval));
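The same option is reachable from Python. As a minimal sketch (assuming Linux 3.9+), two sockets can listen on one port and the kernel will spread incoming connections between them; in real deployments each worker process would typically create its own listener:

```python
# Sketch: two listening sockets share one port via SO_REUSEPORT
# (Linux 3.9+); the kernel load-balances incoming connections.
import socket

def reuseport_listener(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s

a = reuseport_listener(0)                   # kernel picks a free port
b = reuseport_listener(a.getsockname()[1])  # second listener, same port
print("two listeners on port", a.getsockname()[1])
```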