In Linux networking, two critical parameters control TCP receive buffer sizes:
net.core.rmem_max - Maximum receive buffer size allowed for ALL socket types
net.ipv4.tcp_rmem - Min/default/max buffer sizes specifically for TCP sockets
The effective maximum TCP receive buffer size is determined by:
- The third value in tcp_rmem (TCP-specific limit)
- But cannot exceed net.core.rmem_max (system-wide cap)
In other words, the effective maximum is min(tcp_rmem[2], rmem_max), as the two cases and the check below show.
Let's examine two concrete examples:
Case 1
net.core.rmem_max = 7388608
net.ipv4.tcp_rmem = 4096 87380 8388608
Result: Maximum TCP buffer is 7,388,608 bytes (constrained by rmem_max)
Case 2
net.core.rmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 7388608
Result: Maximum TCP buffer is 7,388,608 bytes (constrained by tcp_rmem)
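To see which limit wins on a given machine, you can compute min(tcp_rmem[2], rmem_max) directly from the live sysctls. A minimal sketch (reads the standard procfs paths; uses bash arithmetic):
# Print the effective TCP receive-buffer ceiling
rmem_max=$(cat /proc/sys/net/core/rmem_max)
tcp_max=$(awk '{print $3}' /proc/sys/net/ipv4/tcp_rmem)
echo "effective max: $(( tcp_max < rmem_max ? tcp_max : rmem_max )) bytes"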
For optimal performance:
# Recommended configuration approach:
sysctl -w net.core.rmem_max=16777216
sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sysctl -w net.ipv4.tcp_window_scaling=1
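Note that sysctl -w changes are lost on reboot. One way to persist them (run as root; the file name under /etc/sysctl.d/ is arbitrary and chosen here only for illustration):
# Persist the settings across reboots
cat >/etc/sysctl.d/90-tcp-buffers.conf <<'EOF'
net.core.rmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
EOF
sysctl --system   # reload all sysctl configuration files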
Check actual buffer allocation:
ss -ntmp
cat /proc/net/sockstat
Observe the "skmem" values showing actual memory usage per socket.
The kernel dynamically adjusts buffers between the min/max values based on:
- Network conditions (latency, bandwidth)
- System memory pressure
- TCP window scaling configuration (you can watch the adjustment live; see the sketch below)
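To watch autotuning in action, poll the rb value of a busy socket while data flows. A sketch (port 5201 and a running iperf3 transfer are assumptions for illustration only):
# Poll the receive-buffer limit of matching sockets once per second
watch -n1 "ss -ntm 'sport = :5201' | grep -o 'rb[0-9]*'"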
For high-performance servers, consider:
# Enable autotuning:
sysctl -w net.ipv4.tcp_moderate_rcvbuf=1
# Increase max for 10Gbps+ networks:
sysctl -w net.core.rmem_max=33554432
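How large is large enough? The ceiling should cover the bandwidth-delay product (BDP) of your links, i.e. roughly one round-trip's worth of data in flight. A back-of-the-envelope calculation (10 Gbit/s and a 2 ms RTT are illustrative numbers, not recommendations):
# BDP = bandwidth (bytes/s) x round-trip time (s)
bits_per_sec=$((10 * 1000 * 1000 * 1000))   # 10 Gbit/s
rtt_ms=2                                    # round-trip time in milliseconds
echo "BDP: $(( bits_per_sec / 8 * rtt_ms / 1000 )) bytes"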
The interplay between rmem_max and tcp_rmem has important practical consequences:
- For high-performance servers, always set rmem_max >= tcp_rmem[2] (a quick check is sketched below), otherwise the TCP-specific maximum can never be reached
- Applications can request buffers up to rmem_max via setsockopt(SO_RCVBUF); doing so disables autotuning for that socket
- For sockets that don't set SO_RCVBUF, the kernel dynamically adjusts between the default and max values based on traffic
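A minimal sanity check for the rmem_max >= tcp_rmem[2] invariant (sysctl -n prints just the value):
# Warn if tcp_rmem's max can never take effect because rmem_max is lower
rmem_max=$(sysctl -n net.core.rmem_max)
tcp_max=$(sysctl -n net.ipv4.tcp_rmem | awk '{print $3}')
if [ "$tcp_max" -gt "$rmem_max" ]; then
    echo "warning: tcp_rmem max ($tcp_max) exceeds rmem_max ($rmem_max)"
fi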
Verify both the configured limits and the live per-connection usage:
# View current settings
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/ipv4/tcp_rmem
# Monitor live connections (the memory counters appear in the skmem field)
ss -ntmp | grep -i skmem
# For latency-sensitive applications on older kernels; note that this sysctl
# became a no-op around Linux 4.14, when the TCP prequeue it controlled was removed
sysctl -w net.ipv4.tcp_low_latency=1
Common issues and solutions:
- If buffers keep hitting the max: increase rmem_max and tcp_rmem[2] together, proportionally
- For container environments: rmem_max is global, so set it on the host; tcp_rmem is per network namespace on recent kernels and can be set per container (example below)
- When changing live: apply with sysctl -w (which writes through /proc) and also persist the values in /etc/sysctl.conf or /etc/sysctl.d/
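A sketch of the container case (assumes Docker and a kernel that namespaces tcp_rmem, roughly 4.15 and later; the alpine image is just a placeholder):
# On the host: the global cap
sysctl -w net.core.rmem_max=16777216
# Per container: the namespaced TCP limits
docker run --sysctl net.ipv4.tcp_rmem="4096 87380 16777216" alpine \
    sysctl net.ipv4.tcp_rmem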