When dealing with high-frequency API calls (3000 requests/minute) returning large JSON payloads (~450KB each), Linux systems may exhibit network stack issues evidenced by these netstat -s metrics:
254329 packets pruned from receive queue because of socket buffer overrun
50678438 packets collapsed in receive queue due to low socket buffer
Packet pruning occurs when the receive queue fills faster than the application can drain it, so the kernel drops data that has already arrived and forces the peer to retransmit. Collapsed packets indicate the kernel is merging adjacent segments in the receive queue to reclaim memory because the socket buffer is under pressure. Both symptoms point to inadequate socket buffer configuration.
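If you want to watch these counters move in real time, here's a minimal Python sketch that polls /proc/net/netstat (the TcpExt name/value line pairs) and prints any counter whose name mentions pruning or collapsing; the 5-second interval and the substring matching are arbitrary choices, not anything the kernel mandates:
# Poll /proc/net/netstat and report TcpExt counters related to pruning/collapsing
import time

def tcpext_counters():
    # /proc/net/netstat holds pairs of lines: one with field names, one with values
    with open('/proc/net/netstat') as f:
        lines = f.read().splitlines()
    counters = {}
    for header, values in zip(lines[::2], lines[1::2]):
        if header.startswith('TcpExt:'):
            names = header.split()[1:]
            nums = [int(v) for v in values.split()[1:]]
            counters.update(zip(names, nums))
    return counters

prev = tcpext_counters()
while True:
    time.sleep(5)  # sampling interval (arbitrary choice)
    cur = tcpext_counters()
    for name, value in cur.items():
        if 'Prune' in name or 'Collaps' in name:
            delta = value - prev.get(name, 0)
            print(f'{name}: {value} (+{delta} in last 5s)')
    prev = cur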
For Debian systems handling heavy JSON traffic, these sysctl adjustments are crucial:
# Increase maximum socket buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Min, default, and max TCP receive/send buffer sizes
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Socket backlog queue
net.core.netdev_max_backlog = 30000
# TCP window scaling
net.ipv4.tcp_window_scaling = 1
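After loading these with sysctl -p, it's worth confirming they actually took effect (another tool or config fragment can silently override them). A small Python check, assuming the usual mapping of sysctl names to paths under /proc/sys:
# Read the tuned sysctls back from /proc/sys and compare with the expected values
EXPECTED = {
    'net.core.rmem_max': '16777216',
    'net.core.wmem_max': '16777216',
    'net.ipv4.tcp_rmem': '4096 87380 16777216',
    'net.ipv4.tcp_wmem': '4096 65536 16777216',
    'net.core.netdev_max_backlog': '30000',
    'net.ipv4.tcp_window_scaling': '1',
}

for name, expected in EXPECTED.items():
    path = '/proc/sys/' + name.replace('.', '/')
    with open(path) as f:
        current = ' '.join(f.read().split())  # normalize tabs in multi-value entries
    status = 'OK' if current == expected else f'MISMATCH (expected {expected})'
    print(f'{name} = {current}  [{status}]')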
For Node.js applications (common in JSON API scenarios), note that net.Socket does not expose SO_RCVBUF/SO_SNDBUF for TCP (setRecvBufferSize/setSendBufferSize exist only on UDP dgram sockets), so TCP connections inherit the kernel defaults configured above. What you can tune per connection is Nagle's algorithm and keep-alive:
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok'); // placeholder handler
});

// TCP sockets inherit their kernel buffer sizes from the sysctls above;
// per connection you can disable Nagle and keep connections warm for reuse.
server.on('connection', (socket) => {
  socket.setNoDelay(true);          // flush large JSON frames without coalescing delay
  socket.setKeepAlive(true, 30000); // reuse connections across requests
});

server.listen(3000);
Verify improvements using:
watch -n 1 'cat /proc/net/sockstat'
ss -tempo state established
Key metrics to monitor:
- TCP memory pressure in /proc/net/sockstat
- Receive/send queue sizes in ss output
- Retransmission rates via netstat -s
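For the memory-pressure check in particular, this rough Python sketch compares the mem field of the TCP line in /proc/net/sockstat (pages currently allocated to TCP) against the middle tcp_mem threshold, which is where the kernel starts trimming and collapsing buffers:
# Compare TCP memory usage (in pages) against the tcp_mem pressure threshold
def tcp_pages_in_use():
    # /proc/net/sockstat has a line like: "TCP: inuse 5 orphan 0 tw 2 alloc 7 mem 3"
    with open('/proc/net/sockstat') as f:
        for line in f:
            if line.startswith('TCP:'):
                fields = line.split()[1:]
                stats = dict(zip(fields[::2], (int(v) for v in fields[1::2])))
                return stats['mem']
    return 0

def tcp_mem_thresholds():
    # tcp_mem holds three page counts: low, pressure, high
    with open('/proc/sys/net/ipv4/tcp_mem') as f:
        return [int(v) for v in f.read().split()]

pages = tcp_pages_in_use()
low, pressure, high = tcp_mem_thresholds()
print(f'TCP pages in use: {pages} (pressure at {pressure}, hard limit {high})')
if pages >= pressure:
    print('WARNING: TCP is under memory pressure; buffers will be trimmed/collapsed')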
For extreme cases (50K+ reqs/sec), consider:
- Kernel bypass techniques (DPDK, XDP)
- SO_REUSEPORT to spread connections across worker processes (see the sketch after this list)
- A compact binary format such as Protocol Buffers instead of JSON
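To make the SO_REUSEPORT option concrete, here's a minimal sketch in which several forked workers each bind their own listening socket to the same port and let the kernel spread incoming connections across them; the port, worker count, and canned HTTP response are placeholders:
# SO_REUSEPORT sketch: several workers accept on the same port (Linux only)
import os
import socket

PORT = 9000    # arbitrary port for the sketch
WORKERS = 4    # arbitrary worker count

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # allow shared binding
    srv.bind(('0.0.0.0', PORT))
    srv.listen(1024)
    while True:
        conn, addr = srv.accept()
        conn.sendall(b'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok')
        conn.close()

for _ in range(WORKERS - 1):
    if os.fork() == 0:  # each child accepts on the shared port
        serve()
serve()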
Coming back to those pruned and collapsed counters, they indicate your system is struggling with:
- Insufficient socket buffer space (rmem_default/rmem_max)
- Kernel dropping packets due to buffer overflows
- TCP window scaling limitations
Here's a more aggressive tuning profile that builds on the settings above, combining multiple parameters:
# Add to /etc/sysctl.conf
# General socket buffer settings
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
# TCP-specific optimizations
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
# Queue management
net.core.netdev_max_backlog = 50000
net.core.somaxconn = 4096
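Note that net.core.somaxconn only helps if the application asks for a comparably large accept backlog, because the kernel silently caps the backlog passed to listen(). A tiny Python sketch sized to the value above (the port is arbitrary):
# Request an accept backlog that matches net.core.somaxconn
import socket

with open('/proc/sys/net/core/somaxconn') as f:
    somaxconn = int(f.read())

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('0.0.0.0', 8080))  # arbitrary port
srv.listen(somaxconn)        # larger values are silently capped at somaxconn
print(f'listening with backlog {somaxconn}')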
For applications using HTTP client libraries, consider these code-level optimizations. With Python's requests, socket options must be passed down to urllib3's connection pool through a custom HTTPAdapter (a plain session.socket_options attribute has no effect):
# Python example with requests
import socket

import requests
from requests.adapters import HTTPAdapter
from urllib3.connection import HTTPConnection

class LargeBufferAdapter(HTTPAdapter):
    # Pass SO_RCVBUF/SO_SNDBUF hints down to urllib3's connection pool;
    # values above net.core.rmem_max/wmem_max are clamped by the kernel.
    def init_poolmanager(self, *args, **kwargs):
        kwargs['socket_options'] = HTTPConnection.default_socket_options + [
            (socket.SOL_SOCKET, socket.SO_RCVBUF, 16777216),
            (socket.SOL_SOCKET, socket.SO_SNDBUF, 16777216),
        ]
        super().init_poolmanager(*args, **kwargs)

session = requests.Session()
adapter = LargeBufferAdapter(
    pool_connections=100,  # number of host pools to cache
    pool_maxsize=100,      # connections kept alive per host
    max_retries=3,
)
session.mount('http://', adapter)
session.mount('https://', adapter)
After applying changes, monitor with:
# Check current buffer sizes
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
# Monitor drops in real-time
watch -n 1 "netstat -s | grep -i 'pruned\|collapsed'"
# Detailed socket info
ss -temoi
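It also helps to confirm what buffer the kernel actually grants a socket: an explicitly requested SO_RCVBUF/SO_SNDBUF is clamped at rmem_max/wmem_max and then doubled for bookkeeping overhead. A quick Python check:
# Check the receive/send buffer sizes the kernel actually grants a TCP socket
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16777216)  # request 16MB
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16777216)

# The kernel clamps the request at net.core.rmem_max/wmem_max, then doubles it
print('SO_RCVBUF:', s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print('SO_SNDBUF:', s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()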
For extreme cases, consider:
- Implementing connection pooling
- Using HTTP/2 for multiplexing
- Compressing large JSON responses (see the sketch after this list)
- Distributing load across multiple machines
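To put a rough number on the compression option, here's a quick Python sketch comparing a JSON payload before and after gzip; the synthetic payload is only an illustration, and real ratios depend on your data:
# Rough gzip savings estimate for a large, repetitive JSON payload
import gzip
import json

# Synthetic stand-in for a ~450KB API response
payload = json.dumps([{'id': i, 'status': 'active', 'score': i * 0.5} for i in range(10000)])
raw = payload.encode('utf-8')
compressed = gzip.compress(raw, compresslevel=6)

print(f'raw: {len(raw) / 1024:.0f} KB, gzipped: {len(compressed) / 1024:.0f} KB '
      f'({100 * len(compressed) / len(raw):.0f}% of original)')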