When we encapsulate TCP traffic within another TCP connection (TCP-over-TCP), we create a scenario where two congestion control mechanisms interact in problematic ways. The inner TCP connection is unaware of actual network conditions because:
- The outer TCP connection handles packet loss and retransmits internally
- Network congestion is hidden from the inner TCP by the outer TCP's buffering
- Round-trip time measurements are inflated by the outer connection's queuing and retransmissions
Most VPN protocols default to UDP to avoid these issues, but OpenVPN also supports a TCP mode for environments where UDP is blocked. Conceptually, the core problem looks like this:
// Simplified conceptual representation of the bufferbloat issue
void handle_inner_segment(Packet p) {
    // send() "succeeds" as soon as the data fits into the outer
    // socket's send buffer -- long before it reaches the network.
    outer_tcp_send(p);

    // If the outer TCP is congested, the data just sits in that buffer,
    // yet the inner TCP keeps pacing itself against this local success
    // signal and queues ever more data into the tunnel.
}
In TCP mode, meltdown is mitigated by a combination of OpenVPN settings and the host TCP stack (a socket-level sketch follows this list):
- Buffer and window tuning: OpenVPN's sndbuf/rcvbuf options bound how much data can queue in the tunnel socket, which indirectly limits the effective window
- Selective acknowledgements (SACK): provided by the OS TCP stack, they let the outer connection retransmit only what was actually lost
- Explicit Congestion Notification (ECN): when the path and both endpoints support it, the outer connection can back off without waiting for packet loss
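To make the buffer tuning concrete, here is a minimal standalone sketch of the socket options involved; it is not OpenVPN source, and the buffer sizes are just the example values used later in this article:
/* Minimal sketch: bound the tunnel socket's buffers and disable Nagle's
 * algorithm -- roughly what OpenVPN's sndbuf, rcvbuf and tcp-nodelay
 * options ask the operating system to do. Example values only. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int tune_tunnel_socket(int fd)
{
    int sndbuf  = 393216;  /* send buffer in bytes */
    int rcvbuf  = 393216;  /* receive buffer in bytes */
    int nodelay = 1;       /* flush small writes immediately */

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        return -1;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        return -1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));
}
Keeping these buffers bounded limits how much data can pile up inside the tunnel when the outer connection stalls, which shortens the feedback loop the inner connections see.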
Here's sample output comparing throughput in different modes:
OpenVPN Benchmark Results:
UDP Mode:          85 ± 2.3 Mbps
TCP Mode:          62 ± 5.1 Mbps (with mitigations)
TCP Mode (naive):  28 ± 12.4 Mbps
Conceptually, the logic that keeps a TCP tunnel from melting down looks like this (illustrative pseudocode, not OpenVPN's actual source):
// Illustrative pseudocode -- not taken from OpenVPN's source
void process_tcp_packet(Packet *packet) {
    // If too much data is already queued toward the outer TCP socket,
    // stop accepting more from the tun device and shrink the advertised
    // window so senders inside the tunnel back off.
    if (tcp_queue_size > THRESHOLD) {
        throttle_connection();
        send_tcp_window_adjustment();
    }

    // Give retransmitted segments priority so stalled inner
    // connections recover before new data queues behind them.
    if (is_retransmit(packet)) {
        prioritize_retransmit(packet);
    }
}
Despite the challenges, TCP mode is valuable when:
- UDP ports are blocked by restrictive firewalls (a minimal TCP-mode client configuration is sketched after this list)
- Network conditions are stable (low packet loss)
- The VPN is used primarily for non-latency-sensitive traffic
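For the blocked-firewall case, switching a client profile to TCP is a small change. A minimal sketch follows; the hostname is a placeholder, proto and remote are standard OpenVPN directives, and port 443 is chosen because it is rarely filtered:
# Client-side excerpt: tunnel over TCP port 443 instead of UDP
proto tcp-client
remote vpn.example.com 443
The server needs a matching TCP listener (proto tcp-server) on the same port.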
Stepping back to the architecture: when we nest TCP connections (TCP-over-TCP), one TCP stack operates on top of another. This pattern appears whenever OpenVPN is configured to use TCP as its tunnel transport while carrying ordinary TCP traffic from applications.
// Simplified conceptual illustration
[Application TCP (inner)] -> [OpenVPN tunnel TCP (outer)] -> [Physical network]
The fundamental issue stems from competing congestion control mechanisms. The inner TCP connection remains unaware of actual network conditions because:
- The outer TCP connection handles packet loss and retransmits automatically
- Bufferbloat occurs when inner TCP continues transmitting at full speed
- Two separate retransmission timers (inner and outer) interact unpredictably (see the timing sketch after this list)
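To make the timer interaction concrete, here is a small standalone simulation with assumed, illustrative timeout values; real TCP stacks derive their retransmission timeouts from measured round-trip times, so the numbers are placeholders:
/* Toy model of stacked retransmission timers during an outage.
 * Both layers start from assumed RTO values and double on every expiry
 * (classic exponential backoff); real RTOs come from RTT estimators. */
#include <stdio.h>

int main(void)
{
    double outage_ms = 1500.0;  /* assumed physical-path outage */
    double outer_rto = 200.0;   /* outer (tunnel) TCP initial RTO */
    double inner_rto = 1000.0;  /* inner (application) TCP initial RTO */
    double t_outer = 0.0, t_inner = 0.0;

    /* The outer TCP keeps retransmitting the lost tunnel segment. */
    while (t_outer + outer_rto < outage_ms) {
        t_outer += outer_rto;
        outer_rto *= 2;
        printf("t=%6.0f ms  outer TCP retransmits (RTO now %.0f ms)\n",
               t_outer, outer_rto);
    }

    /* Meanwhile the inner TCP sees no ACK progress either, so its own
     * timer fires and it re-sends data that was never actually lost,
     * pushing duplicate bytes into the already-stalled tunnel. */
    while (t_inner + inner_rto < outage_ms) {
        t_inner += inner_rto;
        inner_rto *= 2;
        printf("t=%6.0f ms  inner TCP retransmits into the tunnel "
               "(duplicate data queued)\n", t_inner);
    }
    return 0;
}
Even in this toy model, a single outage on the physical path makes the inner connection re-send data that was never actually lost, and those duplicate bytes must then be carried (and possibly retransmitted again) by the struggling outer connection -- the feedback loop commonly called TCP meltdown.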
This isn't just a local buffer management issue. The compounding effect:
- Increases overall network latency
- Triggers unnecessary congestion avoidance mechanisms
- Can degrade performance for other flows sharing the network
OpenVPN implements several mitigation strategies in TCP mode:
# OpenVPN configuration example (TCP-mode tuning)
# Disable Nagle's algorithm (TCP_NODELAY) on the tunnel's TCP socket
tcp-nodelay
# MTU of the virtual tun device
tun-mtu 1500
# Clamp the MSS of TCP sessions inside the tunnel so encapsulated packets avoid fragmentation
mssfix 1450
# TCP socket send/receive buffer sizes, in bytes
sndbuf 393216
rcvbuf 393216
Key technical approaches include (a short MSS arithmetic sketch follows the list):
- Socket buffer sizing: sndbuf and rcvbuf bound how much data can queue inside the tunnel, keeping the inner connections' feedback loop short
- MSS clamping: mssfix keeps encapsulated packets within the link MTU, avoiding IP fragmentation and the retransmissions it tends to trigger
- Reduced send-side delay: tcp-nodelay disables Nagle's algorithm so small tunneled packets are not held back waiting to be coalesced
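To see how the mssfix value relates to the MTU, here is a small arithmetic sketch. The CRYPTO_OVERHEAD figure is an assumed placeholder, since the real per-packet cost depends on the configured cipher, HMAC, and compression:
/* Sketch of the arithmetic behind mssfix. mssfix bounds the size of the
 * encapsulated OpenVPN packet handed to the outer transport; the crypto
 * overhead below is an assumed placeholder value. */
#include <stdio.h>

#define LINK_MTU        1500  /* MTU of the physical path */
#define OUTER_HDRS      40    /* outer IPv4 (20) + TCP (20) headers */
#define INNER_HDRS      40    /* inner IPv4 (20) + TCP (20) headers */
#define CRYPTO_OVERHEAD 40    /* assumed OpenVPN framing + crypto cost */

int main(void)
{
    int mssfix = 1450;  /* the mssfix value from the configuration above */

    /* Largest MSS that can be announced to tunneled TCP sessions so the
     * encapsulated packet stays within the mssfix budget. */
    int inner_mss = mssfix - INNER_HDRS - CRYPTO_OVERHEAD;

    /* Size of the final packet on the wire once the outer headers are added. */
    int on_wire = mssfix + OUTER_HDRS;

    printf("advertised inner MSS: %d bytes\n", inner_mss);
    printf("on-wire packet: %d bytes (link MTU %d) -> %s\n",
           on_wire, LINK_MTU,
           on_wire <= LINK_MTU ? "no fragmentation" : "would fragment");
    return 0;
}
If the overhead estimate grows (for example with a larger HMAC), mssfix has to shrink accordingly to keep the on-wire packet under the MTU.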
In practice, OpenVPN's TCP mode remains usable because:
- Modern networks have sufficient bandwidth to mask inefficiencies
- The encryption overhead becomes the bigger bottleneck
- Most usage patterns don't saturate the tunnel completely
A simple iperf test reveals the difference:
# VPN in UDP mode: UDP stream through the tunnel at a fixed 100 Mbit/s
iperf3 -c vpnserver -u -b 100M
# VPN in TCP mode: TCP throughput test through the tunnel
iperf3 -c vpnserver
Typical results show 20-30% throughput reduction in TCP mode, but this varies based on:
- Network latency and stability
- Client/server hardware capabilities
- Concurrent connection count (which can be exercised with parallel iperf3 streams, as sketched below)
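To probe that last factor, iperf3's parallel-stream option can be run through the tunnel; the stream count and duration below are arbitrary example values:
# Four parallel TCP streams for 30 seconds through the tunnel
iperf3 -c vpnserver -P 4 -t 30
Because every inner flow is multiplexed over the single outer TCP connection in TCP mode, a stall in the outer connection delays all streams at once, so this test tends to show noticeably more variance in TCP mode than in UDP mode.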