Fixing Low OpenVPN Throughput over TCP: Network Tuning for 100Mbit Ports with Minimal CPU Utilization


When analyzing OpenVPN performance issues, the first red flag in this configuration is that TCP is used both as OpenVPN's transport (proto tcp) and by the traffic carried inside the tunnel. This creates a classic TCP-over-TCP scenario:

Tunneled traffic: TCP (its own retransmission and flow control)
└── OpenVPN tunnel: proto tcp
    └── Outer transport: TCP (its own retransmission and flow control)

The nested TCP stacks compete for bandwidth and create congestion control conflicts. This explains why:

  • Ping times become erratic during transfers (TCP retransmissions; see the check after this list)
  • CPU utilization remains low (not a processing bottleneck)
  • Non-encrypted traffic performs normally
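
A quick way to confirm that these symptoms come from retransmissions in the outer TCP session, rather than from OpenVPN itself, is to watch the kernel's TCP counters while a transfer runs. A minimal check with standard CentOS tools (the grep pattern and interval are just illustrative):

# Cumulative TCP retransmission counters; run before and during an iperf test
netstat -s | grep -i retrans

# Or watch the counters climb in real time during the transfer
watch -n 2 'netstat -s | grep -i retrans'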

Replace the current TCP configuration with UDP and add performance-tuning parameters:

# Server A config (UDP version)
proto udp
dev tun
tun-mtu 1500
fragment 0
mssfix 0
sndbuf 393216
rcvbuf 393216
push "sndbuf 393216"
push "rcvbuf 393216"
comp-lzo no
# Server B config (UDP version)
proto udp
remote 204.11.60.69
dev tun
tun-mtu 1500
fragment 0
mssfix 0
sndbuf 393216
rcvbuf 393216
comp-lzo no
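
One practical note when switching protocols: the host firewall (and any ACLs in between) must allow UDP on the OpenVPN port, which is 1194 by default. A rough sketch for a stock CentOS 6 iptables setup, assuming the default port:

# Allow inbound OpenVPN over UDP (insert ahead of any blanket REJECT rule)
iptables -I INPUT -p udp --dport 1194 -j ACCEPT
service iptables save

# Confirm the daemon is now bound to a UDP socket (requires root for -p)
netstat -ulnp | grep openvpn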

If UDP isn't an option due to network restrictions, consider these TCP-specific optimizations:

# TCP-specific tweaks
proto tcp
tun-mtu 1400
mssfix 1360
socket-flags TCP_NODELAY
push "socket-flags TCP_NODELAY"
sndbuf 393216
rcvbuf 393216
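
Keep in mind that sndbuf/rcvbuf are only requests; the kernel clips them to its per-socket maximums, which on a stock CentOS 6 install can be well below 393216. A sketch of raising those ceilings so the requested sizes actually take effect (persist the values in /etc/sysctl.conf if they help):

# Raise the kernel's socket buffer ceilings to match the requested 393216 bytes
sysctl -w net.core.rmem_max=393216
sysctl -w net.core.wmem_max=393216

# OpenVPN logs a "Socket Buffers:" line at startup showing what it actually got
# (log location depends on your setup; /var/log/messages is typical for the init script)
grep -i 'socket buffers' /var/log/messages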

After applying changes, verify performance with these commands:

# Check interface statistics
ip -s link show tun0

# Advanced iperf testing
iperf -c 10.0.0.2 -t 60 -i 10 -p 5001 -w 256K -P 4

# Monitor the interface counters seen by the OpenVPN process
watch -n 1 'cat /proc/$(pgrep -o openvpn)/net/dev'

The expected throughput improvement should be visible within 10-15 seconds of starting the test.
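
If the improvement doesn't materialize, one thing worth checking on the receiving side is whether UDP datagrams are being dropped for lack of socket buffer space; a steadily climbing receive-error counter suggests the buffer sizes (or the kernel limits behind them) are still too small. The counters are cumulative since boot:

# UDP protocol counters; compare snapshots taken before and after a test run
netstat -su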


When testing OpenVPN connectivity between two CentOS 6.6 servers with 100Mbit network interfaces, I encountered surprisingly low throughput of just 6.5Mbps despite minimal CPU utilization. Standard iperf tests without VPN showed expected ~88Mbps performance, indicating the bottleneck was specifically VPN-related.

# Baseline iperf test (non-VPN)
$ iperf -c remote_server
[  3]  0.0-10.0 sec   107 MBytes  89.6 Mbits/sec

# VPN tunnel iperf test
$ iperf -c 10.0.0.2
[  4]  0.0-10.0 sec  7.38 MBytes  6.19 Mbits/sec

The most telling indicators were:

  • Normal 60ms ping times spiking to 200+ms during transfers (reproduced below)
  • Single-core OpenVPN process showing minimal CPU usage (2-3%)
  • Identical performance regardless of cipher strength (RC2-40-CBC vs default)
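
The latency spike in the first bullet is easy to reproduce by pinging through the tunnel while iperf saturates it; the addresses below just reuse the tunnel endpoints from the tests above:

# Start a 60-second transfer through the tunnel in the background...
iperf -c 10.0.0.2 -t 60 &

# ...and watch round-trip times degrade while it runs
ping -c 30 10.0.0.2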

The root cause emerged when switching the tunnel from TCP to UDP. The original configuration used:

# Server config (TCP)
proto tcp-server

# Client config (TCP) 
proto tcp-client

After modifying to UDP:

# Server config
proto udp

# Client config
proto udp
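
Both ends have to be switched and restarted together, since a UDP peer cannot reach a TCP listener. On CentOS 6 with the stock init script this amounts to something like the following on each host (service name assumed from the packaged init script):

# Apply the proto change on each end
service openvpn restart

# Confirm the tunnel is back before re-running iperf
ping -c 3 10.0.0.2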

Throughput immediately jumped to ~75Mbps. The TCP-in-TCP setup had been adding unnecessary overhead and triggering the classic "TCP meltdown" problem, where the two nested TCP stacks' retransmission and congestion-control mechanisms work against each other.

Further optimizations included:

# Increase buffer sizes
sndbuf 393216
rcvbuf 393216

# Disable encryption for internal trusted networks
cipher none
auth none

# Adjust MTU
tun-mtu 1500
mssfix 1400
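
The MTU settings are worth verifying rather than assuming. A simple check is to send don't-fragment pings through the tunnel at the largest payload that should fit; 1472 assumes the 1500-byte tun-mtu above (1500 minus 20 bytes IP header and 8 bytes ICMP header):

# Should succeed if 1500-byte packets traverse the tunnel intact
ping -M do -s 1472 -c 3 10.0.0.2

# If it reports "frag needed", step the payload size down until it passes
ping -M do -s 1400 -c 3 10.0.0.2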

The optimized server configuration became:

port 1194
proto udp
dev tun
sndbuf 393216
rcvbuf 393216
tun-mtu 1500
mssfix 1400
verb 3
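
For completeness, the connecting side needs to mirror the protocol and MTU choices. A minimal sketch of the matching peer config, reusing the remote address from the earlier examples (any keys, certificates, or other auth directives from the existing setup would carry over unchanged):

remote 204.11.60.69
port 1194
proto udp
dev tun
sndbuf 393216
rcvbuf 393216
tun-mtu 1500
mssfix 1400
verb 3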

With these changes, iperf tests now consistently show 85+ Mbps throughput with stable ping times during transfers.