When working with BGAN satellite connections exhibiting ~3000ms latency, standard OpenVPN configurations fail due to TCP's congestion control mechanisms. The classic symptom: small transfers succeed but larger ones (especially with SCP) stall after ~192KB due to packet loss and buffer bloat.
The fragment and mssfix parameters serve distinct purposes:

# Optimal settings for satellite links (UDP)
tun-mtu 1500
mssfix 1200
fragment 1200
mtu-disc yes
# keep mssfix <= fragment; do not set link-mtu together with tun-mtu
# (OpenVPN derives link-mtu from tun-mtu automatically)
Key differences:
- fragment: splits packets at the OpenVPN layer (before encryption)
- mssfix: modifies the TCP MSS option during connection establishment
- tun-mtu: sets the maximum size of tunnel packets
Add these to server.conf and client.conf:
socket-flags TCP_NODELAY
sndbuf 393216
rcvbuf 393216
push "sndbuf 393216"
push "rcvbuf 393216"
For CentOS systems, also configure sysctl:
# /etc/sysctl.conf additions
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
UDP performs better than TCP-over-TCP in high-latency scenarios:
proto udp
cipher AES-128-CBC
auth SHA1
comp-lzo no
Avoid TLS handshake timeouts with:
hand-window 30
tran-window 60
For accurate measurements over VPN:
# Server side:
iperf3 -s -p 5202

# Client side (with proper window sizing):
iperf3 -c vpnserver -p 5202 -t 60 -i 5 -w 512K -P 4
Key interpretation metrics:
- Look for consistent transfer rates without sudden drops
- Check retransmit count (should be < 1% of total packets)
- Compare with non-VPN baseline (expect ~15-20% overhead)
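The 1% retransmit threshold can be checked with a small helper against the totals iperf3 reports (`retransmits` and `bytes` under `end.sum_sent` in `iperf3 -J` output). A sketch: `retx_pct` is a hypothetical helper name, and counting segments as bytes divided by MSS is an approximation (the 1200-byte MSS matches the mssfix value used here):

```shell
#!/bin/sh
# Approximate retransmit rate as a percentage of sent segments.
# Usage: retx_pct <retransmits> <bytes_sent> <mss>
retx_pct() {
    awk -v r="$1" -v b="$2" -v m="$3" \
        'BEGIN { printf "%.2f\n", 100 * r / (b / m) }'
}
retx_pct 42 50000000 1200   # prints 0.10 (well under the 1% threshold)
```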
Create /usr/local/bin/vpn-mtu-test:

#!/bin/bash
# Find the largest ICMP payload that passes with DF set (path MTU = payload + 28)
SERVER_IP=$(awk '/^remote /{print $2}' /etc/openvpn/client.conf)
SIZE=1472
until ping -M do -s "$SIZE" -c 3 -q "$SERVER_IP" >/dev/null 2>&1; do SIZE=$((SIZE - 8)); done
echo "Path MTU: $((SIZE + 28))"

Run it before OpenVPN starts to set tun-mtu dynamically.
For Postfix/Sendmail over VPN:
# Postfix main.cf additions
smtp_bind_address = 10.8.0.2
smtp_connect_timeout = 30s
smtp_mx_session_limit = 1
Consider using mutt with these settings for manual testing:

set timeout=60
set sendmail_wait=-1
When dealing with BGAN satellite connections exhibiting ~3 second latency, traditional TCP/IP stack behavior becomes problematic. The fundamental issue stems from:
- TCP retransmission timers misinterpreting long, variable delays as packet loss
- Buffer bloat in both client and satellite modem equipment
- Inefficient window scaling for long fat networks (LFNs)
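The 393216-byte buffers used in the configurations here make sense against the bandwidth-delay product of such a link. A quick sketch, assuming a 492 kbit/s BGAN bearer and a 3000 ms RTT (illustrative numbers, not measured values):

```shell
#!/bin/sh
# Bandwidth-delay product: the bytes "in flight" the TCP window must cover.
# Usage: bdp <bandwidth_kbit_per_s> <rtt_ms>
bdp() {
    awk -v bw="$1" -v rtt="$2" 'BEGIN { printf "%d\n", (bw * 1000 / 8) * (rtt / 1000) }'
}
bdp 492 3000   # prints 184500 (~180 KB; the 384 KB buffers leave ~2x headroom)
```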
The fragment and mssfix directives serve distinct purposes:
# Fragment larger packets (before encryption)
fragment 1200
# Automatically adjusts TCP MSS (Maximum Segment Size)
mssfix 1200
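One way to see how the two interact: mssfix keeps TCP from generating packets that fragment would otherwise have to split. A sketch of the arithmetic (`frags` is a hypothetical helper; the sizes are illustrative):

```shell
#!/bin/sh
# How many datagrams the "fragment" directive produces for a tunnel
# packet of a given size (ceiling division).
frags() {
    awk -v pkt="$1" -v frag="$2" 'BEGIN { print int((pkt + frag - 1) / frag) }'
}
frags 1400 1200   # prints 2: an oversized packet is split in two
frags 1200 1200   # prints 1: an mssfix-capped segment fits in one datagram
```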
For satellite links, use this optimized OpenVPN server configuration:
# MTU settings
tun-mtu 1500
mtu-disc yes
mssfix 1200
# fragment 1200 is only valid with "proto udp"; with TCP, rely on mssfix
# (link-mtu is derived automatically and must not be set alongside tun-mtu)

# TCP performance tuning
sndbuf 393216
rcvbuf 393216
push "sndbuf 393216"
push "rcvbuf 393216"

# Protocol selection
proto tcp
tcp-nodelay
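A matching client-side fragment might look like this (a sketch; the remote address and port are placeholders, and the MTU values must mirror the server's):

```
# client.conf counterpart (placeholder remote)
client
proto tcp-client
remote vpn.example.com 1194
tun-mtu 1500
mssfix 1200
sndbuf 393216
rcvbuf 393216
```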
Add these kernel parameters in /etc/sysctl.conf:
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_low_latency = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
For measurements that sidestep the TCP-over-TCP issues above, consider:
# UDP-based testing (bypasses TCP issues)
iperf3 -c server -u -b 1M -t 60 -i 5
# SCP alternative with progress monitoring
rsync --progress --partial -avz /path/to/file user@server:/dest
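On links that drop out for minutes at a time, the rsync call above benefits from a retry wrapper so --partial can resume where the transfer left off. A minimal sketch (`retry` is a hypothetical helper; the host and paths are placeholders):

```shell
#!/bin/sh
# Retry a command up to $1 times, sleeping $2 seconds between attempts.
retry() {
    max=$1; delay=$2; shift 2
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep "$delay"
    done
}
# Example (placeholder host/paths):
# retry 20 15 rsync --progress --partial -avz /path/to/file user@server:/dest
```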
For extremely challenging links, consider:
- Switching to UDP protocol with forward error correction
- Implementing application-layer acceleration
- Using alternative protocols like QUIC or SCTP