During recent network troubleshooting for a Caribbean-based client, I observed a fascinating phenomenon: while Ookla's Speedtest.net reported 60-70 Mbps to Amsterdam, actual file transfers via wget
showed sustained speeds between 1.5 and 22 Mbps. The discrepancy becomes particularly interesting when you look at the measurements themselves:
# Client-side measurement command
wget -O /dev/null --report-speed=bits http://server.example.net/testfile.txt
# Typical output (good case)
22.81M 11.6Mb/s in 17s
# Typical output (bad case)
22.81M 1.5Mb/s in 121s
Speedtest.net's methodology: Creates multiple concurrent connections and measures burst throughput during short test periods (typically 10-30 seconds). It's optimized to measure potential bandwidth capacity rather than sustained transfer rates.
wget's transfer characteristics: By default, wget uses a single TCP connection, making it susceptible to the following (each effect can be observed on a live transfer, as shown after this list):
- TCP slow start limitations
- Packet loss recovery penalties
- Bandwidth-delay product constraints
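A minimal way to watch these effects during a download, assuming a Linux client with iproute2 installed (the IP below is a placeholder for the test server's address):
# Start a transfer in the background
wget -O /dev/null http://server.example.net/testfile.txt &
# Watch cwnd, rtt and retransmission counters evolve during the download
# (replace 203.0.113.10 with the test server's actual IP)
watch -n 1 'ss -tin dst 203.0.113.10'
A congestion window that ramps up slowly, or collapses after retransmissions, is exactly the single-stream behaviour the parallel approaches below work around.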
With 180ms RTT to Amsterdam, the bandwidth-delay product becomes significant. For a 100Mbps connection:
# Calculate BDP in bytes
BDP = (Bandwidth * RTT) / 8
= (100e6 * 0.180) / 8
= 2.25MB
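The same arithmetic, scripted so it can be rerun for other link speeds or RTTs (the values below are simply the figures from above):
# BDP calculator: bandwidth in bits/s, RTT in seconds
BW=100000000
RTT=0.180
awk -v bw="$BW" -v rtt="$RTT" 'BEGIN { printf "BDP = %.2f MB\n", bw * rtt / 8 / 1000000 }'
# Prints: BDP = 2.25 MB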
This means TCP needs at least 2.25MB of window size to fully utilize the pipe. The default Linux TCP window is typically:
# Check current window settings
sysctl net.ipv4.tcp_rmem
# Typical default: 4096 87380 6291456
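The middle value (87380 bytes) is only the initial default; Linux auto-tunes the receive buffer upward toward the maximum, but a connection that stays near the default is capped hard by the 180ms RTT. A quick sanity check of that ceiling:
# Throughput ceiling for a given window: window / RTT
awk 'BEGIN { printf "%.1f Mbps\n", 87380 * 8 / 0.180 / 1000000 }'
# Prints: 3.9 Mbps -- in the same range as the poor wget results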
Option 1: Parallel wget transfers
# Use xargs to create parallel downloads
seq 1 4 | xargs -P4 -I{} wget -O /dev/null --report-speed=bits \
http://server.example.net/testfile.txt
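Note that this measures the aggregate rate of four copies of the same file rather than speeding up any single download. If aria2c happens to be installed, a segmented download of one file is a closer analogue of what multi-stream speed tests do (the output path below is arbitrary):
# Split a single file across 4 connections to the same server
aria2c -x4 -s4 -k1M -d /tmp -o testfile.bin http://server.example.net/testfile.txt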
Option 2: Use iperf3 for controlled testing
# Server side
iperf3 -s
# Client side (with window size adjustment)
iperf3 -c server.example.net -w 2M -t 30 -P 4
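Two variations are worth running as well: -R reverses the test so the server sends to the client (matching the download direction), and a UDP run at a fixed rate reports loss and jitter independently of TCP's congestion control:
# Download direction (server -> client)
iperf3 -c server.example.net -w 2M -t 30 -P 4 -R
# UDP at 100Mbps to expose loss and jitter at line rate
iperf3 -c server.example.net -u -b 100M -t 30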
Option 3: TCP parameter tuning
# Temporary settings for testing
sudo sysctl -w net.ipv4.tcp_window_scaling=1
sudo sysctl -w net.core.rmem_max=4194304
sudo sysctl -w net.core.wmem_max=4194304
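These sysctl changes are lost on reboot. If the tuned values prove useful, they can be persisted; the file name below is just an example:
# Persist the settings across reboots (file name is arbitrary)
cat <<'EOF' | sudo tee /etc/sysctl.d/99-tcp-tuning.conf
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
EOF
sudo sysctl --system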
The speed test service employs several techniques that differ from real-world file transfers (a single-stream comparison is sketched after this list):
- Uses multiple concurrent TCP streams (typically 8-16)
- Employs HTTP compression where possible
- Optimizes for peak rather than sustained throughput
- Uses geographically close servers when available
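A useful cross-check, if the Python speedtest-cli utility is available, is to run the same service in both modes and compare:
# Default multi-stream measurement
speedtest-cli
# Force a single TCP stream, closer to what wget experiences
speedtest-cli --single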
For comprehensive testing, I recommend this diagnostic sequence:
# 1. Measure baseline latency
ping -c 10 ams.server.example.net
# 2. Check for packet loss
mtr --report --report-cycles 10 ams.server.example.net
# 3. Test single-stream TCP performance
wget -O /dev/null --report-speed=bits http://ams.server.example.net/testfile
# 4. Test multi-stream performance
seq 1 8 | xargs -P8 -I{} wget -O /dev/null \
http://ams.server.example.net/testfile.{}
This methodology helps identify whether limitations stem from network conditions or protocol behaviors.
In network diagnostics, we often encounter situations where synthetic speed tests (like Speedtest.net) show significantly better performance than real-world file transfers using tools like wget. The Caribbean fiber connection case demonstrates this vividly:
Speedtest.net results:
- Local ISP: 95 Mbps
- Amsterdam: 60-70 Mbps
Actual wget transfers:
- Raspberry Pi: 1.5-22 Mbps
- Powerful laptop: Up to 30 Mbps
Several technical aspects explain this discrepancy:
TCP Window Scaling
High latency (180ms RTT) significantly impacts TCP throughput. The theoretical maximum throughput can be calculated as:
Throughput = Window Size / RTT
Optimal window size = Bandwidth Delay Product (BDP)
For 100Mbps with 180ms RTT:
BDP = (100,000,000 bits/s * 0.18s) / 8 = 2.25MB
Many systems don't properly auto-tune window sizes for such conditions. Check current settings:
# Linux TCP settings inspection
sysctl net.ipv4.tcp_window_scaling
sysctl net.ipv4.tcp_rmem   # min, default, max receive buffer (bytes)
sysctl net.ipv4.tcp_wmem   # min, default, max send buffer (bytes)
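Window size is not the only cap on a single stream. A widely used rule of thumb (the Mathis formula) also bounds steady-state TCP throughput by packet loss; the 0.1% loss figure below is purely illustrative, not a measurement from this link:
Throughput <= (MSS / RTT) * (1 / sqrt(loss))
           = (1460 bytes * 8 / 0.18s) * (1 / sqrt(0.001))
           ≈ 65 kbps * 31.6
           ≈ 2 Mbps
Even modest loss at 180ms RTT drags a single stream down into the low single-digit Mbps range, which is why the mtr loss check in the script below matters.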
Speedtest Methodology
Speedtest.net uses multiple concurrent connections and optimized test files, while a single wget transfer:
- Uses one TCP connection
- Is subject to full protocol overhead
- Includes actual file I/O operations
To properly diagnose, we need better measurement tools. Here's an improved test script:
#!/bin/bash
# Network measurement script
SERVER="aserv.example.net"
FILE="~myuser/links/M77232917.txt"

echo "=== Single connection test ==="
wget -O /dev/null --report-speed=bits http://${SERVER}/${FILE}

echo "=== Parallel test (4 connections) ==="
# aria2c needs a real download directory, so write to /tmp and discard
aria2c -x4 -k1M -d /tmp -o aria2-test.out --allow-overwrite=true http://${SERVER}/${FILE}
rm -f /tmp/aria2-test.out

echo "=== TCP connection quality ==="
ping -c 10 ${SERVER}
mtr --report --report-cycles 10 ${SERVER}
For high-latency connections, consider these adjustments:
# Optimize TCP for high latency
sudo sysctl -w net.ipv4.tcp_window_scaling=1
sudo sysctl -w net.core.rmem_max=4194304
sudo sysctl -w net.core.wmem_max=4194304
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
sudo sysctl -w net.ipv4.tcp_wmem="4096 87380 4194304"
For more accurate benchmarking:
# iPerf3 testing (requires server setup)
iperf3 -c iperf.server.example.com -p 5201 -R

# HTTP benchmark tools
curl -o /dev/null -w "time_total: %{time_total}s\nspeed_download: %{speed_download} bytes/s\n" http://${SERVER}/${FILE}
The discrepancy stems from fundamental TCP behavior in high-latency environments. While Speedtest.net uses optimizations to show maximum potential bandwidth, real-world single-connection transfers like wget reveal the actual usable throughput under specific network conditions.