When transferring large files internationally via SFTP, the default TCP window configuration often becomes the primary bottleneck. The standard OS X Remote Login SFTP service uses conservative TCP settings that don't account for high-latency scenarios. Here's what's happening mathematically:
# Theoretical maximum throughput calculation:
Throughput (bits/sec) = Window Size (bits) / Round Trip Time (seconds)
# Example with default values:
Window Size = 64 KB (65536 bytes = 524288 bits)
RTT = 300ms (0.3 seconds)
Max Throughput = 524288 / 0.3 ≈ 1.75 Mbps (218 KB/s)
Your observed 50KB/s speed suggests either the window size is smaller than default or packet loss is occurring.
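To see how large a window you actually need, compute the bandwidth-delay product for your link; the figures below are illustrative, not measurements:
# Bandwidth-delay product = target throughput x RTT
# e.g. to sustain 100 Mbit/s at 300 ms RTT:
echo $((100000000 / 8 * 300 / 1000))   # 3750000 bytes, i.e. a window of roughly 3.6 MB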
First, modify the SSH daemon configuration on your OS X server:
# Edit /etc/ssh/sshd_config
Subsystem sftp /usr/libexec/sftp-server -l INFO -u 0000
TCPKeepAlive yes
ClientAliveInterval 60
MaxSessions 10
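Before reconnecting, it's worth a quick syntax check; on OS X, launchd starts sshd per connection for Remote Login, so new sessions should pick up the change without a manual restart:
# Validate the edited configuration
sudo sshd -t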
Then optimize TCP stack parameters in your terminal:
# Adjust TCP window scaling
sudo sysctl -w net.inet.tcp.rfc1323=1
sudo sysctl -w net.inet.tcp.sendspace=1048576
sudo sysctl -w net.inet.tcp.recvspace=1048576
sudo sysctl -w net.inet.tcp.delayed_ack=0
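These settings take effect immediately but do not survive a reboot. You can confirm them right away; persisting them via /etc/sysctl.conf is an assumption that holds on OS X releases that still read that file at boot:
# Confirm the new values
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
# Persist across reboots (older OS X reads this file at boot)
echo "net.inet.tcp.sendspace=1048576" | sudo tee -a /etc/sysctl.conf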
For multi-GB files, implement parallel chunking with lftp:
lftp -p 22 sftp://user@host
# Then, at the lftp prompt:
set sftp:connect-program "ssh -a -x -oTCPKeepAlive=yes"
set net:connection-limit 5
set net:max-retries 3
mirror --parallel=5 --use-pget-n=10 /remote/path /local/path
This transfers up to 5 files in parallel and splits each file into 10 pget segments. For uploads, see the reverse-mode example below.
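If you are pushing files to the server rather than pulling, mirror's reverse mode gives you the same file-level parallelism; pget segmenting only applies to downloads, so it is dropped here:
# At the lftp prompt, for uploads:
mirror -R --parallel=5 /local/path /remote/path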
While maintaining encryption, consider these alternatives:
# Using UDT over SSH tunnel (lower latency impact)
udtcat -s 10M -r 10G -b 9000 ssh://user@host:22//path/to/file > localfile
# Or segmented HTTPS with axel:
axel -n 8 -H "Authorization: Bearer token" https://host/path/file
Both maintain encryption while better handling latency.
If you control the network infrastructure:
- Enable QoS for SSH traffic (DSCP CS1)
- Configure path MTU discovery
- Implement WAN acceleration rules
Example pf rule to clamp the TCP MSS on OS X (where the interface is en0, not eth0):
match out on en0 proto tcp from any to any port 22 scrub (max-mss 1350)
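On OS X the rule belongs in /etc/pf.conf (or an anchor file included from it); a rough sequence to check, load, and enable it:
# Parse only, then load the ruleset and make sure pf is enabled
sudo pfctl -nf /etc/pf.conf
sudo pfctl -f /etc/pf.conf
sudo pfctl -e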
When transferring large files internationally via SFTP, many developers encounter frustratingly slow upload speeds despite having good bandwidth. The core issue stems from SFTP's inherent design:
# Typical SFTP performance limitations:
1. Single-channel operation (no parallel transfers per file)
2. High round-trip time (RTT) sensitivity
3. Small default TCP window sizes
4. Encryption overhead
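Point 2 is easy to quantify: OpenSSH's sftp client keeps a limited number of fixed-size requests in flight (64 requests of 32 KB by default), so the protocol itself caps single-connection throughput at roughly requests x request-size / RTT. A rough calculation at 300 ms RTT:
# Per-connection ceiling with sftp defaults (64 x 32 KB outstanding) at 300 ms RTT
echo $((64 * 32768 * 10 / 3))   # about 7 MB/s, before TCP windowing or loss even matter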
First, let's verify if the bottleneck is truly latency-related:
# Run these tests from your international location:
ping target.server.com
# Note the average RTT (intercontinental links are often well above 150ms)
iperf3 -c target.server.com
# Verify raw TCP throughput without SFTP overhead
# (requires an iperf3 -s instance on the server; it listens on 5201 by default, not port 22)
sftp -v user@target.server.com
# Watch for long pauses between the debug1 lines during session setup
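If RTT looks sane, also rule out packet loss, since even a fraction of a percent hurts badly on long paths:
# A longer ping run reports the loss percentage in its summary
ping -c 100 target.server.com | tail -2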
For macOS Remote Login (OpenSSH-based), try these server-side adjustments in /etc/ssh/sshd_config. Compression, Subsystem and TCPKeepAlive are global directives that can't appear under a Match block, so only the keep-alive interval stays per-user:
# Optimize for large file transfers
# Disable compression for already compressed files
Compression no
Subsystem sftp internal-sftp -u 0000
TCPKeepAlive yes
Match User your_username
    ClientAliveInterval 60
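You can confirm which values sshd actually applies (and catch directives it rejects) by dumping the effective configuration:
# Print the effective server configuration
sudo sshd -T | grep -iE 'compression|clientaliveinterval|tcpkeepalive'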
Client-side optimizations:
# Use these sftp command-line options:
sftp -o "Compression=no" -o "IPQoS=throughput" -o "ServerAliveInterval=60" user@host
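OpenSSH's sftp also exposes the request size (-B, default 32 KB) and the number of outstanding requests (-R, default 64), which directly counters the RTT sensitivity described above; treat the numbers below as starting points rather than tuned values:
# More and larger requests in flight per connection
sftp -B 262144 -R 256 -o "Compression=no" -o "IPQoS=throughput" user@host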
When SFTP can't meet requirements, these encrypted alternatives often perform better:
# Benchmark alternatives like:
1. rsync over SSH (with --partial and --inplace)
rsync -avz --partial --inplace -e "ssh -T -c aes128-ctr" largefile user@host:/path
2. BBCP (for parallel streams)
bbcp -P 2 -w 2M -s 16 file user@host:/path
3. UDP-based tools like UFTP
uftp -u -E aes256 -P "highperf" largefile user@host
For truly massive files, implement parallel chunking:
#!/bin/bash
# Split into 500 MB chunks and upload the parts in parallel; reassembly follows below
split -b 500M hugefile.gz hugefile_part_
for part in hugefile_part_*; do
    sftp -o "Compression=no" -b <(printf 'put %s\n' "$part") user@host &
done
wait
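Once all parts are uploaded, reassemble them on the server and verify integrity end to end; this assumes the chunks landed in the remote home directory and that shasum is available there:
# Reassemble remotely and compare checksums with the local original
shasum -a 256 hugefile.gz
ssh user@host 'cat hugefile_part_* > hugefile.gz && shasum -a 256 hugefile.gz'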
Always verify your improvements:
# Network monitoring during transfer:
iftop -nNP -i en0 -f "port 22"
# A quick batch-mode check of the remote side (batch commands go one per line):
sftp -b <(printf 'progress\nls -l\n') user@host
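Finally, a crude before/after timing of a fixed-size upload makes the improvement concrete; the test file name here is just a placeholder:
# Time the same upload before and after each tuning step
time sftp -o "Compression=no" -b <(echo "put testfile_1g.bin") user@host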