When transferring files via SFTP with FileZilla, many users hit an unexplained throughput ceiling of roughly 1.3 MiB/s, even when the available network bandwidth should support far higher speeds. The limitation persists across different file transfers, and concurrent sessions each reach the same per-session maximum rather than sharing the total available bandwidth.
Several factors contribute to this performance bottleneck:
- TCP Window Size: The default TCP window size (typically 64 KB) combined with high latency imposes a hard ceiling: maximum throughput = window size / RTT, so 64 KB / 0.18 s ≈ 355 KB/s in theory. The fact that FileZilla does better than this suggests it negotiates larger windows (TCP window scaling) or uses multiple TCP connections.
- SSH Encryption Overhead: The encryption/decryption process introduces computational latency, though CPU usage appears low in your case.
- Single-Threaded Transfer: Traditional SFTP implementations process files sequentially without parallel streams.
The bandwidth-delay product (BDP) formula explains much of this limitation:
```python
# Python calculation example
rtt = 0.18            # seconds
window_size = 65536   # bytes
max_throughput = window_size / rtt
print(f"Theoretical max: {max_throughput/1024:.2f} KB/s")
```
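Running this prints `Theoretical max: 355.56 KB/s`, matching the ceiling quoted above; anything noticeably faster implies a larger effective window or multiple parallel streams.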
Try these server-side SSH configuration adjustments in /etc/ssh/sshd_config:
```
# Note: OpenSSH has no "TCPWindowSize" directive; TCP window sizes are tuned
# at the kernel level (see the sysctl settings further down)

# Allow more concurrent unauthenticated connections (useful for parallel sessions)
MaxStartups 100:30:100

# Encryption tuning (choose faster algorithms)
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1
```
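After editing, it's worth validating the file and reloading the daemon; on Debian the service is typically named `ssh`:

```bash
# Check the configuration syntax, then reload sshd (Debian service name assumed to be "ssh")
sudo sshd -t
sudo systemctl reload ssh
```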
Client-side (FileZilla settings.xml):
```xml
<Setting name="Number of buffers">16</Setting>
<Setting name="Buffer size">262144</Setting>
```
For Windows-compatible solutions with parallel transfer support:
- lftp (Windows build):
  ```bash
  # pget downloads the file in parallel segments (8 here); plain "get" is single-stream
  lftp -e "pget -n 8 largefile.iso; bye" sftp://user@server
  ```
- Cyberduck: GUI client with transfer acceleration
- pscp (PuTTY): command-line alternative; it transfers a single stream at a time, so the usual workaround is to run several instances side by side (see the sketch after this list)
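If pscp is the tool at hand, a pragmatic way to use more of the link is simply to launch several copies at once, each fetching a different file or pre-split chunk. A minimal Windows `cmd` sketch; the host, paths, and file names are placeholders:

```bat
:: Launch three pscp instances in parallel; each window closes when its transfer finishes
start "" pscp user@server:/data/part1.bin C:\downloads\
start "" pscp user@server:/data/part2.bin C:\downloads\
start "" pscp user@server:/data/part3.bin C:\downloads\
```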
To diagnose whether the limitation is protocol-related or network-related:
```bash
# Network baseline test (using iperf3)
# Server:
iperf3 -s
# Client: 8 parallel streams for 30 seconds
iperf3 -c server_ip -P 8 -t 30

# Compare against the encrypted path by forwarding iperf3's port through SSH
# and running the same test over the tunnel:
ssh -N -L 5201:localhost:5201 user@server_ip &
iperf3 -c localhost -P 8 -t 30
# If the raw test is far above 1.3 MiB/s but the tunnelled run is not,
# the bottleneck is in the SSH layer rather than the network path.
```
When large file transfers are necessary, consider these approaches:
```bash
# Compress on the fly (fewer bytes to encrypt and send)
ssh user@server "tar czf - /path/to/files" | dd of=backup.tar.gz

# Split-file approach (manual parallelization): split on the server,
# fetch the pieces over concurrent connections, then reassemble
ssh user@server "split -b 100M largefile.iso largefile.part."
for part in $(ssh user@server "ls largefile.part.*"); do
    rsync -avz "user@server:$part" . &
done
wait
cat largefile.part.* > largefile.iso
```
When dealing with SFTP transfers over high-latency connections (180-190ms in this case), the TCP window size becomes a critical factor. The default settings in many SFTP clients don't adequately account for network conditions, leading to suboptimal throughput. Here's the math behind it:
```
# Bandwidth-Delay Product (BDP) = bandwidth x round-trip time (RTT)
# For the observed 1.3 MB/s (~10.4 Mbps) at 190 ms RTT:
#   BDP = 10.4 Mbps x 0.19 s ≈ 1.976 Mb ≈ 247 KB
```
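Reading the formula the other way around shows how much in-flight data (window) is needed to sustain a given rate at this latency. A small sketch of that calculation; the 10 MiB/s target is just an illustrative figure:

```python
# Window needed to sustain a given rate at a given RTT: window = rate * RTT
rtt = 0.19                      # seconds (190 ms)
observed_rate = 1.3 * 1024**2   # ~1.3 MiB/s observed ceiling
target_rate = 10 * 1024**2      # hypothetical 10 MiB/s target

print(f"Effective window implied by current rate: {observed_rate * rtt / 1024:.0f} KiB")
print(f"Window needed for 10 MiB/s:               {target_rate * rtt / 1024:.0f} KiB")
```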
FileZilla's performance can be improved by modifying its internal buffer settings. Create or edit FileZilla's configuration file:
```xml
<FileZilla3>
  <Settings>
    <Setting name="Buffersize">4194304</Setting>        <!-- 4 MB internal buffer -->
    <Setting name="Socket buffersize">1048576</Setting> <!-- 1 MB socket buffer -->
  </Settings>
</FileZilla3>
```
On the OpenSSH server (Debian), modify /etc/ssh/sshd_config:
```
# (TCP window sizing is a kernel-level setting; see the sysctl block below)
TCPKeepAlive yes

# SFTP subsystem settings (permissive umask, INFO-level logging)
Subsystem sftp /usr/lib/openssh/sftp-server -u 0000 -l INFO
```
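To confirm which values the running daemon actually applies, OpenSSH can dump its effective configuration:

```bash
# Dump the effective sshd configuration and check the relevant directives
sudo sshd -T | grep -Ei 'ciphers|macs|subsystem|tcpkeepalive'
```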
For Windows users needing segmented downloads, consider these open-source alternatives:
```bash
# lftp (Windows build): segmented, resumable download over SFTP
lftp -c "open sftp://user:pass@server; pget -c -n 4 /path/to/file"
```
For persistent connections, consider these sysctl tweaks on Linux servers:
```
# /etc/sysctl.conf additions
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
```
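The new values take effect after reloading sysctl (or on the next reboot):

```bash
# Apply the new kernel settings and confirm the active values
sudo sysctl -p
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```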
Here's a Python script to test various transfer methods:
```python
import os
import time

import paramiko


def test_sftp_speed(host, username, password, filepath):
    transport = paramiko.Transport((host, 22))
    transport.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)

    file_size = sftp.stat(filepath).st_size   # remote file size in bytes
    start = time.time()
    sftp.get(filepath, os.devnull)            # discard the data (os.devnull works on Linux and Windows)
    elapsed = time.time() - start

    print(f"Transfer speed: {file_size / elapsed / 1024 / 1024:.2f} MiB/s")
    sftp.close()
    transport.close()
```
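A minimal invocation sketch; the host, credentials, and remote path below are placeholders to substitute with real values:

```python
# Hypothetical invocation; replace the connection details and remote path with real ones
test_sftp_speed("server.example.com", "user", "secret", "/path/to/largefile.iso")
```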