When transferring large files (300GB+) via SCP over 802.11g, we typically expect around 20Mbit/s based on network benchmarks. Yet real-world transfers sometimes drop to sub-300KB/s despite adequate disk I/O on both ends (45MB/s read on the source, 30MB/s write on the destination).
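To put those numbers in perspective (rough arithmetic, assuming the usual real-world 802.11g ceiling rather than the nominal 54Mbit/s):
# 20Mbit/s ≈ 2.5MB/s, so 300GB should take roughly 300*1024 / 2.5 ≈ 123,000s ≈ 34 hours
# At 300KB/s the same transfer stretches to roughly 12 days
Before touching SCP itself, confirm those baselines: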
# Verify network throughput
iperf -c destination_host -t 30
# Check source disk read performance
dd if=source_file of=/dev/null bs=1M count=1024 status=progress
# Test destination write speed
dd if=/dev/zero of=test_file bs=1M count=1024 status=progress
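Note that the write test above can be flattered by the page cache; one way to make dd include the final flush in its reported rate is conv=fdatasync:
# Write test that counts the time to flush data to disk
dd if=/dev/zero of=test_file bs=1M count=1024 conv=fdatasync status=progress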
Cipher Selection: SCP's default encryption can be CPU-intensive. Try faster ciphers:
scp -c aes128-gcm@openssh.com large_file.tar user@remote:/backup/
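Before settling on a cipher, it's worth confirming both ends actually support it; one quick check (exact debug output varies by OpenSSH version):
# Ciphers the local client offers
ssh -Q cipher
# The cipher actually negotiated with the remote end
ssh -vv user@remote exit 2>&1 | grep -i cipher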
rsync with Compression:
rsync -avzP --partial large_file.tar user@remote:/backup/
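rsync can also be pointed at the cheaper cipher from above while keeping its resumability; a sketch assuming OpenSSH on both ends:
# Resumable rsync over ssh, forcing the lighter AES-GCM cipher
rsync -aP -e "ssh -c aes128-gcm@openssh.com" large_file.tar user@remote:/backup/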
Segmented transfer with lftp: pscp (from parallel-ssh) fans a copy out to many hosts rather than speeding up a single transfer, so for multi-stream copies of one file, lftp's segmented pget over SFTP is a better fit. Run it from the destination to pull the file in parallel segments (source_host stands for the machine holding the file):
# Pull the file in 4 parallel SFTP segments, resumable with -c
lftp -e "pget -n 4 -c large_file.tar -o /backup/large_file.tar; quit" sftp://user@source_host
For ext4 filesystems, journaling overhead can be removed for the duration of the transfer (this only works on an unmounted filesystem and sacrifices crash safety until re-enabled):
tune2fs -O ^has_journal /dev/sdX
mount -o remount,noatime,nodiratime /backup
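A sketch of reverting the change once the transfer completes:
# After the transfer: unmount, restore the journal, verify, then remount
umount /backup
tune2fs -O has_journal /dev/sdX
e2fsck -f /dev/sdX
mount -o noatime,nodiratime /dev/sdX /backup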
Raise the kernel's TCP buffer limits so the window can grow instead of capping throughput on a lossy WiFi link:
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
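To check whether larger buffers actually help on this particular link, one option is to repeat the earlier iperf test with an explicit socket buffer size:
# Re-run the throughput test with a 4MB TCP buffer
iperf -c destination_host -t 30 -w 4M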
Create a transfer wrapper script:
#!/bin/bash
# Wrapper that applies low-overhead options to every scp invocation.
# Note: with an AEAD cipher like aes128-gcm the MACs setting is ignored, and
# compression only helps when the data is compressible and the CPU has headroom.
SCP_OPTS=(-o Compression=yes
          -o Ciphers=aes128-gcm@openssh.com
          -o MACs=umac-64@openssh.com)
scp "${SCP_OPTS[@]}" "$@"
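Save it as, say, fastscp (the name is arbitrary), make it executable, and use it exactly like scp:
chmod +x fastscp
./fastscp large_file.tar user@remote:/backup/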
Even with those quick fixes in place, 300GB+ transfers over 802.11g can still crawl at sub-300KB/s despite adequate baseline network performance (20Mbit/s via iperf) and sufficient disk I/O (45MB/s read on the USB source, 30MB/s write on the SATA destination). Here's how to troubleshoot systematically beyond the obvious culprits.
On a slow CPU, SCP's encryption can become the bottleneck long before the 802.11g link does. Benchmark a few ciphers directly:
# AES-256-CTR
time scp -c aes256-ctr largefile.tar user@remote:/backup/
# Arcfour is cheap but insecure, and was removed in OpenSSH 7.6 (legacy hosts only)
time scp -c arcfour largefile.tar user@remote:/backup/
# ChaCha20-Poly1305 (the modern OpenSSH default; fast on CPUs without AES-NI)
time scp -c chacha20-poly1305@openssh.com largefile.tar user@remote:/backup/
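To separate raw crypto cost from network effects, the same ciphers can be benchmarked locally with OpenSSL (these are OpenSSL's EVP names, which differ from the SSH cipher names; chacha20-poly1305 needs OpenSSL 1.1.0+):
# Pure CPU throughput of candidate ciphers, no network involved
openssl speed -evp aes-128-ctr
openssl speed -evp aes-256-ctr
openssl speed -evp chacha20-poly1305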
Add these flags to your SCP command for better throughput. Compression is disabled here on the assumption that the CPU, not the 20Mbit/s link, is the bottleneck; re-enable it if your data is highly compressible and CPU headroom is available:
scp -o Compression=no \
-o IPQoS=throughput \
-o BatchMode=yes \
-o ConnectTimeout=30 \
largefile.tar user@remote:/backup/
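If these options help, they can be made the default for the destination via a Host block in ~/.ssh/config instead of retyping them (the alias backup-server is illustrative):
# ~/.ssh/config
Host backup-server
    HostName remote
    User user
    Compression no
    IPQoS throughput
    Ciphers aes128-gcm@openssh.com
After which the transfer becomes: scp largefile.tar backup-server:/backup/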
Check source/target filesystem performance with direct I/O tests:
# Source read test with O_DIRECT
dd if=largefile.tar of=/dev/null iflag=direct bs=1M count=1000
# Target write test with O_DIRECT
dd if=/dev/zero of=/backup/testfile bs=1M count=1000 oflag=direct
When SCP underperforms, consider these alternatives with their respective trade-offs:
# Rsync with compression (better for incremental)
rsync -avz --progress --partial largefile.tar user@remote:/backup/
# Netcat for raw speed (no encryption or integrity checking; trusted networks only)
tar cf - largefile.tar | pv | nc -l 1234   # On source (some netcat builds need -l -p 1234)
nc source.ip 1234 | tar xf -               # On destination
# BBCP for parallel streams
bbcp -s 4 -w 4M largefile.tar user@remote:/backup/
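A middle ground between raw netcat and scp is piping tar through ssh, which keeps encryption but avoids scp's file-by-file protocol overhead; a sketch reusing the lighter cipher:
# Single encrypted stream with a cheap cipher
tar cf - largefile.tar | ssh -c aes128-gcm@openssh.com user@remote 'tar xf - -C /backup/'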
System-level adjustments for better throughput over WiFi (the sysctl lines persist the buffer settings from above across reboots):
# Increase TCP window size
echo "net.ipv4.tcp_window_scaling = 1" >> /etc/sysctl.conf
echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf
echo "net.core.wmem_max = 16777216" >> /etc/sysctl.conf
sysctl -p
# Optimize WiFi MTU (test different values; on modern systems: ip link set dev wlan0 mtu 1472)
ifconfig wlan0 mtu 1472
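One way to sanity-check a candidate MTU is a don't-fragment ping with a matching payload, i.e. the MTU minus 28 bytes of IP and ICMP headers (Linux iputils syntax):
# For a 1472-byte MTU: 1444-byte payload; failures mean the value is too large for the path
ping -M do -s 1444 -c 4 destination_host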
For 300GB+ transfers over 802.11g, physical factors become significant:
- Check for WiFi interference and signal quality: iwlist wlan0 scan | grep -i quality
- Monitor CPU temperature during transfers, since thermal throttling cuts crypto throughput (see the monitoring sketch after this list)
- Check USB drive sleep/power management: hdparm -B /dev/sdX reports the current APM level, and hdparm -B 255 /dev/sdX disables power management for the duration of the transfer
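A simple way to watch for thermal throttling while the transfer runs (assumes the lm-sensors package is installed):
# Refresh CPU temperatures and clock speeds every 5 seconds
watch -n 5 'sensors; grep "cpu MHz" /proc/cpuinfo'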
Finally, create a simple monitor that logs how much data is queued unacknowledged (Send-Q) on the SSH connection; a persistently large value points at the link rather than the disks:
#!/bin/bash
# Log the Send-Q of TCP connections to/from port 22 every 5 seconds
while true; do
    ts=$(date +%s)
    netstat -tn | awk -v t="$ts" '$4 ~ /:22$/ || $5 ~ /:22$/ {print t "," $3}' >> scp_throughput.log
    sleep 5
done
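Run the monitor in the background for the duration of the transfer, then stop it and inspect the log (the script name scp_monitor.sh is illustrative):
./scp_monitor.sh &
MONITOR_PID=$!
scp largefile.tar user@remote:/backup/
kill "$MONITOR_PID"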