When transferring large files between servers or to remote locations, uncontrolled SCP or Rsync operations can saturate your network bandwidth. This becomes particularly problematic in:
- Production environments where other services share bandwidth
- Multi-tenant systems where fair usage is required
- Cross-data-center transfers with latency-sensitive applications
SCP does have a built-in speed limit parameter: the -l option, which takes the cap in kilobits per second:
scp -l 1000 user@remote:/path/to/file ./
# -l limits bandwidth to 1000 Kbit/s (approximately 125 KB/s)
# This affects only the SSH connection used by SCP
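Because -l is expressed in kilobits per second, multiply your target rate in KB/s by 8. For example, to cap a database dump copy at roughly 500 KB/s (the file name and host here are placeholders):
scp -l 4000 backup_db.sql admin@db-server:/backups/
# 500 KB/s x 8 = 4000 Kbit/s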
For more precise control, combine SSH with pv (pipe viewer):
ssh user@remote "cat /path/to/large/file" | pv -L 500k > local_file
# -L limits to 500 kilobytes per second
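The same approach works in the upload direction by placing pv in front of ssh (paths and host are placeholders):
pv -L 500k /path/to/large/file | ssh user@remote "cat > /path/to/remote/copy"
# pv throttles the stream before it enters the SSH connection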
Rsync offers native bandwidth limiting with the --bwlimit option:
rsync -avz --bwlimit=500 /source/dir/ user@remote:/dest/dir/
# Limits transfer to 500 KB/s
# Works for both upload and download directions
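Newer rsync releases (3.1.0 and later, if memory serves) also accept a unit suffix and fractional values for --bwlimit, which saves the mental conversion:
rsync -avz --bwlimit=1.5m /source/dir/ user@remote:/dest/dir/
# 1.5m is roughly 1.5 MB/s; older versions only accept a plain number in KB/s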
Rsync has no built-in time-of-day scheduling for --bwlimit. Time-based throttling (useful for production backups) is normally achieved by wrapping the transfer in a script or cron job that picks the limit based on the current hour, as in the sketch below.
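A minimal sketch of such a wrapper; the 8AM/12PM boundaries, the limits, the paths, and the host are all placeholders to adjust:
#!/bin/bash
# throttle-backup.sh - choose an rsync bandwidth cap by time of day
HOUR=$((10#$(date +%H)))          # current hour, forced to base 10
if [ "$HOUR" -ge 8 ] && [ "$HOUR" -lt 12 ]; then
    LIMIT=1000                    # KB/s from 8AM to 12PM
else
    LIMIT=500                     # KB/s the rest of the day
fi
exec rsync -avz --bwlimit="$LIMIT" /source/ user@remote:/dest/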
When you need network-wide control, consider these alternatives:
# Using trickle (user-space bandwidth shaper; install with: sudo apt-get install trickle)
trickle -s -u 500 -d 500 scp file user@remote:/path/
# -s runs standalone, -u/-d cap upload/download in KB/s
# Using tc (traffic control) to shape all outbound traffic on eth0
sudo tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
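The tbf rule above throttles everything leaving eth0. If only SSH traffic should be shaped, an htb class plus a u32 filter can target port 22. This is a rough sketch; the interface name, rates, and the assumption that all other traffic may run at full speed are placeholders:
sudo tc qdisc add dev eth0 root handle 1: htb default 20
sudo tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit   # class for SSH traffic
sudo tc class add dev eth0 parent 1: classid 1:20 htb rate 1000mbit           # everything else
sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 22 0xffff flowid 1:10
# Undo either setup with: sudo tc qdisc del dev eth0 root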
Always verify your bandwidth limits are working:
# For SSH/SCP transfers
iftop -i eth0 -f 'port 22'
# For general bandwidth monitoring
nload -u m eth0
For enterprise environments, consider these additional approaches:
OpenSSH's client configuration (~/.ssh/config) has no bandwidth-limit option, so persistent per-host caps are usually handled with a small shell wrapper or alias around scp or rsync, as sketched below.
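A minimal sketch of such a wrapper, assuming the host name, destination path, and rate are placeholders you would adjust; add it to ~/.bashrc or a small script on your PATH:
# Always copy to the backup server at ~500 KB/s (4000 Kbit/s)
scp_backup() {
    scp -l 4000 "$@" backup-server:/backups/
}
# Usage: scp_backup nightly_dump.sql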
# Combining with ionice for disk I/O control
ionice -c 2 -n 7 rsync --bwlimit=500 -av /source/ dest:/backup/
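If CPU contention is also a concern, nice can be stacked on top of the same command (same placeholder paths as above):
nice -n 19 ionice -c 2 -n 7 rsync --bwlimit=500 -av /source/ dest:/backup/
# nice lowers CPU priority, ionice lowers disk I/O priority, --bwlimit caps the network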
Remember that actual transfer speeds may vary slightly due to encryption and protocol overhead as well as network conditions. For critical operations, test different values beforehand to find the right balance between transfer speed and system responsiveness.
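A quick way to run that test with throwaway data rather than production files (host and paths are placeholders):
# Create a 100 MB test file and time a capped copy
dd if=/dev/zero of=/tmp/testfile bs=1M count=100
time scp -l 8000 /tmp/testfile user@remote:/tmp/
# 8000 Kbit/s is about 1 MB/s, so the copy should take roughly 100 seconds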