When transferring files between servers using SCP, the default behavior copies all files regardless of their modification status. For large directories with frequent updates, this creates unnecessary network overhead and wastes time transferring unchanged files.
The most efficient solution is to use rsync instead of SCP when you need to transfer only modified files:
rsync -avz --progress /source/directory/ username@domain.net:/destination/directory/
Key rsync options:
- -a: Archive mode (preserves permissions, ownership, timestamps)
- -v: Verbose output
- -z: Compression during transfer
- --progress: Shows transfer progress
If you must use SCP, you can combine it with find to select only modified files:
find /source/directory -type f -mtime -1 -exec scp -C {} username@domain.net:/destination/directory \;
This example transfers only files modified in the last day (-mtime -1). The -C flag enables compression.
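The -mtime -1 test matches files modified less than 24 hours ago, which you can confirm locally (hypothetical /tmp paths; "3 days ago" uses GNU touch date syntax):

```shell
mkdir -p /tmp/mtime_demo
touch /tmp/mtime_demo/fresh.txt
touch -d "3 days ago" /tmp/mtime_demo/stale.txt
# Lists only fresh.txt; stale.txt is older than 24 hours
find /tmp/mtime_demo -type f -mtime -1
```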
For more control over file transfers:
# Transfer files modified within last 2 hours
rsync -avz --progress --files-from=<(find /source -type f -mmin -120) / username@domain.net:/destination/
# Exclude certain file patterns
rsync -avz --progress --exclude='*.tmp' --exclude='*.log' /source/ username@domain.net:/destination/
For scheduled transfers of modified files, consider setting up a cron job:
# Daily at 2am
0 2 * * * rsync -avz --delete /source/ username@domain.net:/destination/ > /var/log/sync.log 2>&1
When transferring sensitive data:
# Use SSH key authentication
rsync -avz -e "ssh -i /path/to/private_key" /source/ username@domain.net:/destination/
# Limit bandwidth usage
# (--bwlimit is specified in KB/s, so this caps the transfer at ~1 MB/s)
rsync --bwlimit=1000 -avz /source/ username@domain.net:/destination/
When using the standard SCP command like:
scp -rc blowfish /source/directory/* username@domain.net:/destination/directory
you're transferring all files regardless of their modification status. This becomes inefficient when dealing with large directories where only a few files have changed.
SCP itself doesn't have a built-in --update flag like cp, but there are effective workarounds.
For most use cases, rsync is the better tool for this job:
rsync -avz --progress --update /source/directory/ username@domain.net:/destination/directory
Key flags:
- -a: Archive mode (preserves permissions, etc.)
- -v: Verbose
- -z: Compression
- --update: Skip files that are newer on the receiver
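A quick local demonstration of the --update rule (paths are illustrative; rsync works between local directories too):

```shell
mkdir -p /tmp/usrc /tmp/udst
echo new > /tmp/usrc/a.txt
touch -d "yesterday" /tmp/usrc/a.txt    # source copy is older...
echo keep > /tmp/udst/a.txt             # ...receiver copy is newer
rsync -a --update /tmp/usrc/ /tmp/udst/
# /tmp/udst/a.txt still contains "keep": the newer receiver file was skipped
```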
If you must use SCP, you can filter modified files first. (Note: the legacy blowfish cipher has been removed from modern OpenSSH, so use -C for compression rather than -c blowfish; -r is also unnecessary here since find already selects regular files.)
find /source/directory -type f -mtime -1 -exec scp -C {} username@domain.net:/destination/directory \;
This transfers only files modified in the last day. Adjust -mtime -1 as needed.
For more precise control, use a script that compares timestamps:
#!/bin/bash
for file in /source/directory/*; do
    if [[ -f "$file" ]]; then
        remote_path="/destination/directory/${file##*/}"
        # Remote mtime in seconds since the epoch; 0 if the file doesn't exist yet
        remote_mtime=$(ssh username@domain.net "stat -c %Y '$remote_path' 2>/dev/null || echo 0")
        local_mtime=$(stat -c %Y "$file")
        # Copy only when the local file is newer (or missing remotely)
        if (( local_mtime > remote_mtime )); then
            scp -C "$file" "username@domain.net:$remote_path"
        fi
    fi
done
While these methods work, remember that each SCP connection has overhead. For frequent transfers of many small files, the initial connection setup time might outweigh the benefits of transferring only modified files. In such cases, consider:
- Archiving files first with tar
- Using SFTP with a persistent connection
- Implementing a proper sync solution like lsyncd
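The tar approach can be sketched as a single pipeline. It is shown locally here for illustration; in practice you would insert an ssh stage (e.g. pipe into ssh username@domain.net "tar -xzf - -C /destination") so the whole batch travels over one connection:

```shell
# Bundle files modified in the last day into one compressed stream, then unpack.
mkdir -p /tmp/tar_src /tmp/tar_dst
echo hello > /tmp/tar_src/recent.txt
cd /tmp/tar_src
find . -type f -mtime -1 -print0 \
  | tar --null -czf - -T - \
  | tar -xzf - -C /tmp/tar_dst
```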
When automating these transfers:
- Use SSH keys instead of passwords
- Consider restricting SSH access to specific IPs
- Monitor transfer logs for unusual activity