When automating file transfers in Linux environments, developers often need to mirror local directory structures to remote FTP servers. The challenge intensifies when restricted to basic FTP clients (`ftp` or `lftp`) without modern alternatives like `rsync` or SSH access.
The `lftp` client provides superior functionality compared to basic `ftp`, especially with its mirroring capabilities:
lftp -e "mirror -R -v --delete /local/path /remote/path; quit" ftp://user:pass@server.com
Here's a production-ready script that handles edge cases:
#!/bin/bash
LOCAL_DIR="/path/to/local"
REMOTE_DIR="/path/to/remote"
FTP_USER="username"
FTP_PASS="password"
FTP_HOST="ftp.example.com"
lftp -u "${FTP_USER},${FTP_PASS}" "${FTP_HOST}" << EOF
set ftp:ssl-allow no
set mirror:parallel-directories yes
mirror --verbose --reverse --delete --parallel=4 ${LOCAL_DIR} ${REMOTE_DIR}
quit
EOF
- `--reverse`: Upload (mirror local to remote)
- `--delete`: Remove extra files on the server
- `--parallel`: Transfer multiple files simultaneously
- `--verbose`: Show detailed transfer progress
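The scripted form above works because the shell expands `${LOCAL_DIR}` and the other variables inside the unquoted heredoc before `lftp` reads the input. A minimal sketch of that behavior, using `cat` to make the expansion visible:

```shell
# An unquoted heredoc delimiter (EOF rather than 'EOF') lets the
# shell expand variables before the consuming command sees them.
LOCAL_DIR="/path/to/local"
cat << EOF
mirror --reverse ${LOCAL_DIR} /remote
EOF
# prints: mirror --reverse /path/to/local /remote
```

Quoting the delimiter (`<< 'EOF'`) would suppress expansion and pass the literal `${LOCAL_DIR}` through to `lftp` instead.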
For syncing only changed files, add these options:
mirror --only-newer --ignore-time --continue ...
Optimize performance for massive directory trees:
set net:connection-limit 5
set net:limit-total-rate 1048576 # 1MB/s total
mirror --use-pget-n=5 ... # Split large files
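`net:limit-total-rate` takes bytes per second, so the `1048576` above really is 1 MB/s; shell arithmetic confirms the conversion:

```shell
# Rate limits are specified in bytes per second;
# derive common values with shell arithmetic.
echo $((1024 * 1024))   # 1 MB/s  -> 1048576
echo $((512 * 1024))    # 512 KB/s -> 524288
```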
Always implement proper error checking:
if ! lftp -e "..."; then
echo "Transfer failed!" >&2
exit 1
fi
For environments where `lftp` isn't available, a basic `ftp` workaround:
find /local/path -type f -exec sh -c '
file="$1"
remote="${file#/local/path/}"
ftp -n <<END
open ftp.example.com
user username password
put "$file" "/remote/path/$remote"
quit
END
' _ {} \;
This primitive approach reconnects once per file and lacks directory creation and delete functionality.
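The core of that loop is the `${file#/local/path/}` expansion, which strips the local prefix so the remote path mirrors the local layout. In isolation (the paths here are just examples):

```shell
# ${var#pattern} removes the shortest matching prefix from $var.
file="/local/path/docs/readme.txt"
remote="${file#/local/path/}"
echo "$remote"   # docs/readme.txt
```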
Never store passwords in scripts. Instead, use `.netrc`:
machine ftp.example.com
login username
password secret
Then call with just `lftp ftp.example.com`.
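The classic `ftp` client typically refuses to use a `.netrc` containing a password unless the file is readable only by its owner. A sketch that writes the entry into a throwaway directory (the hostname and credentials are placeholders):

```shell
# Create a .netrc entry and restrict it to the owner;
# mktemp -d stands in for the real $HOME here.
HOME_DIR="$(mktemp -d)"
cat > "$HOME_DIR/.netrc" << 'EOF'
machine ftp.example.com
login username
password secret
EOF
chmod 600 "$HOME_DIR/.netrc"
ls -l "$HOME_DIR/.netrc"
```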
When dealing with legacy FTP servers that don't support modern protocols like SFTP or rsync, we often need to implement recursive directory uploads using only basic FTP clients. Here's a robust solution using standard Linux tools.
The `lftp` command provides built-in mirroring capabilities that handle recursion automatically. This single command handles both the initial upload and subsequent incremental updates:
lftp -u username,password -e "mirror -R --delete --verbose /local/path /remote/path" ftp.example.com
Key flags:
- `-R`: Reverse mirror (upload instead of download)
- `--delete`: Remove remote files not present locally
- `--verbose`: Show detailed transfer progress
For environments where `lftp` isn't available, we can use standard `ftp` with a generated script:
#!/bin/bash
LOCAL_DIR="/path/to/local"
REMOTE_DIR="/path/to/remote"
FTP_HOST="ftp.example.com"
FTP_USER="username"
FTP_PASS="password"
# Generate FTP script
{
    echo "open $FTP_HOST"
    echo "user $FTP_USER $FTP_PASS"
    echo "binary"
    echo "cd $REMOTE_DIR"
    # -mindepth 1 skips the top-level directory itself,
    # whose %P expands to an empty string
    find "$LOCAL_DIR" -mindepth 1 -type d -printf "mkdir %P\n"
    find "$LOCAL_DIR" -type f -printf "put %p %P\n"
    echo "bye"
} > ftp_script.ftp
# Execute the script
ftp -n < ftp_script.ftp
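You can sanity-check the generated command list before ever connecting by pointing the generator at a scratch tree. A sketch, with made-up directory names and the output kept on disk for inspection:

```shell
# Build a scratch tree and generate the FTP command list from it,
# without connecting to any server.
LOCAL_DIR="$(mktemp -d)"
mkdir -p "$LOCAL_DIR/docs"
touch "$LOCAL_DIR/docs/readme.txt"
{
    echo "cd /remote/path"
    # -mindepth 1 skips the top directory (empty %P)
    find "$LOCAL_DIR" -mindepth 1 -type d -printf "mkdir %P\n"
    find "$LOCAL_DIR" -type f -printf "put %p %P\n"
    echo "bye"
} > commands.ftp
cat commands.ftp
```

Reviewing `commands.ftp` first catches path mistakes cheaply; `ftp -n < commands.ftp` would then replay it for real.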
For directories with thousands of files, we should implement chunking to avoid timeout issues:
lftp -u user,pass ftp.example.com << EOF
set ftp:list-options -a
set cmd:parallel 10
set cmd:queue-parallel 5
mirror -R --parallel=5 --verbose /local /remote
EOF
Enhance the script with proper error checking:
lftp -u user,pass ftp.example.com > upload.log 2>&1 << EOF
set xfer:log 1
set xfer:clobber on
set ftp:ssl-allow no
set ftp:passive-mode on
mirror -R --only-newer --verbose /local /remote
EOF
# Check exit status
if [ $? -ne 0 ]; then
echo "Upload failed" >&2
exit 1
fi
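The same check-and-bail pattern works for any transfer command; with a stand-in for the `lftp` call (a hypothetical `upload` function backed by `false` here) the control flow is:

```shell
# Generic pattern: run the transfer, inspect its exit status,
# and report failure on stderr.
upload() { false; }    # stand-in for the real lftp invocation
if ! upload; then
    echo "Upload failed" >&2
    status=1
else
    status=0
fi
echo "exit status: $status"   # prints: exit status: 1
```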
For scheduled uploads, create a wrapper script that prevents overlapping runs:
#!/bin/bash
LOCK_FILE="/var/run/ftp_upload.lock"
# Prevent concurrent execution
if [ -e "$LOCK_FILE" ]; then
    exit 0
fi
touch "$LOCK_FILE"
# Remove the lock even if the script is interrupted
trap 'rm -f "$LOCK_FILE"' EXIT
# Main upload command
lftp -u user,pass ftp.example.com -e "mirror -R --only-newer /local /remote; quit"
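A `touch`-based lock has a race between the existence check and the file creation; `mkdir` is atomic, so it can serve as a simple mutex. A sketch using a throwaway path:

```shell
# mkdir either creates the directory or fails atomically,
# so exactly one caller can hold the lock; rmdir releases it.
LOCK_DIR="$(mktemp -d)/ftp_upload.lock"
if mkdir "$LOCK_DIR" 2>/dev/null; then
    echo "lock acquired"
fi
if mkdir "$LOCK_DIR" 2>/dev/null; then
    echo "should not happen"
else
    echo "lock busy"
fi
rmdir "$LOCK_DIR"
# prints: lock acquired
#         lock busy
```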