The recurring "[Errno 32] Broken pipe" error during s3cmd uploads typically means the connection between your client and AWS S3 is being dropped mid-transfer, usually because of network instability or connectivity problems. The throttling messages in your logs are s3cmd's automatic retry mechanism deliberately slowing down each attempt; the fact that the upload still fails points to an underlying issue rather than a transient blip.
# Verify your current s3cmd configuration:
s3cmd --dump-config
# Check network MTU (common culprit for broken pipes):
ip link show | grep mtu
# Test raw connectivity to S3 endpoint:
nc -zv s3.amazonaws.com 443
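If the interface MTU looks normal, a don't-fragment ping can reveal a smaller path MTU between you and the endpoint, which is one way broken pipes show up (this assumes Linux iputils ping; 1472 bytes of payload plus 28 bytes of headers equals the standard 1500):
# Probe path MTU with don't-fragment pings:
ping -c 3 -M do -s 1472 s3.amazonaws.com
If these pings fail while smaller payloads succeed, fix the path MTU before blaming s3cmd.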
Add these parameters to your ~/.s3cfg file or command line:
[default]
access_key = YOUR_KEY
secret_key = YOUR_SECRET
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
multipart_chunk_size_mb = 50
recv_chunk = 65536
send_chunk = 65536
max_retries = 20
socket_timeout = 30
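After editing ~/.s3cfg it is worth confirming that s3cmd actually parses the new values; --dump-config prints the effective configuration (the grep pattern below just picks out the settings added above):
# Confirm the new settings are in effect:
s3cmd --dump-config | grep -E 'multipart_chunk_size_mb|socket_timeout|max_retries'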
Using AWS CLI (More Reliable Alternative)
aws s3 cp bkup.tgz s3://mybucket/ \
--region us-east-1 \
--cli-connect-timeout 60 \
--cli-read-timeout 300
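The AWS CLI also lets you tune its own multipart behaviour via ~/.aws/config; the values below are illustrative starting points for a flaky link, not recommendations:
# Cap parallel requests and set an explicit chunk size:
aws configure set default.s3.max_concurrent_requests 4
aws configure set default.s3.multipart_chunksize 64MB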
Parallel Uploads with s4cmd
# Install:
pip install s4cmd
# Execute (-c sets the number of concurrent connections):
s4cmd put bkup.tgz s3://mybucket/ \
-c 4 \
--retry=10 \
--API-ReadTimeout=60
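Before kicking off the large upload, a quick listing confirms that s4cmd can reach the bucket (it generally reuses the keys from ~/.s3cfg or the standard AWS environment variables, but verify this for your setup):
# Sanity check before the real upload:
s4cmd ls s3://mybucket/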
If the issue persists after configuration changes:
- Test with different S3 regions (sometimes regional endpoints have better connectivity)
- Configure TCP keepalive settings (requires root):
echo 30 > /proc/sys/net/ipv4/tcp_keepalive_time
- Consider using S3 Transfer Acceleration endpoint:
--host-bucket=%(bucket)s.s3-accelerate.amazonaws.com
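Note that the accelerate endpoint only works once Transfer Acceleration has been enabled on the bucket, a one-time setting that carries extra per-GB cost; with the AWS CLI that looks like:
aws s3api put-bucket-accelerate-configuration --bucket mybucket --accelerate-configuration Status=Enabled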
s3cmd -v --debug put bkup.tgz s3://mybucket/ 2>&1 | tee upload.log
# Important flags to monitor:
# -v / --verbose: progress and informational messages
# -d / --debug: full debug output, including HTTP traffic
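Once the verbose run finishes, a quick grep over the captured log helps pinpoint where the connection drops (the pattern list is only a starting point):
# Pull out the failure-related lines:
grep -inE 'broken pipe|error|warning|throttl' upload.log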
The broken pipe error (Errno 32) with s3cmd typically indicates a network connectivity issue between your server and AWS S3. The upload fails even for moderately sized files (around 100 MB), with s3cmd automatically retrying at progressively slower speeds until it gives up completely.
From experience, these are the most likely causes:
1. Network instability or high latency
2. Server-side firewall restrictions
3. AWS API rate limiting
4. Outdated s3cmd version (1.0.1 is quite old)
5. DNS resolution problems
6. Incorrect multipart upload configuration
First, update s3cmd:
pip install --upgrade s3cmd
Adjust the multipart chunk size:
s3cmd --multipart-chunk-size-mb=15 put bkup.tgz s3://mybucket/
Try these diagnostic commands:
# Check basic connectivity
s3cmd ls s3://mybucket/
# Test with small file
dd if=/dev/zero of=testfile bs=1M count=10
s3cmd put testfile s3://mybucket/
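Building on the 10 MB test, a rough sketch like the one below steps up the file size to show whether failures are size-dependent (the sizes are arbitrary; adjust to taste):
# Increase the test size until the upload starts failing:
for size in 10 50 100 200; do
    dd if=/dev/zero of=testfile_${size}M bs=1M count="$size"
    s3cmd put testfile_${size}M s3://mybucket/ || { echo "failed at ${size} MB"; break; }
done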
When s3cmd proves unreliable, these alternatives work well:
1. AWS CLI v2 (most reliable option)
aws s3 cp bkup.tgz s3://mybucket/
2. rclone (better for large files)
rclone copy bkup.tgz s3:mybucket
3. MinIO Client (mc)
mc cp bkup.tgz mys3/mybucket
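The rclone and mc commands above assume a remote named "s3" and an alias named "mys3" already exist; a minimal setup for those assumed names might look like this (rclone's env_auth reads credentials from the environment or instance profile):
# One-time remote/alias setup:
rclone config create s3 s3 provider AWS env_auth true
mc alias set mys3 https://s3.amazonaws.com YOUR_ACCESS_KEY YOUR_SECRET_KEY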
For persistent issues, modify your ~/.s3cfg:
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
use_https = True
socket_timeout = 30
multipart_chunk_size_mb = 15
max_retries = 10
If problems persist:
# Check TCP connectivity
telnet s3.amazonaws.com 443
# Test raw download throughput (rough proxy for link quality)
curl -o /dev/null http://speedtest-sfo2.digitalocean.com/10mb.test
# Verify DNS
dig s3.amazonaws.com
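Broken pipes often go hand in hand with intermittent packet loss, which a single ping can miss; if mtr is installed, a report run summarises loss per hop:
# Look for per-hop packet loss along the route:
mtr --report --report-cycles 20 s3.amazonaws.com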
For mission-critical backups, consider:
1. Split large files before upload:
split -b 500M bkup.tgz bkup_part_
2. Use GNU parallel for chunks:
find . -name "bkup_part_*" | parallel s3cmd put {} s3://mybucket/
3. Implement proper retry logic in a script
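A minimal sketch of such a retry wrapper, using exponential backoff (file name, destination, and limits are placeholders):
#!/bin/bash
# Retry an s3cmd upload with exponential backoff.
FILE="bkup.tgz"          # file to upload (placeholder)
DEST="s3://mybucket/"    # destination bucket/prefix (placeholder)
MAX_ATTEMPTS=5
DELAY=30                 # initial wait in seconds

for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
    if s3cmd put "$FILE" "$DEST"; then
        echo "Upload succeeded on attempt $attempt"
        exit 0
    fi
    echo "Attempt $attempt failed; retrying in ${DELAY}s..." >&2
    sleep "$DELAY"
    DELAY=$((DELAY * 2))  # double the wait after each failure
done

echo "Upload failed after $MAX_ATTEMPTS attempts" >&2
exit 1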