Troubleshooting s3cmd Delete Failures: Why Files Remain in S3 Bucket Despite Success Messages


Many developers encounter this puzzling scenario when working with s3cmd:

# Attempting to delete a file
s3cmd del s3://my-test-bucket/example.txt
File s3://my-test-bucket/example.txt successfully deleted

# But the file still exists when checking
s3cmd ls s3://my-test-bucket/example.txt
2023-01-01 00:00  1024  s3://my-test-bucket/example.txt
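
Before digging into causes, confirm exactly what that listing is showing. A head request returns the object's metadata, and on a versioned bucket it includes the current VersionId; this is a read-only check using the bucket and key from the example above:

# Inspect the object that "should" be gone
aws s3api head-object --bucket my-test-bucket --key example.txt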

The issue typically stems from these common situations:

  • Bucket Versioning: When versioning is enabled, s3cmd's default delete operation only adds a delete marker (a quick check for this follows the list)
  • Permissions: The IAM user might have s3:DeleteObject permission but lack s3:DeleteObjectVersion
  • API Differences: s3cmd issues older S3 API calls that can behave differently from the S3 web console
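
A quick way to tell which of these applies is to ask S3 whether versioning is even enabled on the bucket (the check mentioned in the list above). This is a read-only AWS CLI call using the bucket from the example:

# Returns "Status": "Enabled" (or "Suspended") when versioning is on
aws s3api get-bucket-versioning --bucket my-test-bucket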

For Versioned Buckets

A plain s3cmd delete on a versioned bucket only writes a delete marker; every earlier version is retained, which is why the object keeps turning up. s3cmd itself offers little support for addressing individual versions, so permanent deletion is easiest through the S3 API:

# List all versions and delete markers for the object
aws s3api list-object-versions --bucket my-test-bucket --prefix example.txt

# Permanently delete one specific version (or delete marker) by its ID
aws s3api delete-object --bucket my-test-bucket --key example.txt --version-id VERSION_ID

# s3cmd can still issue a plain delete, which on a versioned bucket adds another delete marker
s3cmd del --force s3://my-test-bucket/example.txt
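
To purge everything under a prefix, including old versions and delete markers, one approach is to walk the version listing and delete each entry by ID. This is a rough sketch that assumes the jq tool is installed; the bucket and prefix names are placeholders:

BUCKET=my-test-bucket
PREFIX=path/

# Enumerate every version and delete marker under the prefix, then delete each one by ID
aws s3api list-object-versions --bucket "$BUCKET" --prefix "$PREFIX" --output json |
  jq -r '((.Versions // []) + (.DeleteMarkers // []))[] | [.Key, .VersionId] | @tsv' |
  while IFS=$'\t' read -r key vid; do
    aws s3api delete-object --bucket "$BUCKET" --key "$key" --version-id "$vid"
  done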

Permission Fixes

Ensure your IAM policy includes:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": "arn:aws:s3:::my-test-bucket/*"
        }
    ]
}
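
If this statement is missing, one way to add it is as an inline policy on the IAM user that s3cmd authenticates as. This is only a sketch; the user name, policy name, and policy.json path are placeholders for your own values:

# Attach the policy JSON above (saved as policy.json) to the user running s3cmd
aws iam put-user-policy \
  --user-name my-s3cmd-user \
  --policy-name AllowVersionedDeletes \
  --policy-document file://policy.json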

When s3cmd proves unreliable for deletions:

# Using AWS CLI (more consistent behavior)
aws s3 rm s3://my-test-bucket/example.txt --version-id VERSION_ID

# For recursive directory deletion
aws s3 rm s3://my-test-bucket/path/ --recursive
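
When you need to remove many known keys at once, the lower-level batch call deletes up to 1,000 objects per request. A minimal sketch with placeholder keys (add a VersionId per object when targeting specific versions):

# Batch delete several keys in a single request
aws s3api delete-objects --bucket my-test-bucket --delete '{
  "Objects": [
    {"Key": "path/file1.txt"},
    {"Key": "path/file2.txt"}
  ],
  "Quiet": true
}'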

Always confirm deletion with multiple methods:

# Check with s3cmd
s3cmd ls s3://my-test-bucket/

# Cross-verify with AWS CLI
aws s3 ls s3://my-test-bucket/

# Final confirmation via API
aws s3api list-object-versions --bucket my-test-bucket
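
In that last listing, a file that "came back" after a delete usually shows up as a delete marker sitting on top of older versions. Pulling out just the delete markers for the key (same bucket and key as in the examples above) makes this easy to spot:

# Show only the delete markers for the key in question
aws s3api list-object-versions --bucket my-test-bucket --prefix example.txt \
  --query 'DeleteMarkers[].{Key: Key, VersionId: VersionId, IsLatest: IsLatest}'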

The same symptom also shows up when deleting "folder" placeholders. S3 has no real directories: a console "folder" is either a zero-byte object whose key ends in a slash or simply a shared key prefix, so the folder keeps appearing as long as the placeholder object, other keys under the prefix, or old versions still exist:

s3cmd del s3://my-bucket/empty-folder/
# Output: "File empty-folder/ successfully deleted"
# Yet the folder remains visible in the S3 console

Buckets with versioning enabled need special handling here as well:

# Recursively delete everything under a prefix
# (on a versioned bucket this still only adds delete markers)
s3cmd del --recursive --force s3://my-bucket/path/

Solution A: Use the AWS CLI as an alternative

aws s3 rm s3://my-bucket/path/ --recursive
# For versioned objects:
aws s3api delete-object --bucket my-bucket --key "path/to/file" --version-id "EXAMPLE_VERSION_ID"

Solution B: s3cmd with proper flags

# For empty folder placeholders (on a versioned bucket this only adds a delete marker):
s3cmd del s3://my-bucket/folder/ --verbose

# For recursive deletion:
s3cmd del --recursive s3://my-bucket/folder/

Enable verbose output to see what's really happening:

s3cmd -v -d del s3://my-bucket/target-file
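
On a versioned bucket, the DeleteObject response carries x-amz-delete-marker and x-amz-version-id headers, which is the clearest sign that a delete marker was written rather than the data removed. Assuming your s3cmd build prints response headers in debug mode, a filter along these lines surfaces them (the path is a placeholder):

s3cmd -d del s3://my-bucket/target-file 2>&1 | grep -iE 'x-amz-(delete-marker|version-id)|HTTP'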

Check for hidden characters in paths:

# List objects with hex representation
aws s3api list-objects-v2 --bucket my-bucket \
--query 'Contents[].{Key:Key}' --output text | xxd
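
If the hex dump reveals stray bytes (a trailing space, a non-breaking space, an embedded newline), delete the object by its exact key. The trailing-space key below is a made-up example of such a case:

# Note the trailing space inside the quotes
aws s3api delete-object --bucket my-bucket --key 'path/to/file.txt '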

For programmatic deletion in Python:

import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')

# Delete single object
bucket.Object('file.txt').delete()

# Batch delete
bucket.objects.filter(Prefix="folder/").delete()
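# Note: on a versioned bucket these calls only add delete markers; purging stored
# versions as well takes the version collection, e.g.
# bucket.object_versions.filter(Prefix="folder/").delete()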