Amazon S3 doesn't technically have "filenames" in the traditional filesystem sense - it uses object keys within a flat namespace. When you upload an object with the same key as an existing object in the bucket, S3 doesn't modify the key but instead replaces the existing object completely.
Consider this AWS CLI example:
# First upload
aws s3 cp test.txt s3://my-bucket/test.txt
# Second upload (same key)
aws s3 cp new_test.txt s3://my-bucket/test.txt
The second command completely overwrites the first version of test.txt without warning. No versioning occurs unless explicitly enabled.
To prevent accidental overwrites, enable bucket versioning:
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
With versioning enabled, S3 maintains all versions of objects with the same key, each with a unique version ID.
For applications requiring unique filenames, consider these patterns:
Timestamp Prefixing
import time
timestamp = int(time.time())
s3_key = f"{timestamp}_{original_filename}"
UUID Suffixing
import uuid
unique_id = uuid.uuid4().hex
s3_key = f"{original_filename}_{unique_id}"
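The two patterns above can be combined into a single helper; the function name and key format below are illustrative choices, not an S3 convention:

```python
import time
import uuid

def make_unique_key(original_filename):
    """Combine an epoch-timestamp prefix with a random UUID suffix to make
    key collisions practically impossible."""
    timestamp = int(time.time())
    unique_id = uuid.uuid4().hex
    return f"{timestamp}_{original_filename}_{unique_id}"

print(make_unique_key("report.pdf"))
```

Repeated uploads of the same file then receive distinct keys, so nothing is silently overwritten.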
Before uploading, you can check for existing objects. Two caveats: list_objects_v2 with Prefix matches any key that merely starts with the given string, so compare against the exact key (or use head_object), and this check-then-upload pattern is racy, since another writer can create the object between the check and your put.
import boto3
s3 = boto3.client('s3')
response = s3.list_objects_v2(
    Bucket='my-bucket',
    Prefix='desired_key'
)
existing_keys = {obj['Key'] for obj in response.get('Contents', [])}
if 'desired_key' in existing_keys:
    pass  # handle the duplicate case (e.g., choose a different key)
Frequent overwrites no longer cause consistency problems (S3 now provides strong read-after-write consistency), but they do add per-request charges and, with versioning enabled, storage for every retained version. For write-heavy applications, consider:
- Using distinct keys for new objects
- Implementing a naming convention that distributes objects across prefixes
- Utilizing S3's multipart upload API for large files
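One way to sketch the prefix-distribution idea is to prepend a short hash of the filename; the four-character prefix length here is an arbitrary illustrative choice (S3 partitions request capacity per prefix, so spreading keys across many prefixes spreads load):

```python
import hashlib

def distributed_key(original_filename, prefix_len=4):
    """Prepend a short hex digest of the filename so that keys spread
    evenly across many prefixes instead of sharing one."""
    digest = hashlib.md5(original_filename.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{original_filename}"

print(distributed_key("invoice_2023.pdf"))
```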
Amazon S3 operates as a pure object storage system rather than a traditional filesystem. Each object in S3 is uniquely identified by its fully qualified key, which is essentially the full path including the filename. When you upload an object to S3 with the same key as an existing object:
# Python example using boto3
import boto3
s3 = boto3.client('s3')
bucket = 'your-bucket-name'
key = 'path/to/your/file.ext'
# This will overwrite existing object if key matches
s3.upload_file('local_file.ext', bucket, key)
S3 doesn't perform automatic filename changes. The service implements a last-write-wins strategy:
- New uploads with identical keys completely replace existing objects
- No versioning occurs unless explicitly enabled
- No warnings or confirmations are provided
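That last-write-wins behavior can be pictured with a plain dictionary standing in for the bucket's flat key namespace (a toy model, not the S3 API):

```python
# Toy model: a bucket is a flat mapping from object key to object data.
bucket = {}
bucket["path/to/file.ext"] = b"first version"
bucket["path/to/file.ext"] = b"second version"  # same key: replaced silently
print(bucket["path/to/file.ext"])  # b'second version' - the first is gone
```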
Here are practical approaches to handle filename conflicts:
// JavaScript (Node.js) example with a timestamp-based key (AWS SDK for JavaScript v2)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
async function safeUpload(fileBuffer, originalFilename) {
  const timestamp = Date.now();
  const uniqueKey = `user_uploads/${timestamp}_${originalFilename}`;
  await s3.putObject({
    Bucket: 'my-bucket',
    Key: uniqueKey,
    Body: fileBuffer
  }).promise();
  return uniqueKey;
}
Enable S3 versioning to maintain object history:
# AWS CLI command to enable versioning
aws s3api put-bucket-versioning \
--bucket your-bucket-name \
--versioning-configuration Status=Enabled
With versioning enabled, overwritten objects remain accessible via version IDs, though storage costs increase as all versions are retained.
For advanced control, use conditional writes to prevent overwrites unless intended: S3 supports an If-None-Match: * precondition that makes a PUT fail with HTTP 412 if an object already exists at the key (and If-Match with an ETag similarly guards replacement of a specific version):
// Java example (AWS SDK for Java v2) with a conditional put
S3Client s3Client = S3Client.create();
PutObjectRequest request = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .ifNoneMatch("*")  // only succeed if no object exists at this key
        .build();
try {
    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
} catch (S3Exception e) {
    if (e.statusCode() == 412) {
        System.out.println("Precondition failed - object already exists");
    }
}
Frequent overwrites can impact:
- Request charges (PUT requests are billed per operation)
- Storage costs when versioning is enabled (every overwritten version is retained and billed)
- Data transfer costs for cross-region traffic (transfer within the same region between S3 and other AWS services is free)
Consistency is no longer a concern: since December 2020, S3 provides strong read-after-write consistency for all operations, so an overwrite is immediately visible to subsequent reads.
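For a rough sense of scale, the request-charge point can be turned into arithmetic; the $0.005 per 1,000 PUT requests figure below is an assumption based on published us-east-1 S3 Standard pricing and may differ by region or change over time:

```python
# Hypothetical workload: estimate monthly PUT-request charges.
PUT_PRICE_PER_1000 = 0.005  # USD per 1,000 PUT requests (assumed us-east-1 rate)

def monthly_put_cost(puts_per_second):
    """Estimate a month of PUT charges at a steady request rate."""
    seconds_per_month = 30 * 24 * 3600
    total_puts = puts_per_second * seconds_per_month
    return total_puts / 1000 * PUT_PRICE_PER_1000

# Overwriting one object 100 times per second, all month:
print(round(monthly_put_cost(100), 2))
```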