Amazon's strong warning against public S3 buckets stems from its defense-in-depth security philosophy. While your specific use case, hosting a static website with a granular `s3:GetObject` permission, seems valid, AWS maintains the blanket recommendation because even a minimal public-read policy still carries risk:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-website-bucket/*"
    }
  ]
}
```
Even properly configured website buckets can expose risks:

- Accidental exposure of hidden files (e.g., `.env`, `.git`)
- Potential for directory traversal attacks if misconfigured
- Bucket takeover risks when DNS records aren't properly secured
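The first bullet is easy to automate: scan the build directory for dotfiles and other sensitive artifacts before syncing it to the bucket. A minimal sketch in Python (the `RISKY_NAMES` set and `scan_for_sensitive_files` helper are illustrative, not part of any AWS tooling):

```python
from pathlib import Path

# Names that should never end up in a public website bucket (illustrative list).
RISKY_NAMES = {".env", ".git", ".gitignore", ".aws", "credentials"}

def scan_for_sensitive_files(build_dir: str) -> list[str]:
    """Return paths under build_dir whose name (or any parent directory's
    name) matches a known-risky name, so they can be excluded from upload."""
    hits = []
    for path in Path(build_dir).rglob("*"):
        if any(part in RISKY_NAMES for part in path.relative_to(build_dir).parts):
            hits.append(str(path))
    return sorted(hits)
```

Run as a CI gate: fail the deploy if the list is non-empty, before `aws s3 sync` ever executes.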
For true production websites, consider:
```yaml
# CloudFront + S3 OAI (Origin Access Identity) setup
Resources:
  CloudFrontOAI:
    Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: "Restrict S3 access"
  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref YourS3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              CanonicalUser: !GetAtt CloudFrontOAI.S3CanonicalUserId
            Action: s3:GetObject
            Resource: !Sub "arn:aws:s3:::${YourS3Bucket}/*"
```
If you must maintain public access:

- Keep both ACL-related Block Public Access settings enabled; disable only the two policy-related settings ("Block public access ... granted through new public bucket or access point policies" and its "any public bucket or access point policies" counterpart), since the website pattern relies on a bucket policy, not ACLs
- Implement strict object lifecycle policies so stale or forgotten content expires automatically
- Enable S3 server access logging to monitor GET requests
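Because the public grant comes from a bucket policy rather than ACLs, the Block Public Access configuration should keep both ACL protections on and relax only the policy-related flags. A hedged sketch of that configuration using boto3's documented `PutPublicAccessBlock` parameter shape (the bucket name is a placeholder, and the API call is commented out so the snippet stays side-effect free):

```python
def website_public_access_block() -> dict:
    """PublicAccessBlockConfiguration for a bucket made public via a
    bucket *policy*: ACL protections stay on, policy settings are relaxed."""
    return {
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": False,     # permit the public-read bucket policy
        "RestrictPublicBuckets": False, # permit anonymous reads via that policy
    }

# Applying it (requires credentials; shown for context only):
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="your-website-bucket",
#     PublicAccessBlockConfiguration=website_public_access_block(),
# )
```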
Set up a CloudWatch alarm on object count:

```bash
aws cloudwatch put-metric-alarm \
  --alarm-name "S3-Public-Object-Count" \
  --metric-name "NumberOfObjects" \
  --namespace "AWS/S3" \
  --dimensions "Name=BucketName,Value=your-bucket" "Name=StorageType,Value=AllStorageTypes" \
  --statistic "Average" \
  --period 86400 \
  --evaluation-periods 1 \
  --threshold 10000 \
  --comparison-operator "GreaterThanThreshold"
```

Note that `NumberOfObjects` is a daily storage metric, so it requires the `StorageType=AllStorageTypes` dimension and a one-day (86400-second) period.
Regularly audit your configuration using AWS Config managed rules such as `s3-bucket-public-read-prohibited` and `s3-bucket-public-write-prohibited`.
That stark warning in the AWS documentation exists for good reason: misconfigured bucket policies have been behind a long series of publicly reported S3 data leaks. However, there are legitimate use cases, such as static website hosting, where controlled public access makes sense.
When using S3 for static website hosting (with a Route 53 alias record), we're implementing a very specific security pattern:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```
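One detail worth internalizing from this policy: `s3:GetObject` acts on objects, so the `Resource` must be the object ARN pattern (ending in `/*`), not the bucket ARN itself. A small Python helper (hypothetical, not an AWS API) that renders the policy for any bucket makes that explicit:

```python
import json

def public_read_policy(bucket: str) -> str:
    """Render the minimal public-read website policy for `bucket`.
    Note the trailing '/*': s3:GetObject targets objects, not the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)
```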
Even for website buckets, these safeguards are mandatory:
- Enable S3 Block Public Access account settings (but override for this bucket)
- Set the bucket policy to allow only GetObject
- Never enable ListBucket permission
- Implement CloudFront with OAI for production sites
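The second and third safeguards (allow only `GetObject`, never `ListBucket`) can be enforced mechanically. A sketch of an auditor, assuming the policy document has already been parsed into a dict (function name is illustrative, and it simplifies by only matching the literal `"Principal": "*"` form):

```python
# The only action a public website bucket should ever grant anonymously.
ALLOWED_PUBLIC_ACTIONS = {"s3:GetObject"}

def audit_public_policy(policy: dict) -> list[str]:
    """Return violations: any Allow statement for principal '*'
    granting an action outside ALLOWED_PUBLIC_ACTIONS."""
    violations = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow" or stmt.get("Principal") != "*":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]  # Action may be a single string or a list
        for action in actions:
            if action not in ALLOWED_PUBLIC_ACTIONS:
                violations.append(f"public grant of {action}")
    return violations
```

Wired into CI, this catches a stray `s3:ListBucket` or `s3:*` before it ever reaches the bucket.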
For production websites, this CloudFront setup is more secure:
```yaml
# CloudFormation snippet
CloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      DefaultCacheBehavior:
        TargetOriginId: S3Origin
        ViewerProtocolPolicy: redirect-to-https
        ForwardedValues:
          QueryString: false
      Origins:
        - DomainName: !GetAtt S3Bucket.DomainName
          Id: S3Origin
          S3OriginConfig:
            OriginAccessIdentity: !Sub "origin-access-identity/cloudfront/${OAI}"
```
Always implement these detective controls:
```bash
# AWS CLI loop to check which buckets have a public policy
aws s3api list-buckets --query "Buckets[].Name" --output text | tr '\t' '\n' | \
  while read -r bucket; do
    echo -n "$bucket: "
    aws s3api get-bucket-policy-status --bucket "$bucket" \
      --query "PolicyStatus.IsPublic" --output text 2>/dev/null || echo "no policy"
  done
```

(`get-bucket-policy-status` fails for buckets without a policy, so those are reported as "no policy" rather than breaking the loop.)
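The same detective control can be driven from Python; isolating the classification step from the AWS calls keeps the logic testable (the `find_public_buckets` name is illustrative, and the boto3 gathering loop is commented out because it requires credentials):

```python
def find_public_buckets(statuses: dict) -> list:
    """Given {bucket_name: IsPublic or None} (None = no bucket policy),
    return the names whose policy AWS judged to be public."""
    return sorted(name for name, is_public in statuses.items() if is_public)

# Gathering the input with boto3 (shown for context):
# import boto3
# s3 = boto3.client("s3")
# statuses = {}
# for b in s3.list_buckets()["Buckets"]:
#     try:
#         statuses[b["Name"]] = s3.get_bucket_policy_status(
#             Bucket=b["Name"])["PolicyStatus"]["IsPublic"]
#     except s3.exceptions.ClientError:
#         statuses[b["Name"]] = None  # no policy attached
```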
Remember: "Public" in S3 exists on a spectrum - what matters is precisely controlling which actions are allowed and regularly auditing your configuration.