How to Configure Nginx as Reverse Proxy for Private AWS S3 Buckets with Signed Requests


Many developers need to serve static web applications from S3 while keeping the bucket private for security compliance. When Nginx sits in front as a reverse proxy, simply making the bucket public defeats the purpose; the real challenge is authenticating each proxied request while still delivering the correct Content-Type headers.

The existing Nginx configuration attempts to sign requests with AWS Signature Version 2 using the set-misc-nginx-module. While the authentication part works (as evidenced by receiving bucket contents), the content comes back as a raw XML bucket listing rather than rendered HTML/JS.


# Current problematic proxy configuration
location * {
    set $bucket '[MY_BUCKET]';
    set $aws_access '[MY_AWS_KEY]';
    set $aws_secret '[MY_AWS_SECRET]';
    # ... other settings ...
}

The primary issues causing XML rendering instead of proper content delivery:

  1. Missing Content-Type headers in responses
  2. Improper handling of S3's ListBucket vs GetObject responses
  3. Incorrect location matching pattern
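
Before changing the config, it helps to confirm which failure mode is in play. S3's ListBucket API responds with a `<ListBucketResult>` XML document, while a successful GetObject returns the object body itself, so the response body tells you whether the request ever reached the object. A minimal sketch of that check (the response body below is a made-up illustration):

```shell
# S3's ListBucket API answers with <ListBucketResult> XML; a successful
# GetObject returns the file body itself. The body below is a fabricated
# example of the listing response, for illustration only.
body='<?xml version="1.0" encoding="UTF-8"?><ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>my-bucket</Name></ListBucketResult>'

case "$body" in
  *ListBucketResult*) result='bucket-listing' ;;  # request hit the bucket root
  *)                  result='object-content' ;;  # request reached an object
esac
echo "$result"
```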

Here's a working configuration that properly proxies private S3 content:


server {
    listen 80;
    server_name example.com;

    # Handle root request specifically
    location = / {
        rewrite ^ /index.html last;
    }

    # Proxy all other requests
    location / {
        set $s3_bucket 'your-bucket-name';
        set $aws_access 'AKIAXXXXXXXXXXXXXX';
        set $aws_secret 'YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY';
        
        # Generate AWS Signature Version 2 (set_by_lua needs lua-nginx-module;
        # SigV2 requires an RFC 1123 date, which $time_iso8601 is not)
        set_by_lua $now "return ngx.http_time(ngx.time())";
        # With x-amz-date sent, SigV2 wants an empty Date line plus the
        # x-amz-date header folded into the string-to-sign
        set $string_to_sign "GET\n\n\n\nx-amz-date:${now}\n/${s3_bucket}$uri";
        set_hmac_sha1 $signature $aws_secret $string_to_sign;
        set_encode_base64 $signature $signature;

        # proxy_pass with a variable hostname needs a resolver for DNS lookups
        resolver 8.8.8.8 valid=300s;

        proxy_http_version 1.1;
        proxy_set_header Host ${s3_bucket}.s3.amazonaws.com;
        proxy_set_header x-amz-date $now;
        proxy_set_header Authorization "AWS ${aws_access}:${signature}";
        
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        
        # Critical for proper content handling
        proxy_set_header Accept '*/*';
        proxy_pass https://${s3_bucket}.s3.amazonaws.com;
    }
}
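
When the signature is rejected, it helps to reproduce what set_hmac_sha1 and set_encode_base64 compute outside Nginx. The sketch below uses openssl with obviously fake credentials; note that because the request carries x-amz-date, Signature Version 2 expects an empty Date line plus the x-amz-date header inside the string-to-sign:

```shell
# Reproduce the nginx signing steps (set_hmac_sha1 + set_encode_base64) by hand.
# Credentials, bucket and date below are placeholders for illustration only.
aws_access='AKIAXXXXXXXXXXXXXX'
aws_secret='YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY'
now='Thu, 01 Jan 1970 00:00:00 GMT'

# Empty Date line, then the x-amz-date amz-header, then the resource path
string_to_sign=$(printf 'GET\n\n\n\nx-amz-date:%s\n/your-bucket-name/index.html' "$now")

# HMAC-SHA1 over the string-to-sign, base64-encoded
signature=$(printf '%s' "$string_to_sign" | openssl dgst -sha1 -hmac "$aws_secret" -binary | openssl base64)
echo "Authorization: AWS ${aws_access}:${signature}"
```

If the header nginx sends differs from the one computed here for the same date and path, the string-to-sign construction is the first place to look.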

Several key improvements make this work:

  • Using proper location matching instead of wildcard
  • Explicit root path handling
  • Adding proxy_set_header Accept
  • Using HTTPS endpoint
  • Removing unnecessary buffering settings

When implementing this solution:


# Check Nginx error logs
tail -f /var/log/nginx/error.log

# Verify headers with curl
curl -v http://your-domain.com/index.html

# Test S3 access directly (temporarily)
aws s3 cp s3://your-bucket/index.html -

For more complex scenarios, consider generating presigned URLs:


location / {
    # Presigned URLs carry an absolute expiry timestamp, not a duration
    set_by_lua $expires "return ngx.time() + 3600";
    set $string_to_sign "GET\n\n\n${expires}\n/${s3_bucket}$uri";
    set_hmac_sha1 $signature $aws_secret $string_to_sign;
    set_encode_base64 $signature $signature;
    # Base64 may contain '+', '/' and '='; escape before use in a query string
    set_escape_uri $signature $signature;

    return 302 "https://${s3_bucket}.s3.amazonaws.com${uri}?AWSAccessKeyId=${aws_access}&Expires=${expires}&Signature=${signature}";
}
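
The same signing steps can be exercised from the command line to sanity-check a presigned URL before wiring it into Nginx. This is a sketch with placeholder credentials; two details matter: Expires must be an absolute Unix timestamp, and the base64 signature must be percent-encoded before it goes into a query string:

```shell
# Hand-build a Signature V2 presigned URL (placeholder credentials throughout).
aws_access='AKIAXXXXXXXXXXXXXX'
aws_secret='YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY'
bucket='your-bucket-name'
key='/index.html'
expires=$(( $(date +%s) + 3600 ))   # absolute epoch time, not a duration

# Query-string auth puts the expiry where the date would normally go
string_to_sign=$(printf 'GET\n\n\n%s\n/%s%s' "$expires" "$bucket" "$key")
signature=$(printf '%s' "$string_to_sign" | openssl dgst -sha1 -hmac "$aws_secret" -binary | openssl base64)
# Base64 can contain '+', '/' and '='; percent-encode them for the query string
signature=$(printf '%s' "$signature" | sed -e 's/+/%2B/g' -e 's|/|%2F|g' -e 's/=/%3D/g')

url="https://${bucket}.s3.amazonaws.com${key}?AWSAccessKeyId=${aws_access}&Expires=${expires}&Signature=${signature}"
echo "$url"
```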

When hosting static web applications on AWS S3, security often conflicts with functionality. While making buckets public is the simplest approach, it's not suitable for production environments handling sensitive data. This is where Nginx shines as a secure middleware solution.

The XML output you're seeing indicates Nginx is receiving S3's raw API response instead of the actual file content. This happens because:

1. Missing proper content-type handling
2. Incorrect request routing to S3's ListBucket API
3. Improper rewrite rules for object paths

Here's the corrected Nginx configuration that properly serves private S3 content:

server {
    listen 80;
    server_name example.com;

    # Handle root requests
    location = / {
        return 301 /index.html;
    }

    # Static assets handler
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
        set $bucket          'your-private-bucket';
        set $aws_access      'AKIAXXXXXXXXXXXXXXXX';
        set $aws_secret      'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
        set $aws_region      'us-east-1';
        # Use the region in the endpoint so non-US buckets resolve correctly
        set $s3_endpoint     "s3.${aws_region}.amazonaws.com";

        set $url_path        $uri;
        # ngx.http_time yields the RFC 1123 date SigV2 requires
        # (ngx.cookie_time's hyphenated cookie format is rejected by S3)
        set_by_lua $now      "return ngx.http_time(ngx.time())";
        set $string_to_sign  "GET\n\n\n\nx-amz-date:${now}\n/${bucket}${url_path}";
        set_hmac_sha1        $aws_signature $aws_secret $string_to_sign;
        set_encode_base64    $aws_signature $aws_signature;

        proxy_http_version     1.1;
        proxy_set_header       Host ${bucket}.${s3_endpoint};
        proxy_set_header       Authorization "AWS ${aws_access}:${aws_signature}";
        proxy_set_header       x-amz-date $now;
        proxy_set_header       Accept-Encoding "";
        proxy_hide_header      x-amz-id-2;
        proxy_hide_header      x-amz-request-id;
        proxy_hide_header      Set-Cookie;
        proxy_ignore_headers   Set-Cookie;
        proxy_intercept_errors on;
        proxy_buffering        off;
        
        resolver              8.8.8.8 valid=300s;
        resolver_timeout      10s;
        
        proxy_pass            https://${bucket}.${s3_endpoint}${url_path};
    }
}
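
One easy-to-miss detail: Signature Version 2 only accepts RFC 2616 dates in x-amz-date, which is exactly what ngx.http_time produces, while ngx.cookie_time emits a hyphenated cookie-style date that S3 rejects. The equivalent in shell, pinned to the epoch so the output is stable (GNU date assumed):

```shell
# The RFC 1123 / RFC 2616 date format SigV2 expects in x-amz-date -- the same
# string ngx.http_time(ngx.time()) returns inside nginx. GNU date assumed;
# '-d @0' pins the clock to the epoch so the output is deterministic.
now=$(LC_ALL=C date -u -d @0 '+%a, %d %b %Y %H:%M:%S GMT')
echo "$now"   # Thu, 01 Jan 1970 00:00:00 GMT
```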

The critical fixes in this version include:

  • Specific file-type matching (\.ext$) instead of the invalid wildcard *
  • Proper URL path construction without intervening rewrites
  • Essential headers for preserving the content type
  • Regional endpoint support for non-US buckets

For production environments, consider adding these performance tweaks:

# In the http{} block:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=s3_cache:10m inactive=24h max_size=1g;

# In the S3 proxy location{} block (note: caching requires proxy_buffering on,
# so drop the earlier "proxy_buffering off" when enabling the cache):
proxy_cache s3_cache;
proxy_cache_valid 200 302 12h;
proxy_cache_valid 404 1m;
add_header X-Proxy-Cache $upstream_cache_status;