When implementing Server-Sent Events behind Nginx, several critical configuration requirements emerge:
# Common pitfalls in default Nginx config
location /events {
    proxy_pass http://backend;
    # Missing crucial SSE-specific directives
}
These directives form the foundation for proper SSE proxying:
location /sse-endpoint {
    proxy_pass http://your_upstream_server;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;
    proxy_cache off;
    chunked_transfer_encoding off;

    # Timeout configuration (adjust as needed)
    proxy_read_timeout 24h;
    proxy_send_timeout 24h;
}
For high-performance SSE implementations:
http {
    upstream backend {
        server 127.0.0.1:8080;
        keepalive 100;  # Maintains persistent connections
    }

    server {
        location /stream {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://backend;
            # Other SSE directives...
        }
    }
}
Here's a battle-tested configuration used in production environments:
server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    location /api/events {
        proxy_pass http://event_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering off;
        proxy_cache off;

        # Important timeout settings (Nginx caps the connect timeout at roughly 75s)
        proxy_connect_timeout 75s;
        proxy_read_timeout 7d;
        proxy_send_timeout 7d;

        # Tell intermediaries not to buffer this response
        add_header X-Accel-Buffering no;
    }
}
When debugging SSE problems with Nginx (a logging sketch follows this list):
- Connection drops: Verify proxy_read_timeout is sufficiently long
- Buffering problems: Ensure proxy_buffering off is set
- HTTP/2 issues: Test with both HTTP/1.1 and HTTP/2
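To gather evidence for these checks, it can help to give the SSE location its own logs. The sketch below uses standard Nginx logging directives; the log file paths and the sse_debug format name are illustrative, not part of the configuration above:

# In the http context: a custom format capturing how long each stream stayed open (illustrative)
log_format sse_debug '$remote_addr [$time_local] "$request" $status '
                     'request_time=$request_time upstream=$upstream_response_time';

# In the SSE location: dedicated access and error logs
location /api/events {
    access_log /var/log/nginx/sse_access.log sse_debug;
    error_log /var/log/nginx/sse_error.log info;
    # ...SSE proxy directives as shown earlier...
}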
For specialized use cases:
# For load balanced environments
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;

# For locations that also proxy WebSocket upgrades (not needed for SSE itself,
# and not to be combined with the empty Connection header used for SSE)
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

# For CORS support
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET';
add_header 'Access-Control-Allow-Headers' 'Content-Type';
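Note that a wildcard origin will not work when the EventSource is opened with credentials (cookies); browsers then require an explicit origin. A sketch, with a placeholder origin value:

# CORS for credentialed SSE requests (origin value is a placeholder)
add_header 'Access-Control-Allow-Origin' 'https://app.yourdomain.com';
add_header 'Access-Control-Allow-Credentials' 'true';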
Key metrics to monitor:
- Connection churn rate
- Memory usage per SSE connection
- Proxy worker process load
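Connection counts and churn can be read directly from Nginx's stub_status module (included in most distribution builds). A minimal sketch, with a placeholder path and allowed address:

# Lightweight status endpoint: active, accepted, and handled connection counters
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;  # restrict to local monitoring agents
    deny all;
}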
For high-scale implementations, consider:
events {
    worker_connections 4096;  # Increase for many SSE connections
}

http {
    # Reduce I/O operations
    aio threads;
    directio 4k;
}
Pulling these pieces together, every SSE deployment through Nginx has to address the same critical considerations:
- Connection persistence requirements
- Buffering behavior implications
- Timeout configuration nuances
- HTTP version compatibility
Here's the minimum viable configuration for SSE proxy support:
location /sse-endpoint {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 24h;
}
- proxy_http_version 1.1: Mandatory for persistent connections; SSE requires HTTP/1.1 or higher.
- Connection header: An empty value prevents connection closure between requests.
- Buffering controls: Disabling buffering prevents message aggregation and delays.
For enterprise deployments, consider these enhancements:
location /events/ {
    proxy_pass http://sse_backend;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering off;
    proxy_cache off;
    chunked_transfer_encoding off;

    proxy_read_timeout 86400s;
    proxy_send_timeout 86400s;
    proxy_connect_timeout 75s;

    # SSE-specific headers
    proxy_set_header Accept 'text/event-stream';
    add_header X-Accel-Buffering no;
}
When troubleshooting this setup, keep the following in mind:
- Connection drops: Verify that timeout values exceed client expectations; the 24-hour duration (86400s) in the example above accommodates most use cases.
- Message delays: Ensure all buffering mechanisms are disabled, including Nginx's default proxy buffers.
- HTTP/2 considerations: While SSE works over HTTP/2, some older clients may need to be forced onto HTTP/1.1 (see the sketch below).
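One way to force HTTP/1.1 for such clients is to expose the SSE path on a listener that does not negotiate HTTP/2. This is a sketch only; the port and hostname are placeholders, and TLS certificate directives are omitted:

# Separate listener without the http2 flag, so clients stay on HTTP/1.1
server {
    listen 8443 ssl;
    server_name sse-legacy.yourdomain.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /api/events {
        proxy_pass http://event_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_buffering off;
        proxy_read_timeout 24h;
    }
}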
For high-volume SSE implementations:
# In the main nginx.conf context
worker_processes auto;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # Existing directives...

    # SSE-specific optimizations
    proxy_temp_path /dev/shm/nginx_temp;
    client_body_temp_path /dev/shm/nginx_client;
    fastcgi_temp_path /dev/shm/nginx_fastcgi;
}
For streams that require authentication and rate limiting:

location ~ ^/secure-events/(.*)$ {
    # Authentication via subrequest
    auth_request /auth-proxy;

    # Standard SSE config
    proxy_pass http://secured_backend/$1;
    proxy_http_version 1.1;
    proxy_set_header Connection '';

    # Rate limiting (the "sse" zone must be defined in the http context)
    limit_req zone=sse burst=20 nodelay;
}
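The block above assumes two supporting pieces elsewhere in the configuration: the sse rate-limit zone and the /auth-proxy subrequest target. A sketch of both follows; the zone size, rate, and the auth_backend upstream name are illustrative:

# In the http context: shared-memory zone referenced by "limit_req zone=sse"
limit_req_zone $binary_remote_addr zone=sse:10m rate=10r/s;

# Internal location consulted by auth_request; only its response code (2xx/401/403) matters
location = /auth-proxy {
    internal;
    proxy_pass http://auth_backend/validate;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}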