When working with embedded devices sending real-time events via chunked HTTP transfers, Nginx's default buffering behavior can introduce unacceptable latency. Here's what happens:
# Original device output (working directly with Node.js):
POST /embedded_endpoint/ HTTP/1.1
Host: url.com
Transfer-Encoding: chunked
Content-Type: text/x-events

120
{"some","json message"}
232
{"other","json event"}
0
# Chunk sizes are hexadecimal byte counts; payloads are abbreviated here.
# After Nginx proxy (problematic buffered version):
POST /embedded_endpoint/ HTTP/1.1
Host: localhost:5000
Content-Length: 352
Content-Type: text/x-events

{"some","json message"}{"other","json event"}
The individual chunks have been collected into a single Content-Length request, so the backend no longer sees events as they happen. To maintain real-time processing of chunked events, these are the critical directives:
location /embedded_endpoint/ {
    proxy_http_version 1.1;              # required for chunked encoding upstream
    proxy_buffering off;                 # disables response buffering
    proxy_request_buffering off;         # disables request buffering
    proxy_set_header Connection "";      # keeps the upstream connection alive
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 24h;              # for long-lived connections
    chunked_transfer_encoding on;        # on by default; kept explicit here
    proxy_pass http://backend;           # upstream or host:port of the Node.js app
}
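Two of these work together: proxy_http_version 1.1 upgrades the upstream connection from Nginx's HTTP/1.0 default, and the empty Connection header stops Nginx from sending Connection: close, so the proxied connection stays open for the life of the stream.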
Nginx buffers by default for performance reasons, but for real-time systems this causes:
- Event processing delays (2+ seconds in observed cases)
- Batching of individual chunks into single requests
- Loss of timing precision for event sequences
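For context, here is a minimal sketch of the kind of Node.js backend this configuration feeds, assuming one JSON event per chunk; handleEvent is a hypothetical placeholder, and Node's 'data' events only map one-to-one onto HTTP chunks when messages are small and spaced out, as they are here:

const http = require('http');

// Hypothetical stand-in for real event processing.
function handleEvent(raw) {
  console.log(Date.now(), 'event:', raw);
}

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/embedded_endpoint/') {
    // With proxy_request_buffering off, each device chunk tends to
    // surface as its own 'data' event instead of one coalesced buffer.
    req.on('data', (chunk) => handleEvent(chunk.toString()));
    req.on('end', () => res.end('ok'));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(5000);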
For IoT/embedded scenarios with high message rates (150+ messages/sec):
server {
    listen 443 ssl;
    server_name iot.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    # Prefix match with the trailing slash the device actually posts to
    location /embedded_endpoint/ {
        # Critical real-time directives
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_cache off;

        # Keep the upstream connection alive for chunked transfers
        proxy_set_header Connection "";

        # Forward original headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Generous timeouts for long-lived streams
        proxy_read_timeout 24h;
        proxy_send_timeout 24h;

        # Target Node.js application
        proxy_pass http://127.0.0.1:5000;
    }

    # Normal traffic keeps Nginx's default buffering
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
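With this split, Nginx still terminates SSL for all endpoints while only the event location pays the cost of unbuffered proxying: normal web traffic keeps Nginx's buffering and caching, and CPU usage remains low even at high chunk throughput.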
Test with curl to confirm chunked transfers work. Note that curl switches to chunked encoding itself when it sees the Transfer-Encoding header, so the body should be the raw payload, with no hand-written chunk sizes:

curl -X POST -H "Transfer-Encoding: chunked" -H "Content-Type: text/x-events" \
  --data-binary @- https://iot.example.com/embedded_endpoint/ <<EOF
{"test"}
EOF
Check both Nginx access logs and Node.js application logs to verify immediate processing.
When disabling buffering:
- Monitor connection count (it may increase, since streams stay open longer)
- Adjust worker_connections in nginx.conf (see the sketch after this list)
- Consider TCP keepalive settings
- Test under high load (150+ messages/sec; a sender sketch closes out this post)
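For the worker_connections and keepalive items above, here is a sketch of the relevant nginx.conf tuning; the values are illustrative starting points, not recommendations:

worker_processes auto;

events {
    # Every open client or upstream connection occupies a slot, and
    # long-lived streams hold slots for hours, so raise the default.
    worker_connections 4096;
}

# TCP keepalive probes can be enabled per listener, e.g.:
# listen 443 ssl so_keepalive=on;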
To confirm chunks are being forwarded immediately, stop the Node.js app and listen on its port with nc to inspect the raw proxied request:

# On the backend server (with the Node.js app stopped):
nc -l 5000

# Expected output shows individual chunks arriving in real time
# (Nginx may re-frame the chunk sizes; the point is that events
# arrive separately, not batched):
POST /embedded_endpoint/ HTTP/1.1
Transfer-Encoding: chunked

120
{"some","json message"}
232
{"other","json event"}