When exposing APIs to JavaScript developers, we often encounter poorly implemented client-side code that generates excessive AJAX requests. These rogue requests typically target database-intensive endpoints (like /json_api/), while static resources remain unaffected. The challenge is implementing selective throttling that protects backend resources without impacting legitimate traffic.
NGINX's limit_req module implements leaky-bucket rate limiting through two primary directives:
# In the http context
limit_req_zone $binary_remote_addr zone=api_zone:10m rate=50r/m;

# In server/location context
limit_req zone=api_zone burst=20 nodelay;
For our case, targeting /json_api/ specifically:
http {
    # Define rate limiting zone (stores ~160k IPs in 10MB)
    limit_req_zone $binary_remote_addr zone=api_zone:10m rate=50r/m;

    server {
        location /json_api/ {
            # Apply rate limiting with burst handling
            limit_req zone=api_zone burst=20 nodelay;

            # Continue with proxy_pass or other directives
            proxy_pass http://api_backend;
        }

        location /static/ {
            # No rate limiting for static assets
            alias /var/www/static/;
        }
    }
}
The burst parameter allows clients to temporarily exceed the rate limit, while nodelay processes requests within the burst capacity immediately:

rate=50r/m: Core limit (50 requests per minute)
burst=20: Accepts up to 20 excess requests before rejecting further ones
nodelay: Processes burst requests immediately instead of delaying them
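To build intuition for how rate, burst, and nodelay interact, the accounting can be sketched in a few lines of Python. This is a simplified model of the leaky-bucket bookkeeping, not NGINX's actual implementation:

```python
# Simplified model of NGINX's leaky-bucket accounting: each request
# adds 1 to an "excess" counter that drains at the configured rate;
# a request is rejected once the excess would exceed the burst size.

RATE_PER_SEC = 50 / 60.0   # rate=50r/m
BURST = 20                 # burst=20

def simulate(timestamps):
    """Classify each request (timestamps in seconds, ascending) as 'ok' or 'rejected'."""
    excess = 0.0
    last = None
    results = []
    for t in timestamps:
        if last is not None:
            excess = max(0.0, excess - (t - last) * RATE_PER_SEC)
        last = t
        if excess <= BURST:          # with nodelay: served immediately
            results.append("ok")
            excess += 1.0
        else:                        # over rate + burst capacity: rejected
            results.append("rejected")
    return results

burst_of_25 = simulate([0.0] * 25)
print(burst_of_25.count("ok"), burst_of_25.count("rejected"))  # -> 21 4
```

Note that 21 requests pass (one at the base rate plus the 20-request burst allowance), and the counter drains at ~0.83 requests/second, so capacity recovers within seconds.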
Add logging to track throttling events ($limit_req_status is available since NGINX 1.17.6):

http {
    log_format throttled '$remote_addr - $request [$time_local] '
                         '[LIMIT] $limit_req_status';

    server {
        access_log /var/log/nginx/throttled.log throttled;
    }
}
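To analyze that log offline, a small script can tally rejections per client. This is a hypothetical helper, and the sample lines are illustrative; $limit_req_status reports values such as PASSED, DELAYED, and REJECTED:

```python
import re
from collections import Counter

# Matches the "throttled" log format defined above:
#   $remote_addr - $request [$time_local] [LIMIT] $limit_req_status
LINE_RE = re.compile(r'^(\S+) - (.+) \[(.+?)\] \[LIMIT\] (\S+)$')

def rejected_counts(lines):
    """Count REJECTED entries per client IP."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group(4) == "REJECTED":
            counts[m.group(1)] += 1
    return counts

# Illustrative sample lines
sample = [
    '203.0.113.7 - GET /json_api/users HTTP/1.1 [10/Oct/2024:13:55:36 +0000] [LIMIT] REJECTED',
    '203.0.113.7 - GET /json_api/users HTTP/1.1 [10/Oct/2024:13:55:37 +0000] [LIMIT] PASSED',
]
print(rejected_counts(sample))  # -> Counter({'203.0.113.7': 1})
```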
For additional security, layer a stricter limit onto traffic from high-abuse regions. Country lookups use the GeoIP module's $geoip_country_code variable (the geo directive matches IP ranges, not country codes), and because an empty key exempts a request from a zone, the flagged regions must map to a non-empty key:

# Requires ngx_http_geoip_module for $geoip_country_code
map $geoip_country_code $limited_country {
    default 0;
    # High-abuse regions
    CN 1;
    RU 1;
    IN 1;
}

map $limited_country $strict_limit_key {
    0 "";                    # empty key: exempt from this zone
    1 $binary_remote_addr;   # flagged regions: counted per IP
}

# Applied alongside api_zone; tighten the rate as needed
limit_req_zone $strict_limit_key zone=strict_zone:10m rate=10r/m;

Add a second limit_req directive for strict_zone in the /json_api/ location so both limits apply.
When clients exceed the limit, NGINX returns 503 (Service Unavailable) by default. A 429 (Too Many Requests) status is more appropriate for throttling, and we can customize the response:

location /json_api/ {
    limit_req zone=api_zone burst=20 nodelay;
    limit_req_status 429;             # more appropriate HTTP status
    error_page 429 /custom_429.json;  # serve a JSON error body

    proxy_pass http://api_backend;
}
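On the client side, a well-behaved consumer should treat 429 as a signal to back off rather than retry immediately. A minimal sketch using only the Python standard library (the Retry-After handling and retry counts are illustrative assumptions; adapt to your client stack):

```python
import time
import urllib.error
import urllib.request

def get_with_backoff(url, max_tries=5, base_delay=1.0):
    """GET url, retrying with exponential backoff while the server answers 429."""
    for attempt in range(max_tries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # other HTTP errors are not retried here
            # Honor Retry-After when present, else back off 1s, 2s, 4s, ...
            delay = float(err.headers.get("Retry-After") or base_delay * 2 ** attempt)
            time.sleep(delay)
    raise RuntimeError(f"still throttled after {max_tries} attempts")
```

The same pattern applies to the browser-side JavaScript consumers mentioned earlier: inspect the status code and schedule a delayed retry instead of re-firing the request in a loop.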
For APIs with different access tiers (free vs. premium users), give each request a non-empty key in exactly one zone; an empty key exempts it from the other:

map $http_apikey $free_key {
    default $binary_remote_addr;     # unknown clients: free tier, per IP
    "PREMIUM_KEY_123" "";            # premium clients skip the free zone
}

map $http_apikey $premium_key {
    default "";                      # non-premium clients skip this zone
    "PREMIUM_KEY_123" $http_apikey;  # premium clients: counted per API key
}

limit_req_zone $free_key zone=free_tier:10m rate=50r/m;
limit_req_zone $premium_key zone=premium_tier:10m rate=500r/m;

Apply both limit_req directives in the API location; each request is then counted against only one zone.
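The partitioning relies on one rule: limit_req_zone only counts a request against a zone when its key for that zone is non-empty. A quick sketch of the intended key assignment (PREMIUM_KEY_123 is the placeholder key from the config above):

```python
# Mirror of the empty-key convention: a request counts against a zone
# only when its key for that zone is non-empty.
def tier_keys(api_key: str, client_ip: str) -> tuple[str, str]:
    """Return (free_tier_key, premium_tier_key) for a request."""
    if api_key == "PREMIUM_KEY_123":
        return "", api_key      # exempt from free tier, counted as premium
    return client_ip, ""        # counted in free tier per IP

print(tier_keys("PREMIUM_KEY_123", "203.0.113.7"))  # -> ('', 'PREMIUM_KEY_123')
print(tier_keys("", "203.0.113.7"))                 # -> ('203.0.113.7', '')
```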
• Test with nginx -t before applying changes
• Monitor memory usage of your rate limit zones
• Consider including $server_name in the key for complex multi-server setups
• For high-traffic APIs, combine rate limiting with application-level caching