How to Fix “could not build map_hash” Error in Nginx When Using Large Redirect Maps



When working with large redirect maps (2,000+ rules) in Nginx, you might encounter this frustrating error during configuration testing:

nginx: [emerg] could not build map_hash, you should increase map_hash_bucket_size: 64

Nginx uses hash tables to look up map values efficiently. The error occurs when your map data exceeds the default hash table parameters:

  • map_hash_max_size (default 2048) - maximum size of the hash table
  • map_hash_bucket_size (default 32, 64, or 128, depending on the processor's cache line size) - size of each bucket in bytes; a bucket must hold the longest key plus per-entry overhead
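
For reference, the included map file is just a list of "old new;" pairs, one per line (the paths below are illustrative):

# /etc/nginx/conf.d/redirects.map
/old-page         /new-page;
/2019/launch      /blog/launch;
/legacy/pricing   /pricing;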

For a redirect map of roughly 200 KB, try these settings:

http {
    map_hash_max_size 32768;
    map_hash_bucket_size 256;
    
    map $uri $new_uri {
        include /etc/nginx/conf.d/redirects.map;
    }
}

1. Placement matters: These directives must sit directly in the http context, not inside server or map blocks

2. Bucket sizing: The bucket size should be a multiple of the processor's cache line size (usually 64 bytes); powers of two such as 128, 256, or 512 all qualify (you can check your cache line size as shown after this list)

3. Performance tuning:

# For extremely large maps (1MB+)
map_hash_max_size 131072;
map_hash_bucket_size 512;
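
To check the cache line size referenced above, most Linux systems expose it via getconf:

getconf LEVEL1_DCACHE_LINESIZE   # typically prints 64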

After making changes:

sudo nginx -t
sudo systemctl reload nginx
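
To double-check that the new values are in the effective configuration, nginx can dump and filter it (a quick sketch):

sudo nginx -T | grep map_hash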

For massive redirect sets (10,000+ rules), consider:

  • Database-backed redirects using Lua or other scripting
  • Splitting into multiple map files (see the sketch after this list)
  • Using rewrite rules with pattern matching
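
A split layout might look like this (the file names are illustrative); note that all included files still feed a single map and a single hash table, so the sizing directives above still apply:

map $uri $new_uri {
    include /etc/nginx/conf.d/redirects-blog.map;
    include /etc/nginx/conf.d/redirects-products.map;
    include /etc/nginx/conf.d/redirects-legacy.map;
}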

Digging deeper: when working with large URI mapping files in Nginx (2,000+ entries in this case), the default hash table configuration becomes inadequate. The error message points at map_hash_bucket_size, but in practice map_hash_max_size usually needs to grow as well.

The initial configuration attempt shows:

map $uri $new_uri {
    include /etc/nginx/conf.d/redirects.map;
}

And the error persists even after applying seemingly large values, because inside a map block these lines are not parsed as directives at all (the map module treats them as just another key/value pair); they must sit directly in the http context:

map $uri $new_uri {
    map_hash_max_size 262144;
    map_hash_bucket_size 262144;
    include /etc/nginx/conf.d/redirects.map;
}

The solution requires understanding Nginx's hash table mechanics:

  • map_hash_bucket_size must be large enough to hold your longest key plus per-entry overhead (a few dozen bytes per entry, not per character); round it up to a multiple of the cache line size, e.g. 128 or 256
  • map_hash_max_size should comfortably exceed the total number of entries in the map
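
As a rough worked sizing example (the 120-byte key length is purely illustrative):

longest key in the map    ~120 bytes
per-entry overhead        ~a few dozen bytes
bucket size to configure  120 + overhead, rounded up to the next comfortable multiple of 64, e.g. 256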

Here's a recommended configuration for large redirect maps:

http {
    map_hash_bucket_size 128;  # For URIs up to ~80 characters
    map_hash_max_size 32768;   # For thousands of entries
    
    map $uri $new_uri {
        include /etc/nginx/conf.d/redirects.map;
    }
}

For a production environment with 5,000 redirects:

http {
    # Adjust based on your longest URI
    map_hash_bucket_size 256;
    
    # Comfortably above the number of entries (5,000 in this example)
    map_hash_max_size 204800;
    
    server {
        listen 80;
        
        location / {
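            # 'if' is safe here because the block only returns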
            if ($new_uri) {
                return 301 $new_uri;
            }
        }
    }
    
    map $uri $new_uri {
        include /path/to/large_redirect_map.conf;
    }
}

A few additional tips:

  • Check your longest map line (a safe upper bound on key length) with: awk '{print length, $0}' redirects.map | sort -nr | head -1
  • Monitor memory usage after changes
  • Consider breaking very large maps into multiple smaller files
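
Once nginx -t passes and the reload succeeds, a quick spot check with curl confirms a known entry redirects (the path below is illustrative):

curl -sI http://localhost/old-page | grep -i '^location'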

While increasing these values solves the immediate error, be aware that:

  • Higher values consume more memory during Nginx startup
  • The hash tables are rebuilt on configuration reload
  • For extremely large maps (10,000+ entries), consider alternative approaches
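
As one example of the pattern-matching alternative, when old and new URLs differ only by a predictable structure, a single regex rewrite can stand in for many individual map entries (the pattern is illustrative):

server {
    # One pattern-based rule replaces many one-to-one map entries
    rewrite ^/blog/(\d+)/(.*)$ /archive/$1/$2 permanent;
}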