PostgreSQL Connection Pooling Optimization: Calculating max_connections and pgbouncer’s pool_size Parameters


When configuring PostgreSQL with pgbouncer, there's often confusion about how these parameters interact:

  • max_connections in postgresql.conf - hard limit of PostgreSQL connections
  • default_pool_size in pgbouncer.ini - connections maintained per database
  • max_client_conn in pgbouncer.ini - total client connections pgbouncer will accept

The key point people miss is that pgbouncer's default_pool_size should be lower than PostgreSQL's max_connections, not higher. Pgbouncer's entire purpose is to multiplex many client connections onto a smaller number of actual PostgreSQL connections.

Here's how to calculate these values for a production server:

# PostgreSQL max_connections calculation:
max_connections = (
    (available_ram - shared_buffers - other_processes) / 
    (work_mem + maintenance_work_mem + temp_buffers)
)

# Pgbouncer settings:
default_pool_size = CEILING(max_connections * 0.75 / num_databases)
max_client_conn = default_pool_size * num_databases * max_pool_multiplier
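
As a quick illustration of how these formulas plug together (the database count and pool multiplier here are assumed values for illustration, not taken from the configuration below):

# Assume max_connections = 200, num_databases = 5, max_pool_multiplier = 10
default_pool_size = CEILING(200 * 0.75 / 5) = 30
max_client_conn   = 30 * 5 * 10 = 1500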

For a server with 16GB RAM running PostgreSQL 14 and pgbouncer 1.16:

# postgresql.conf
max_connections = 200
shared_buffers = 4GB
work_mem = 8MB
maintenance_work_mem = 128MB

# pgbouncer.ini
[pgbouncer]
max_client_conn = 1000
default_pool_size = 20
max_db_connections = 150
reserve_pool_size = 5
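
The snippet above omits pgbouncer's [databases] section, which is where per-database connection strings (and optional per-database pool_size overrides) live; a minimal sketch, with the database name and host as placeholders rather than values from the original:

# pgbouncer.ini ('appdb' and the host are placeholders)
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb pool_size=20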

Key factors affecting these calculations:

  • Pool Mode: Session pooling requires a higher pool_size than transaction pooling, because each client holds a server connection for its entire session (see the snippet after this list)
  • Workload Type: OLTP (many short transactions) and OLAP (fewer, long-running queries) call for different pool sizes and work_mem budgets
  • Connection Churn: Applications that open and close connections frequently benefit most from pooling and may need larger pools
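
The pool mode itself is a single pgbouncer.ini setting; a minimal sketch of the two options:

# pgbouncer.ini
# session: each client holds one server connection for its entire session
# transaction: a server connection is borrowed only for the duration of each
#              transaction, so the same pool_size serves far more clients
pool_mode = transaction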

Use these queries to monitor connection usage:

-- PostgreSQL connection count
SELECT count(*) FROM pg_stat_activity;

-- Pgbouncer stats
SHOW POOLS;
SHOW STATS;
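
When the raw count is not enough, the same view can be broken down further using standard pg_stat_activity columns:

-- Connections by state (idle connections still count against max_connections)
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;

-- Connections per database
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;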

Adjust pool sizes based on these observations:

  • If you see many clients waiting, increase pool_size
  • If PostgreSQL connections approach max_connections, decrease pool_size
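
In the SHOW POOLS output, cl_waiting and maxwait are the columns to watch for the first case, while a consistently high sv_idle count with no waiting clients suggests the pool can safely shrink.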

For high-performance setups, consider:

# pgbouncer.ini optimizations
# close server connections idle for more than 10 minutes
server_idle_timeout = 600
# recycle server connections after 1 hour
server_lifetime = 3600
# give up on a connection attempt to PostgreSQL after 15 seconds
server_connect_timeout = 15
# wait 2 seconds before retrying after a failed server login
server_login_retry = 2

Remember to benchmark after each configuration change to verify improvements.


PostgreSQL's max_connections and PgBouncer's default_pool_size serve fundamentally different purposes. The confusion often stems from misunderstanding their interaction:

# PostgreSQL configuration (postgresql.conf)
max_connections = 100  # Maximum backend processes

# PgBouncer configuration (pgbouncer.ini)
max_client_conn = 200  # Maximum client connections
default_pool_size = 20 # Connections per database/user combo
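
Because PgBouncer creates one pool per database/user pair, the worst-case number of server connections is default_pool_size multiplied by the number of distinct database/user pairs, and that product has to stay below PostgreSQL's max_connections.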

For PostgreSQL max_connections:

  • RAM-based: (Total RAM - OS overhead) / (work_mem + maintenance_work_mem)
  • CPU-based: (CPU cores × 2) + effective_io_concurrency

Example calculation for 16GB RAM server:

available_ram = 16GB - 4GB (OS) = 12GB
work_mem = 64MB
maintenance_work_mem = 256MB
max_connections ≈ (12 × 1024) / (64 + 256) ≈ 38 connections

The golden ratio for PgBouncer configuration:

max_client_conn = (peak concurrent users × 1.2)
default_pool_size = (max_connections / (active_databases × active_users)) × 0.8

Real-world example for web application:

# Web app with 200 peak users, 3 databases, 2 roles
max_connections = 50  # From RAM calculation
max_client_conn = 240 # 200 × 1.2
default_pool_size = 8 # (50 / (3×2)) × 0.8 ≈ 6.67, rounded up with headroom
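
Sanity check: 3 databases × 2 roles = 6 pools, and 6 × 8 = 48 server connections at worst, which stays just under the max_connections = 50 ceiling.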

Critical monitoring metrics to validate your settings:

# PostgreSQL monitoring
SELECT datname, numbackends FROM pg_stat_database;

# PgBouncer stats
SHOW POOLS;
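
When the pool counters alone are not enough, the PgBouncer admin console also exposes per-connection detail: SHOW CLIENTS lists every client connection and SHOW SERVERS every server connection to PostgreSQL.

# PgBouncer admin console
SHOW CLIENTS;
SHOW SERVERS;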

Recommended connection lifetime and health-check settings:

# In pgbouncer.ini
# run the health-check query only on connections idle longer than 30 seconds
server_check_delay = 30
# recycle server connections after 1 hour
server_lifetime = 3600
# close server connections idle for more than 10 minutes
server_idle_timeout = 600

Common mistakes to avoid:

  • Setting pool_size > max_connections (will cause connection starvation)
  • Ignoring transaction pooling mode implications
  • Underestimating prepared statement handling
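
On the prepared-statement point: a named prepared statement exists only on the server connection that created it, so under transaction pooling a later transaction may be routed to a different backend where the statement is missing. Either prepare inside each transaction, disable server-side prepared statements in the driver, or use a pgbouncer release recent enough to track protocol-level prepared statements itself (the max_prepared_statements setting).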

For transaction-heavy workloads, consider:

# release the server connection back to the pool at the end of each transaction
pool_mode = transaction
# clean up session state when a connection is returned
server_reset_query = DISCARD ALL
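
Note that server_reset_query is only issued when a session-pooled connection is released; in transaction mode pgbouncer skips it unless server_reset_query_always is enabled, so here it mainly acts as a safety net for any databases still using session pooling.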