How to Implement LDAP Paged Results with ldapsearch to Bypass Server Size Limits


When working with LDAP directories containing more entries than the server's configured size limit (typically 500 by default in slapd.conf), standard searches hit a brick wall. The fundamental challenge arises when you need to:

  • Export complete directory data for backup purposes
  • Process large result sets programmatically
  • Maintain operation continuity despite server-side restrictions

The LDAPv3 protocol provides the paged results control (1.2.840.113556.1.4.319) specifically for this scenario. The control works by:

Client: sends the search request with the paged results control attached
Server: returns one page of results plus an opaque cookie
Client: resends the same request, echoing back the cookie from the previous response
Server: returns the next page; the exchange repeats until the cookie comes back empty
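
A quick way to watch this exchange from the client side is ldapsearch's interactive prompt mode: after each page it pauses and waits for Enter before sending the server's cookie back for the next page (hostname and base DN below are placeholders):

ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" \
    -E pr=100/prompt "(objectClass=*)"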

Here's the proper way to implement paged searches using OpenLDAP's ldapsearch:

#!/bin/bash
HOST="ldap.example.com"
PORT=389
BASE="dc=example,dc=com"
PAGE_SIZE=100

# ldapsearch drives the paged-results cookie exchange itself: with
# /noprompt it keeps returning the cookie to the server until an empty
# cookie comes back, so one invocation walks the entire result set.
ldapsearch -x -LLL -H "ldap://$HOST:$PORT" \
    -b "$BASE" -E "pr=$PAGE_SIZE/noprompt" \
    "(objectClass=*)" "*" "+" > export.ldif
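
Assuming the script is saved as paged-export.sh (the name is just for illustration), a quick sanity check is to compare the exported entry count against what you expect:

./paged-export.sh
grep -c '^dn:' export.ldif    # one "dn:" line per exported entry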

Size Limit Still Hit? Ensure your LDAP server actually supports the paged results control, and that its limits let a paged search walk past the soft size limit. On OpenLDAP this is governed by the limits directive (size.pr caps the page size, size.prtotal caps the total across all pages):

# slapd.conf configuration for OpenLDAP
limits users size.soft=500 size.pr=100 size.prtotal=unlimited
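
You can check whether the control is advertised at all by reading the server's root DSE; the paged results OID should be listed among the supportedControl values (hostname below is a placeholder):

ldapsearch -x -H ldap://ldap.example.com -b "" -s base \
    "(objectClass=*)" supportedControl | grep 1.2.840.113556.1.4.319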

Performance Considerations:

  • Smaller page sizes (50-200) work best for most directories
  • Add -LLL to ldapsearch for cleaner, machine-readable output
  • Store intermediate results when processing large directories

For environments where the paged results control isn't available:

# Using timestamp-based chunking (LDAP filters support >= and <= only, not <)
ldapsearch -b "$BASE" \
    "(&(createTimestamp>=20230101000000Z)(!(createTimestamp>=20230201000000Z)))"

# Using uidNumber ranges (for POSIX systems)
for i in {0..20}; do
    start=$((i*1000))
    end=$((start+999))
    ldapsearch -b "$BASE" "(&(uidNumber>=$start)(uidNumber<=$end))"
done
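
If a single time window still returns more entries than the server allows, the same idea can be driven window by window from a small script. A sketch, assuming GNU date and a readable createTimestamp attribute:

#!/bin/bash
# Export one month of entries at a time, keyed on createTimestamp.
BASE="dc=example,dc=com"
start="2023-01-01"
while [[ "$start" < "2024-01-01" ]]; do
    next=$(date -d "$start +1 month" +%Y-%m-%d)
    lo=$(date -d "$start" +%Y%m%d000000Z)
    hi=$(date -d "$next" +%Y%m%d000000Z)
    # LDAP filters have no "<", so the upper bound is a negated ">="
    ldapsearch -x -H ldap://ldap.example.com -b "$BASE" -LLL \
        "(&(createTimestamp>=$lo)(!(createTimestamp>=$hi)))" >> backup.ldif
    start="$next"
done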


When working with large LDAP directories, you'll frequently encounter server-side size limits that can't be modified (typically set to 500 entries in slapd.conf). The standard error Size limit exceeded (4) appears even when attempting paged results with the -E pr= option.

The paged results control (pr=) operates within the server's configured size limit. If your directory contains 10,000 entries and the server limit is 500, you'll still hit the ceiling. The paging control only helps manage network traffic, not bypass server restrictions.
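
A quick way to see which situation you're in is to count how many entries a paged search actually returns; if the count stops at the server's limit, paging alone won't get you further and you'll need the chunked approach below (hostname and base DN are placeholders):

ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" -LLL \
    -E pr=100/noprompt "(objectClass=*)" 1.1 | grep -c '^dn:'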

For reliable large-scale backups, combine paging with attribute-based range queries. This method works with most LDAP servers:

#!/bin/bash
HOST="ldap.example.com"
PORT=389
BASE="dc=example,dc=com"
ATTR="uidNumber"  # Must be indexed attribute
MAX=500000        # Estimate maximum value
STEP=1000         # Chunk size

for ((i=0; i<=MAX; i+=STEP)); do
    ldapsearch -x -H "ldap://$HOST:$PORT" -b "$BASE" \
        -LLL -E pr=100/noprompt \
        "(&($ATTR>=$i)($ATTR<=$((i+STEP-1))))" >> backup.ldif
done
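
A small variation makes the export resumable: write each range to its own file and skip chunks that already exist, so an interrupted run can be restarted without re-fetching everything. The file naming below is just illustrative:

#!/bin/bash
HOST="ldap.example.com"; PORT=389
BASE="dc=example,dc=com"
ATTR="uidNumber"; MAX=500000; STEP=1000

mkdir -p chunks
for ((i=0; i<=MAX; i+=STEP)); do
    out="chunks/${ATTR}-${i}.ldif"
    [[ -s "$out" ]] && continue            # chunk fetched on a previous run
    ldapsearch -x -H "ldap://$HOST:$PORT" -b "$BASE" -LLL \
        -E pr=100/noprompt \
        "(&($ATTR>=$i)($ATTR<=$((i+STEP-1))))" > "$out" \
        || rm -f "$out"                    # discard partial output on failure
done
# Lexical (not numeric) order is fine for a backup
cat chunks/*.ldif > backup.ldif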

For newer OpenLDAP servers (2.4+) with the slapo-sssvlv overlay loaded, the virtual list view (VLV) control offers another option: it returns an arbitrary window of a server-side-sorted result set:

# Fetch the first 1,000 entries of the uidNumber-sorted result set
# (VLV requires the server-side sort control, hence the sss extension)
ldapsearch -x -H "ldap://$HOST:$PORT" -b "$BASE" -LLL \
    -E 'sss=uidNumber' -E 'vlv=0/999:1/0' \
    "(objectClass=*)" > backup.ldif

General tips for this approach:

  • Always use an indexed attribute for range queries
  • Combine with -LLL for clean LDIF output
  • Test chunk sizes (STEP value) for optimal performance
  • Consider parallel execution for very large directories (see the sketch below)
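
For the last point, a sketch of parallel chunk fetches with xargs; the concurrency (-P 4), the range width, and the host/base are assumptions to adapt:

# Fetch 1000-wide uidNumber ranges four at a time, one file per range.
seq 0 1000 499000 | xargs -P 4 -I{} sh -c '
    ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" -LLL \
        "(&(uidNumber>={})(uidNumber<=$(({}+999))))" > "chunk-{}.ldif"'
cat chunk-*.ldif > backup.ldif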

For Microsoft Active Directory, the problem is slightly different: AD caps how many values of a multi-valued attribute (such as a big group's member list) it returns in one response. Use the attribute range option to retrieve the values in slices:

ldapsearch -x -H ldap://ad.example.com -b "$BASE" \
    "(objectClass=group)" "member;range=0-999"