When running MongoDB under heavy load, you might encounter these critical errors in logs:
[initandlisten] pthread_create failed: errno:11 Resource temporarily unavailable
[initandlisten] can't create new thread, closing connection
The errno 11 (EAGAIN) from pthread_create means the kernel refused to create another thread, almost always because the process has hit the per-user process limit (nproc).
Checking the current limits shows the core issue:
$ cat /proc/$(pgrep mongod)/limits | grep processes
Max processes             1024                 30000                processes
$ ulimit -u
1024
Despite both the soft and hard limits being set to 30000 in /etc/security/limits.conf, only the hard limit is applied; the soft limit stays at the 1024 default.
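The distinction matters: a process may lower its soft limit freely and raise it again only up to the hard limit, while lowering the hard limit is one-way for unprivileged processes. A quick sketch in a throwaway shell (no mongod required; 512 is an arbitrary demo value):

```shell
# Spawn a child shell, set its soft nproc limit, and show that the
# hard limit is untouched. The child's change does not affect us.
bash -c '
  ulimit -S -u 512               # set the soft limit (allowed up to hard)
  echo "child soft=$(ulimit -S -u) hard=$(ulimit -H -u)"
'
echo "parent soft=$(ulimit -S -u)"   # parent shell is unaffected
```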
On Linux systems, there are actually three layers of process limits:
- System-wide limit in /proc/sys/kernel/threads-max
- User-level limits in /etc/security/limits.conf
- PAM session initialization (pam_limits), which is what actually applies those limits at login
The issue typically occurs because:
- systemd ignores limits.conf for service units
- On Amazon Linux, PAM may not properly initialize the soft limit
- The mongod service might be started before PAM applies the limits
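To see where any given process actually stands, you can read its limits straight out of /proc. A minimal sketch, using this shell's own PID ($$) as a stand-in for $(pgrep mongod):

```shell
# Print the soft and hard nproc limits of a given PID directly from
# /proc/<pid>/limits (fields 3 and 4 of the "Max processes" line).
pid=$$
awk '/^Max processes/ {print "soft="$3, "hard="$4}' "/proc/$pid/limits"
```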
Here's how to properly set the limits for MongoDB:
1. Direct Service Configuration
For systemd systems (most modern Linux distros), create a drop-in override for the unit:
sudo systemctl edit mongod.service
Add the following in the editor that opens (systemctl edit reloads the daemon for you on save):
[Service]
LimitNPROC=30000
2. Alternative PAM Configuration
pam_limits also reads drop-ins under /etc/security/limits.d/. Note that these apply only to processes started through a PAM session (interactive logins, init scripts that use su), not to systemd services:
sudo tee /etc/security/limits.d/mongod.conf <<EOF
mongod soft nproc 30000
mongod hard nproc 30000
EOF
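To confirm which nproc rules pam_limits will actually see across the main file and the drop-in directory, a quick grep helps:

```shell
# List every nproc rule across limits.conf and its drop-in directory.
# -h hides filenames, -s silences errors for files that don't exist.
grep -hs nproc /etc/security/limits.conf /etc/security/limits.d/*.conf \
  || echo "no nproc rules found"
```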
3. Verification Script
Create a check script to verify that the limits were applied:
#!/bin/bash
# Report the limits actually in effect for the running mongod.
PID=$(pgrep -x mongod)
if [ -z "$PID" ]; then
    echo "mongod is not running" >&2
    exit 1
fi
echo "Current limits for mongod (PID $PID):"
grep processes "/proc/$PID/limits"
echo -e "\nSystem-wide thread limit:"
cat /proc/sys/kernel/threads-max
For high-performance MongoDB deployments, consider these additional settings:
# In /etc/sysctl.conf
kernel.pid_max = 4194303
kernel.threads-max = 4194303
# Then apply:
sudo sysctl -p
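Before raising them, it's worth checking the current values; these are kernel-wide ceilings shared by every process on the host, not per-user limits:

```shell
# Read the kernel-wide ceilings the sysctls above control.
echo "kernel.threads-max = $(cat /proc/sys/kernel/threads-max)"
echo "kernel.pid_max     = $(cat /proc/sys/kernel/pid_max)"
```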
After applying all changes, restart MongoDB and verify:
sudo systemctl restart mongod
cat /proc/$(pgrep mongod)/limits | grep processes
You should now see both soft and hard limits set correctly.
If the limits still don't stick, bear in mind that on modern Linux systems several mechanisms can override limits.conf:
- Systemd service unit files
- PAM configuration
- User session managers
- Security modules like SELinux
Putting the pieces together, the complete sequence for MongoDB is:
1. Create the systemd override file manually (this is what systemctl edit does for you):
sudo mkdir -p /etc/systemd/system/mongod.service.d
sudo tee /etc/systemd/system/mongod.service.d/limits.conf <<EOF
[Service]
LimitNPROC=30000
LimitNOFILE=350000
EOF
2. Reload systemd and restart MongoDB:
sudo systemctl daemon-reload
sudo systemctl restart mongod
3. Verify the new limits:
cat /proc/$(pgrep mongod)/limits | grep processes
Max processes             30000                30000                processes
If not using systemd, adjust the init script instead. Variable names vary by distribution and package version, so inspect /etc/init.d/mongod before running this:
sudo sed -i '/^ULIMIT=/c\ULIMIT="nproc 30000"' /etc/init.d/mongod
sudo service mongod restart
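If a restart is not immediately possible, util-linux's prlimit can change the limits of an already-running process; the change lasts only until the process exits, so the permanent fix above is still needed. A sketch run against this shell ($$) for safety; in practice you would target $(pgrep -x mongod) as root, with your real values instead of the demo 2048:4096:

```shell
# Change the soft and hard nproc limits of a live process without
# restarting it, then display the resulting RESOURCE/SOFT/HARD row.
prlimit --pid $$ --nproc=2048:4096
prlimit --pid $$ --nproc
```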
The Linux process limit hierarchy works like this:
- Systemd service settings (highest priority)
- PAM modules (including limits.conf)
- Shell configuration files
- Kernel defaults (lowest priority)
For production MongoDB deployments, I recommend setting limits at both the systemd and PAM levels for redundancy.