Many developers encounter this frustrating scenario: you've properly configured /etc/security/limits.conf and /etc/sysctl.conf, but your CentOS system stubbornly refuses to honor the new file descriptor limits. The symptom typically looks like this:
# Current limits remain unchanged despite configuration
$ ulimit -Sn
1024
$ ulimit -Hn
4096
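Note the relationship between the two values: an unprivileged process may raise its soft limit at any time, but only up to the hard limit. A quick demonstration in a throwaway subshell (so the current shell's limits are untouched):

```shell
# Raise the soft limit to the hard limit inside a subshell; the parent
# shell's limits are unchanged when the subshell exits.
( ulimit -Sn "$(ulimit -Hn)" && echo "soft raised to: $(ulimit -Sn)" )
```

This is why configuring only the soft limit is rarely enough: the hard limit is the real ceiling, and it can only be raised by root or via PAM/systemd at session start.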
First, verify all potential configuration locations that might override your settings:
# Check system-wide file-max setting
$ cat /proc/sys/fs/file-max
100000
# Verify limits.conf entries
$ grep -r "username" /etc/security/
/etc/security/limits.conf:username soft nofile 6000
/etc/security/limits.conf:username hard nofile 65535
# Examine limits.d directory
$ ls -l /etc/security/limits.d/
total 4
-rw-r--r--. 1 root root 191 Mar 7 2022 90-nproc.conf
The most common culprit is PAM (Pluggable Authentication Modules) not loading the limits properly. Check your PAM configuration:
# Verify pam_limits.so is included
$ grep pam_limits /etc/pam.d/*
/etc/pam.d/login:session required pam_limits.so
/etc/pam.d/sshd:session required pam_limits.so
/etc/pam.d/su:session required pam_limits.so
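If any of these lines are missing, pam_limits.so is skipped for that login path and limits.conf is never read. A read-only check across the usual CentOS/RHEL service files:

```shell
# Report which PAM service files load pam_limits.so (read-only check).
for f in /etc/pam.d/login /etc/pam.d/sshd /etc/pam.d/su; do
  if [ -f "$f" ] && grep -q 'pam_limits\.so' "$f"; then
    echo "OK: $f loads pam_limits.so"
  else
    echo "MISSING: $f does not load pam_limits.so"
  fi
done
```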
If PAM is configured correctly but limits still don't apply, try these approaches:
# Solution 1: Add to user's .bashrc (temporary)
echo "ulimit -Sn 6000" >> ~/.bashrc
# Solution 2: Systemd service override (for services)
mkdir -p /etc/systemd/system/[servicename].service.d
cat > /etc/systemd/system/[servicename].service.d/limits.conf <<EOF
[Service]
LimitNOFILE=65535
EOF
systemctl daemon-reload
For permanent changes that survive reboots, ensure these files contain:
# /etc/sysctl.conf
fs.file-max = 100000
fs.nr_open = 100000
# /etc/security/limits.conf
* soft nofile 6000
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
After making changes, verify with:
# Apply sysctl changes
sysctl -p
# Check process limits
cat /proc/$$/limits | grep "Max open files"
# Alternative verification
prlimit --nofile --pid $$
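The two checks above can be condensed into a one-liner that extracts just the numbers; the awk field positions match the fixed-width layout of /proc/&lt;pid&gt;/limits on Linux:

```shell
# Print the soft (field 4) and hard (field 5) "Max open files" values
# for the current shell process.
awk '/Max open files/ {print "soft=" $4, "hard=" $5}' /proc/$$/limits
```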
For containerized environments or special cases:
# Docker specific configuration
docker run --ulimit nofile=6000:65535 [image]
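For Docker Compose deployments, the same setting goes in the service definition via the `ulimits` key (the service and image names here are illustrative):

```yaml
services:
  myapp:
    image: myimage
    ulimits:
      nofile:
        soft: 6000
        hard: 65535
```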
# Kubernetes pod specification
spec:
  containers:
  - name: myapp
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    securityContext:
      privileged: false
      procMount: Default
      capabilities:
        add: ["SYS_RESOURCE"]
Remember that some applications may need to be restarted to pick up the new limits, and systemd services often require explicit limit declarations in their unit files.
Many sysadmins face this exact scenario: you've properly configured both /etc/security/limits.conf and /etc/sysctl.conf, yet your user sessions stubbornly refuse to honor the new file descriptor limits. Let's break down why this happens and how to force the system to comply.
The complete solution requires changes in three places:
# /etc/security/limits.conf
* soft nofile 6000
* hard nofile 65535
# /etc/sysctl.conf
fs.file-max = 100000
fs.nr_open = 100000
# /etc/systemd/system.conf (for systemd systems)
DefaultLimitNOFILE=65535
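Note that edits to /etc/systemd/system.conf take effect only after `systemctl daemon-reexec` or a reboot. To confirm what default the manager currently applies to units it starts (assuming a systemd host):

```shell
# Show the default NOFILE limit systemd applies to new units.
systemctl show --property=DefaultLimitNOFILE
```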
On modern CentOS/RHEL systems using systemd, you must ensure PAM reads your limits:
# /etc/pam.d/login
session required pam_limits.so
# /etc/pam.d/sshd (for SSH sessions)
session required pam_limits.so
For services running under systemd, create override files:
# /etc/systemd/system/[service].service.d/override.conf
[Service]
LimitNOFILE=65535
After making changes, run these commands:
# Reload systemd manager configuration
sudo systemctl daemon-reload
# Apply sysctl settings
sudo sysctl -p
# Check kernel-level limit
cat /proc/sys/fs/file-max
# Check user limits (from new session)
ulimit -Sn
ulimit -Hn
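A small script makes this check repeatable; the target value 6000 matches the soft limit configured above:

```shell
# Compare the live soft limit in this session against the configured target.
want=6000
have=$(ulimit -Sn)
if [ "$have" = "unlimited" ] || [ "$have" -ge "$want" ]; then
  echo "soft nofile OK (current: $have)"
else
  echo "soft nofile is still $have; expected at least $want (log in again?)"
fi
```

Remember to run this from a *new* login session; limits applied by PAM never retroactively affect shells that were already open.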
If limits still don't apply:
- Check for conflicting settings in /etc/security/limits.d/*.conf
- Ensure no shell startup files (~/.bashrc, ~/.bash_profile) override limits
- For SSH sessions, verify UsePAM yes in /etc/ssh/sshd_config
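The sshd point can be checked without opening the file by hand (the path is the stock OpenSSH location; RHEL/CentOS packages ship with UsePAM yes by default):

```shell
# Read-only: report sshd's UsePAM setting, or note its absence.
grep -i '^UsePAM' /etc/ssh/sshd_config 2>/dev/null \
  || echo "UsePAM not set explicitly in sshd_config"
```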
For services like Nginx or MySQL that need high FD limits:
# Example for Nginx
sudo systemctl edit nginx.service
# Add these lines in the editor that opens:
[Service]
LimitNOFILE=65535
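After saving the override, systemd must reload its configuration and the service must restart before the new limit takes effect. The verification below assumes nginx is the target service, as in the example above:

```shell
# Apply the override and confirm systemd will use it for this unit.
sudo systemctl daemon-reload
sudo systemctl restart nginx
systemctl show -p LimitNOFILE nginx.service
```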