How to Check the Number of Open Files per User in Linux: Command Line Solutions



When troubleshooting system performance issues or monitoring resource usage in Linux, you might need to check how many files a specific user has opened. This is particularly important for:

  • Identifying resource hogs
  • Debugging "Too many open files" errors
  • Monitoring system limits
  • Detecting potential security issues

The most reliable way to check a user's open-file count is the lsof (List Open Files) command:

lsof -u username | wc -l

This command works by:

  1. -u username filters the output to files opened by the specified user
  2. wc -l counts the lines in that output

Note that lsof prints one header line, so the result is one higher than the true count; a variant that excludes it is shown below.
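
A minimal variant that skips the header line for an exact count (the user 'www-data' here is only an example):

lsof -u www-data 2>/dev/null | tail -n +2 | wc -l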

For systems where lsof isn't available, try these approaches:

Using /proc Filesystem

ls -l /proc/*/fd/* 2>/dev/null | awk -F'/' '{print $3}' | sort | uniq -c | sort -nr

This pipeline:

  1. Lists every file-descriptor symlink under /proc
  2. Extracts the PID (the third '/'-separated field of each path)
  3. Groups and counts the entries by PID, busiest first

Without root privileges you will only see your own processes. A variant that also annotates each PID with its owner follows below.
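
To see who owns the busiest PIDs, you can look each one up with ps (a sketch; ps -o user=,comm= prints the owner and command name):

ls -l /proc/*/fd/* 2>/dev/null | awk -F'/' '{print $3}' | sort | uniq -c | sort -nr | head -5 \
  | while read count pid; do
      printf "%s fds  PID %s (%s)\n" "$count" "$pid" "$(ps -o user=,comm= -p "$pid" 2>/dev/null)"
    done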

Per-Process Count

for pid in $(pgrep -u username); do echo "$(ls /proc/$pid/fd/ 2>/dev/null | wc -l) for PID $pid"; done
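
To make outliers easier to spot, the same loop can sort its output and include each process's command name (a sketch; 'username' is a placeholder as above):

for pid in $(pgrep -u username); do
  printf "%s open files  PID %s (%s)\n" "$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)" "$pid" "$(ps -o comm= -p "$pid")"
done | sort -rn | head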

Typical output from these commands shows:

  • Total open files count for the user
  • Breakdown per process (when using /proc method)
  • Potential file descriptor leaks (unusually high counts)

To monitor a user's open-file count in real time:

watch -n 5 "lsof -u apache | wc -l"

To get counts for multiple users:

for user in $(cut -d: -f1 /etc/passwd); do printf "%s: " "$user"; lsof -u "$user" 2>/dev/null | wc -l; done
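
Spawning lsof once per user is slow on a busy system; a faster equivalent runs lsof once and groups by its USER column (field 3 of the default output):

lsof 2>/dev/null | awk 'NR>1 {print $3}' | sort | uniq -c | sort -rn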

Common issues and solutions:

  • Permission denied errors: Run as root
  • No lsof installed: Use the /proc method
  • High counts: Check for file descriptor leaks

Remember that processes are also constrained by system limits. Check with:

ulimit -n
cat /proc/sys/fs/file-max

The first shows the soft per-process limit for your current shell; the second shows the kernel's system-wide maximum.
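
To see the limit that applies to an already-running process rather than your shell, read its limits file (1234 is a placeholder PID):

grep 'Max open files' /proc/1234/limits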

As a concrete example, to count the files opened by the 'nginx' service user with lsof:

lsof -u nginx | wc -l

For a more performant alternative (especially on busy systems, where lsof can be slow), sum file-descriptor counts directly from the proc filesystem:

for pid in $(pgrep -u username); do
  ls /proc/"$pid"/fd 2>/dev/null | wc -l
done | awk '{s+=$1} END {print s}'

This script sums open file descriptors across all processes of a user. Expect the total to be lower than lsof's, since lsof also lists non-descriptor entries such as cwd, txt, and memory-mapped files.

For detailed analysis of which processes are consuming file descriptors, group lsof's output by its PID column (field 2):

lsof -u mysql | awk 'NR>1 {print $2}' | sort | uniq -c | sort -rn

This lists each MySQL process ID with its open-file entry count, busiest first.

If counts seem abnormally high:

  1. Check for file descriptor leaks in applications (see the inspection sketch after this list)
  2. Look for runaway processes
  3. Verify if services are properly closing connections
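
One way to inspect a suspect process is to tally what its descriptors point at; a flood of socket or pipe entries suggests a connection leak (a sketch; replace 1234 with the suspect PID):

ls -l /proc/1234/fd 2>/dev/null | awk -F' -> ' 'NF==2 {print $2}' | sed 's/\[[0-9]*\]$//' | sort | uniq -c | sort -rn | head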

Here's a reusable bash script to track user file counts over time:

#!/bin/bash
# Log a timestamped open-file count for the user named in the first argument.
TARGET_USER="$1"
[ -z "$TARGET_USER" ] && { echo "usage: $0 <user>" >&2; exit 1; }
COUNT=$(lsof -u "$TARGET_USER" 2>/dev/null | wc -l)
DATE=$(date +"%Y-%m-%d %H:%M:%S")
echo "$DATE - $TARGET_USER has $COUNT files open" >> /var/log/user_fd_counts.log

Run it periodically via cron for historical data.
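
A crontab entry to run it every five minutes might look like this (the script path and the 'apache' user are assumptions):

*/5 * * * * /usr/local/bin/user_fd_log.sh apache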