- Red Hat Enterprise Linux (RHEL) 7
- Red Hat Enterprise Linux (RHEL) 6
- Red Hat Enterprise Linux (RHEL) 5
- Red Hat Enterprise Linux (RHEL) 4
The following errors are seen in the system logs:

Nov 24 19:01:45 localhost sshd2: fatal: setresuid 20054: Resource temporarily unavailable
Nov 24 19:06:04 localhost sshd2: Disconnecting: fork failed: Resource temporarily unavailable
Nov 24 19:14:46 localhost sshd2: Disconnecting: fork failed: Resource temporarily unavailable
The following errors are observed in /var/log/messages:
Nov 24 19:08:51 localhost sshd: error: fork: Resource temporarily unavailable
The "fork failed: Resource temporarily unavailable" error is logged by other processes as well:
Nov 24 12:59:14 localhost multipathd: fork failed: Resource temporarily unavailable
Nov 24 12:59:14 localhost multipathd: fork failed: Resource temporarily unavailable
Nov 24 12:59:14 localhost multipathd: fork failed: Resource temporarily unavailable
Nov 24 13:00:18 localhost udevd: udev_event_run: fork of child failed: Resource temporarily unavailable
Nov 24 13:00:18 localhost udevd: udev_event_run: fork of child failed: Resource temporarily unavailable
There can be various reasons for processes not being able to fork, and thus there are also various resolutions:
- There may be misbehaving services or processes running. Always confirm that the number of processes, threads, or the memory consumption is expected in your use case before raising any limits. Raising limits without fixing the root cause of the process, thread, or memory leak may lead to worse, unpredictable consequences.
- When the system runs into a limitation on the number of processes, increase the nproc limit in /etc/security/limits.conf or /etc/security/limits.d/90-nproc.conf, depending on the RHEL version. The limit can be increased for a specific user or for all users. For example:

<user> - nproc 2048    <<<----[ Only for the "<user>" user ]
*      - nproc 2048    <<<----[ For all users ]
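The nproc limit from limits.conf only takes effect for new login sessions. A quick way to confirm the active limits for the current session (a generic check, not tied to any particular user) is:

```shell
# Check the effective per-user process limits for the current session.
# "ulimit -Su" shows the soft nproc limit; "ulimit -Hu" shows the hard limit.
soft=$(ulimit -Su)
hard=$(ulimit -Hu)
echo "soft nproc: $soft"
echo "hard nproc: $hard"
```

If the output does not reflect the values from limits.conf, log out and back in, and check that pam_limits is applied for the login path in use.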
- When the system runs into an out-of-memory situation, locate the application with the memory leak. Consider running valgrind against the application, which will report any memory leaks found, or contact your application vendor.
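Before attaching valgrind, a snapshot of memory use sorted by resident set size (RSS) can narrow down which process to examine; a process whose RSS keeps growing across repeated snapshots is a leak candidate:

```shell
# Show the ten processes with the largest resident set size (RSS, in KiB).
# Run this repeatedly: steady RSS growth in one process suggests a leak.
top_mem=$(ps -eo pid,rss,comm --sort=-rss | head -n 11)
echo "$top_mem"
```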
- In RHEL 6, use cgroups to limit access to resources for processes; please refer to the separate knowledge base article on cgroups.
- Check the total number of threads and processes running on the server:
[root@host ~]# ps -eLf | wc -l
- For example, if the above result is 32,000, then increase kernel.pid_max. kernel.pid_max must be larger than the total number of simultaneous threads and processes.
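The comparison can be made directly by reading kernel.pid_max from /proc; raising it requires root, so the sysctl line is shown commented out (the value 65536 is only an example):

```shell
# Compare the number of scheduling entities (processes + threads)
# against the kernel.pid_max ceiling. Both reads are unprivileged.
threads=$(ps -eLf --no-headers | wc -l)
pid_max=$(cat /proc/sys/kernel/pid_max)
echo "threads=$threads pid_max=$pid_max"
# To raise the limit (as root), for example:
#   sysctl -w kernel.pid_max=65536
# and persist it across reboots in /etc/sysctl.conf:
#   kernel.pid_max = 65536
```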
- In RHEL 6, also check /etc/security/limits.d/90-nproc.conf. Setting nproc in /etc/security/limits.conf has no effect in Red Hat Enterprise Linux 6.
- There could be an Oracle database running on the server. The Oracle Installation Guide usually requires modifications to /etc/profile and other files related to the number of processes. Double-check whether the required modifications were made on your system. Use the Oracle Installation Guide for your version of Oracle and complete the section related to changes in /etc/security/limits.conf and other files. Such an article should be in Oracle Metalink, for example:
- Note 339510.1: Requirements for Installing Oracle 10gR2 RDBMS on RHEL 4 on AMD64/EM64T
- Note 353529.1: Requirements for Installing Oracle 9iR2 64-bit on RHEL 4 x86-64 (AMD64/EM64T)
There can be various reasons for processes not being able to fork:
- There is a misbehaving service or process running, consuming more resources than expected.
- The system was not able to create new processes because of the limits set for nproc in /etc/security/limits.conf or /etc/security/limits.d/*.
- The system ran out of memory and new processes were unable to start because they could not allocate memory.
- There is not an available ID to assign to the new process. A unique value less than kernel.pid_max must be available.
- Check with sar whether all memory was used or whether a large number of processes was spawned.
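Where sar history is not available, an immediate snapshot (a rough live check, not a replacement for sar's historical data) can indicate whether memory exhaustion or process count is the more likely cause:

```shell
# Quick live snapshot: free memory/swap and the total process count.
# (MemAvailable may be absent on older RHEL kernels; the other fields
# will still print.)
grep -E '^(MemFree|MemAvailable|SwapFree):' /proc/meminfo
nprocs=$(ps -e --no-headers | wc -l)
echo "total processes: $nprocs"
```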
A lack of system resources may cause some of the system daemons to die. This would show up as applications segfaulting. Note that in the case of a memory leak, the first application to segfault is not necessarily the one causing the leak.
To check the number of processes against what is allowed for the user, compare the output of ulimit -u (the limit set for that particular user) with the number of processes the user is running.
You can run the command below to find the number of processes opened by every user and check whether the limit defined in /etc/security/limits.conf or /etc/security/limits.d/* is exceeded:

$ ps --no-headers auxwwwm | cut -f1 -d' ' | sort | uniq -c | sort -n
      2 chrony
      2 dbus
      2 dnsmasq
      4 rtkit
     17 polkitd
    396 root
    531 user
Increase the value for the "nproc" parameter in /etc/security/limits.conf.
If the system is running an Oracle database, check that the /etc/security/limits.conf, /etc/profile and /etc/pam.d/login files are modified according to the Oracle Installation Guide for the installed version of Oracle. Check and perform the steps from the following section:
- Configure the Unix user process and file limits
The steps may be similar to following examples (assuming that the "oracle" Unix user will perform the installation and run the database):
Add the following settings to /etc/security/limits.conf:

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
Add the following lines to /etc/profile:

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
- Red Hat Enterprise Linux
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.