What does the error message "fork failed: Resource temporarily unavailable" mean?

Solution Verified


  • Red Hat Enterprise Linux (RHEL) 8
  • Red Hat Enterprise Linux (RHEL) 7
  • Red Hat Enterprise Linux (RHEL) 6
  • Red Hat Enterprise Linux (RHEL) 5
  • Red Hat Enterprise Linux (RHEL) 4


  • The following errors are seen in /var/log/secure:

    Nov 24 19:01:45 localhost sshd2[23985]: fatal: setresuid 20054: Resource temporarily unavailable  
    Nov 24 19:06:04 localhost sshd2[28377]: Disconnecting: fork failed: Resource temporarily unavailable  
    Nov 24 19:14:46 localhost sshd2[4484]: Disconnecting: fork failed: Resource temporarily unavailable
  • The following errors are observed in /var/log/messages:

    Nov 24 19:08:51 localhost sshd[4243]: error: fork: Resource temporarily unavailable
  • Similar "fork failed: Resource temporarily unavailable" errors are also logged by other processes:

    Nov 24 12:59:14 localhost multipathd: fork failed: Resource temporarily unavailable 
    Nov 24 12:59:14 localhost multipathd: fork failed: Resource temporarily unavailable 
    Nov 24 12:59:14 localhost multipathd: fork failed: Resource temporarily unavailable 
    Nov 24 13:00:18 localhost udevd[2244]: udev_event_run: fork of child failed: Resource temporarily unavailable
    Nov 24 13:00:18 localhost udevd[2244]: udev_event_run: fork of child failed: Resource temporarily unavailable


There can be various reasons for processes being unable to fork, and therefore there are also various resolutions:

  • There may be misbehaving services or processes running. Always confirm that the number of processes, the number of threads, and the memory consumption are expected for your use case before raising any limits. Raising limits without fixing the root cause of a process, thread, or memory leak may lead to worse, unpredictable consequences.
  • When the system runs into a limitation on the number of processes, increase the nproc value in /etc/security/limits.conf or /etc/security/limits.d/90-nproc.conf, depending on the RHEL version. The limit can be increased for a specific user or for all users. For example, /etc/security/limits.d/90-nproc.conf could contain:

    <user>       -          nproc     2048      <<<----[ Only for the "<user>" user ]
    *            -          nproc     2048      <<<----[ For all users ]
  • When the system runs into an out of memory situation, locate the application with the memory leak. Consider running valgrind against the application, which will report any memory leaks that are found, or contact your application vendor.
    On RHEL 6, cgroups can be used to limit access to resources for processes; refer to the separate knowledge base article on cgroups.

  • Check the total number of threads and processes running on the server:

    [root@host ~]# ps -eLf | wc -l
  • For example, if the above result is 32,000, then increase kernel.pid_max to 65534.
  • kernel.pid_max must be larger than the total number of simultaneous threads and processes.

  • In RHEL 6, also check /etc/security/limits.d/90-nproc.conf. Setting nproc in /etc/security/limits.conf has no effect in Red Hat Enterprise Linux 6.

  • There could be an Oracle database running on the server. The Oracle Installation Guide usually requires modifications to /etc/security/limits.conf, /etc/profile, and other files related to the number of processes. Double-check whether the required modifications were made on your system. Use the Oracle Installation Guide for your version of Oracle and complete the section covering changes to /etc/security/limits.conf and the other files. Such an article should be available on Oracle Metalink, for example:

    • Note 339510.1: Requirements for Installing Oracle 10gR2 RDBMS on RHEL 4 on AMD64/EM64T
    • Note 353529.1: Requirements for Installing Oracle 9iR2 64-bit on RHEL 4 x86-64 (AMD64/EM64T)
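
The checks above can be combined into a quick inspection script. This is a minimal sketch: it only reads the current values, and the counts it reports should be compared against the limits before raising anything.

```shell
# Count all running threads and processes; fork() starts failing with
# EAGAIN when no free PID below kernel.pid_max remains
threads=$(ps -eLf | wc -l)
echo "threads+processes: $threads"

# Current system-wide PID limit
pid_max=$(cat /proc/sys/kernel/pid_max)
echo "kernel.pid_max:    $pid_max"

# Per-user process limit (nproc) in effect for this shell
echo "nproc (ulimit -u): $(ulimit -u)"
```

If the thread count approaches kernel.pid_max, raise it with `sysctl -w kernel.pid_max=65534` and persist the setting in /etc/sysctl.conf; if it approaches the nproc limit, raise nproc in /etc/security/limits.conf as shown above.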

Root Cause

There can be various reasons for processes not being able to fork:

  • There is a misbehaving service or process running, consuming more resources than expected.
  • The system was not able to create new processes, because of the limits set for nproc in /etc/security/limits.conf.
  • The system ran out of memory and new processes were unable to start because they could not allocate memory.
  • There is not an available ID to assign to the new process. A unique value less than kernel.pid_max must be available.

Diagnostic Steps

  • Check with sar whether all memory was used or whether a large number of processes was spawned.
  • A lack of system resources may cause some of the system daemons to die. This would show up as applications segfaulting. Note that in the case of a memory leak, the first application to segfault is not necessarily the one causing the leak.

  • To check process usage against what is allowed for the user, compare the output of ulimit -u (the limit set for that user) with the number of processes the user is running.

    • You can run the command below to count the processes of every user and check whether any user exceeds the limit defined in /etc/security/limits.conf or /etc/security/limits.d/*.

      $ ps --no-headers auxwwwm | awk '$2 == "-" { print $1 }' | sort | uniq -c | sort -n
            2 chrony
            2 dbus
            2 dnsmasq
            4 rtkit
           17 polkitd
          396 root
          531 user
  • If the limit is exceeded, increase the value of the "nproc" parameter in /etc/security/limits.conf.

  • If the system is running an Oracle database, check that the '/etc/security/limits.conf', '/etc/profile', and '/etc/pam.d/login' files are modified according to the Oracle Installation Guide for the installed version of Oracle. Check and perform the steps from the following section:

    • Configure the Unix user process and file limits
  • The steps may be similar to following examples (assuming that the "oracle" Unix user will perform the installation and run the database):

    • Add the following settings to /etc/security/limits.conf:

      oracle           soft    nproc   2047
      oracle           hard    nproc   16384
      oracle           soft    nofile  1024
      oracle           hard    nofile  65536
    • Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

      session     required     pam_limits.so
    • Add the following lines to /etc/profile:

      if [ $USER = "oracle" ]; then
          if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
          else
              ulimit -u 16384 -n 65536
          fi
      fi
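
Note that limits set in limits.conf or /etc/profile apply only to new login sessions. A quick way to confirm what a shell actually received is shown below; run it as the affected user (for example, after su - oracle):

```shell
# Soft limit on user processes -- this is the value that is enforced
ulimit -Su

# Hard limit -- the ceiling up to which the user may raise the soft limit
ulimit -Hu
```

If the soft limit does not match what was configured, the session was likely started without going through PAM (for example by a service manager), or the configuration file contains a syntax error.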

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.


What about increasing the pid_max value in /proc/sys/kernel/pid_max? This has helped me a few times to fix similar issues.

Hello, attached are the results of running the indicated commands:

unevm-dfuseplat01 | 15:33:24 / $ ps -ef | grep fuse | grep -v grep
fuse 1293 1 0 Oct27 ? 00:03:34 /opt/fuse/jboss-fuse-6.1.0.redhat-379/bin/fuse-wrapper /opt/fuse/jboss-fuse-6.1.0.redhat-379/etc/fuse-wrapper.conf wrapper.syslog.ident=fuse wrapper.pidfile=/opt/fuse/jboss-fuse-6.1.0.redhat-379/data/fuse.pid wrapper.daemonize=TRUE wrapper.lockfile=/var/lock/subsys/fuse
fuse 1847 1293 0 Oct27 ? 00:10:31 java -Dkaraf.home=/opt/fuse/jboss-fuse-6.1.0.redhat-379 -Dkaraf.base=/opt/fuse/jboss-fuse-6.1.0.redhat-379 -Dkaraf.data=/opt/fuse/jboss-fuse-6.1.0.redhat-379/data -Dkaraf.etc=/opt/fuse/jboss-fuse-6.1.0.redhat-379/etc -Dcom.sun.management.jmxremote -Dkaraf.startLocalConsole=false -Dkaraf.startRemoteShell=true -Djava.endorsed.dirs=%JAVA_HOME%/jre/lib/endorsed:%JAVA_HOME%/lib/endorsed:/opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/endorsed -Djava.ext.dirs=%JAVA_HOME%/jre/lib/ext:%JAVA_HOME%/lib/ext:/opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/ext -Xms1024m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=512m -Djava.library.path=/opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/ -classpath /opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/karaf-wrapper.jar:/opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/karaf.jar:/opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/karaf-jaas-boot.jar:/opt/fuse/jboss-fuse-6.1.0.redhat-379/lib/karaf-wrapper-main.jar -Dwrapper.key=eLM9uQ6Gu2nB8MZd -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=1293 -Dwrapper.version=3.2.3 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=2 org.apache.karaf.shell.wrapper.Main
euribe @ unevm-dfuseplat01 | 15:33:24 / $ ps -eLf | grep 1293 | wc -l
euribe @ unevm-dfuseplat01 | 15:33:35 / $ ps -eLf | grep 1047 | wc -l
euribe @ unevm-dfuseplat01 | 15:33:40 / $

euribe @ unevm-dfuseplat01 | 15:33:40 / $ ulimit -u
euribe @ unevm-dfuseplat01 | 15:37:05 / $

# ps auxwwwm | egrep -v "USER" | awk '{ print $1 }' | sort -u > /tmp/users.txt
# for i in `cat /tmp/users.txt`; do echo "User $i has open processes : `grep $i /tmp/users.txt | wc -l`"; done

The above doesn't work as is. It might work better if simplified to:

ps auxwwwm | egrep -v "USER" | awk '{ print $1 }' | sort | uniq -c

Thanks, Joe! I've posted your command to the article, slightly modified.

For RHEL 7 and 8, isn't this solution article in some ways just misleading? If the service is started from systemd (e.g. using User=oracle and/or Group=something), then nothing (absolutely nothing) will look at any of the limits configured here, since they are applied by the PAM module pam_limits and you do not log in through PAM. For example, Oracle publishes this (I can't read the content):


Which says you must set the various Limit*= directives in the systemd service definition, not configure limits through pam_limits.

I could understand that if there was something in the article saying that this only applies to legacy init scripts (since they usually do a full su inside the script); however, this cannot possibly work with a native systemd service unless systemd has code or a directive to run most or parts of the PAM stack so that something like pam_limits does get used. Are you really sure that this is always applicable in RHEL 7 and 8, including when the service is started as a native systemd service?
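
To illustrate the point about native systemd services: their limits come from Limit*= directives in the unit, not from pam_limits. A hypothetical drop-in (the service name, path, and values here are examples, not Oracle's documented requirements) might look like:

```ini
# /etc/systemd/system/example-db.service.d/limits.conf (hypothetical drop-in)
[Service]
# systemd equivalents of the nproc and nofile settings from limits.conf;
# pam_limits is not consulted for services started directly by systemd
LimitNPROC=16384
LimitNOFILE=65536
```

After adding the drop-in, run systemctl daemon-reload and restart the service for the new limits to take effect.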