many tmpfs

Dear All,

After installing RHEL 7, there are many tmpfs filesystems (six of them). Are these tmpfs filesystems mandatory for the OS? If they are not required, can we unmount them?

The filesystems look like this after installation:

devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 88K 32G 1% /dev/shm
tmpfs 32G 9.7M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup

Responses

Yes, that's normal.

  • devtmpfs is your /dev, which is essential to the system.

  • /dev/shm allows a filesystem interface to shared memory, please see: https://www.cyberciti.biz/tips/what-is-devshm-and-its-practical-usage.html

  • /run is what was /var/run in previous versions of the Linux File System Standard (FSSTND): it includes PID files and sockets for all running system services, so it is pretty much essential unless you want to heavily modify your system.

  • /sys/fs/cgroup is a filesystem interface for control groups (cgroups). This is used by systemd, so it's also essential.
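If you want to list all of these mounts on a running system, the findmnt tool (part of util-linux on RHEL 7) can filter by filesystem type, for example:

findmnt -t tmpfs,devtmpfs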

Thank you for the valuable information shared. Actually, these filesystems were each auto-allocated 32 GB of space. Can we reduce the size? The filesystems are shown below:

devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 88K 32G 1% /dev/shm
tmpfs 32G 9.7M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup

Tmpfs filesystems are RAM-based: they don't use any disk space at all, and they only consume as much RAM as the data they currently hold. The four tmpfs filesystems you listed are together occupying only about 9.8M of RAM.

The 32G is just the ultimate maximum limit for their growth, assuming that there is free RAM.
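If you still want a lower ceiling, tmpfs accepts a size= mount option; a minimal sketch, where the 2G value is purely illustrative:

# Shrink the /dev/shm limit immediately (does not persist across reboots)
mount -o remount,size=2G /dev/shm

# For a persistent limit, an /etc/fstab entry along these lines could be used:
# tmpfs  /dev/shm  tmpfs  defaults,size=2G  0 0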

Thank you

Dear Matti Kurkela,

Can we combine the tmpfs filesystems mentioned below into one filesystem, or can we hide the filesystems below?

tmpfs 32G 88K 32G 1% /dev/shm
tmpfs 32G 9.7M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup

You cannot make them into just one filesystem without modifying the system boot-up scripts, and I believe Red Hat might consider those modifications unsupportable.

But you can hide the tmpfs filesystems from "df" command output by using the -x option to exclude some filesystem types:

df -x tmpfs

If you like the result, you could add this as a shell alias:

alias df='df -x tmpfs'

and place this alias definition into either your login scripts or even into system-wide login scripts.
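For example, a system-wide definition could go into a file under /etc/profile.d, which login shells read on RHEL 7 (the filename here is just an assumption):

# /etc/profile.d/df-exclude-tmpfs.sh
alias df='df -x tmpfs'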

If you have programs that use the "df" command and parse its output (e.g. monitoring scripts/programs), you might instead create a wrapper script as /usr/local/bin/df:

#!/bin/sh
# Wrapper for df: run the real df with tmpfs excluded, passing through any caller-supplied arguments
exec /bin/df -x tmpfs "$@"

Since /usr/local/bin is before /bin in the standard $PATH setting, programs that run the df command should end up using /usr/local/bin/df instead of the real command. The wrapper then adds the exclude option and runs the real df command with the modified command line.

But monitoring programs should already be ignoring /proc and /sys filesystems: it should be possible to add either the tmpfs filesystem type or the /dev/shm, /run and /sys/fs/cgroup mount points to the list of filesystems to be ignored in the monitoring program.

Yes, Matti was spot on. I found this Red Hat KB https://access.redhat.com/solutions/2435631

Nice finds, Matti.

Also see https://www.freedesktop.org/software/systemd/man/file-hierarchy.html

https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems/

also the Red Hat Enterprise Linux Migration Planning Guide, "File System Layout" section (Red_Hat_Enterprise_Linux-Migration_Planning_Guide-File_System_Layout.html)

Also see the kernel-doc (you'll have to install the "kernel-doc" RPM) and look for /usr/share/doc/kernel-doc-[KERNEL-VERSION]/Documentation/filesystems/tmpfs.txt (I could not find a functioning web link for this at the moment).

@Matti,

Good explanation, except one remark from me:

Excluding /dev/shm from monitoring might be bad practice if you want to detect malware. Rootkits might use /dev/shm to manipulate inter-process communication using fake shared memory devices.

What is your opinion about this?

Regards,

Jan Gerrit

I assumed that the original poster's question was in the context of the df command and/or general filesystem capacity monitoring at most. Malware detection is an angle I did not even consider.

I think that for malware detection, you'll need a more specific test than just monitoring how full the /dev/shm filesystem is. Depending on the applications you're running, there might be a valid reason for having large files in /dev/shm. And if /dev/shm is getting full, you'd want it to be handled more as a "running out of RAM" alert, not so much as a "running out of disk space" alert.

With a capable rootkit hooking into the VFS subsystem, the presence of any fake shm devices could be hidden just as easily as any other rootkit files anyway. So /dev/shm being unusually full might be a useful sign of some specific malware families, but I'd expect it to get worked around pretty quickly by rootkit developers if it turns out to be something that causes rootkits to get caught.

Hi Matti Kurkela, thanks. In the last few posts you have given valuable information. I have one more issue: once new users are created, new tmpfs filesystems get created. See below:

tmpfs 6.3G 20K 6.3G 1% /run/user/1000
tmpfs 6.3G 16K 6.3G 1% /run/user/42
tmpfs 6.3G 0 6.3G 0% /run/user/1001

Is it possible to restrict these newly created tmpfs filesystems?

That functionality is apparently hardcoded in the pam_systemd.so PAM module and cannot be configured.

The document https://access.redhat.com/solutions/2435631 (linked earlier by Sadashiva Murthy M) describes the purpose of those tmpfs filesystems:

/run/user/$UID is a filesystem used by pam_systemd to store files used by running processes for that user. In previous releases these files were typically stored in /tmp, as it was the only location specified by the FHS which is local and writeable by all users. However, using /tmp can cause issues because it is writeable by anyone, and thus access control was challenging. Using /run/user/$UID fixes the issue because it is only accessible by the target user.

So their purpose is to increase protection between the users, and trying to stop them from being created is probably a bad idea. If your users have scripts that use /tmp as a hardcoded path, tell them they should replace "/tmp" in their pathnames with "${TMPDIR:-/tmp}". If $TMPDIR is defined, its value is used; if $TMPDIR is empty or nonexistent, the legacy /tmp pathname will be automatically used, so the changed scripts will be usable in older systems too.
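For example, a hypothetical script fragment (the myapp filename is only an illustration):

#!/bin/sh
# Use $TMPDIR when the environment provides one; fall back to the legacy /tmp otherwise
scratch="${TMPDIR:-/tmp}/myapp.$$"
echo "intermediate data" > "$scratch"
# ... work with "$scratch" here ...
rm -f "$scratch"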

The exact issue I have observed is that a tmpfs gets created whenever a user is created. Can you please check below what the tmpfs filesystems look like after users are created:

tmpfs 6.3G 20K 6.3G 1% /run/user/1000
tmpfs 6.3G 16K 6.3G 1% /run/user/42
tmpfs 6.3G 0 6.3G 0% /run/user/1001

Tomorrow, more users (probably around 20) need to be created.

Yes, that is normal and expected for RHEL 7.x (and for many other new Linux distributions that use systemd). The first time a user logs in after the system is rebooted, a /run/user/<UID> tmpfs filesystem will be created.

Note that the per-user tmpfs filesystems are restricted to 6.3G (I guess 1/10 of the total RAM available on the system). They will actually consume that much RAM only if the user actually writes 6.3G of data into their tmpfs filesystem.
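If that 10% default ever becomes a concern, the per-user size limit (though not the creation of the filesystems itself) should be tunable with the RuntimeDirectorySize= setting in /etc/systemd/logind.conf; a sketch, with 512M as a purely illustrative value:

# /etc/systemd/logind.conf
[Login]
# Cap each per-user /run/user/<UID> tmpfs at 512M instead of the 10%-of-RAM default
RuntimeDirectorySize=512M

New logins should pick up the new limit after systemd-logind is restarted.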

Are the tmpfs filesystems actually causing you any problems, or are you just curious about them?

Hi, the default size of tmpfs (/dev/shm) is half of the physical RAM, not counting swap. If we oversize our tmpfs instances, is there any impact?

Thanks in advance.

Murali Manupati, for starters, examine the discussion of your question at https://access.redhat.com/discussions/2188021 and, if needed, continue the discussion at that link. Let us know how it goes there.

In my case, the root is getting filled up quickly. Is there a way to increase the root filesystem? I am using a different disk for Docker-related activities without using /, but this is the result. Frustrating.

[root@code-docker-lab /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 10G 9.9G 161M 99% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 17M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sdb 148G 12G 129G 9% /dockerdisk
tmpfs 1.6G 0 1.6G 0% /run/user/1001

In my case, the root is getting filled up quickly. Is there a way to increase the root filesystem?

It is non-trivial to increase the space on /. When installing my systems, I frequently try to have /var on its own filesystem so that it doesn't take up all available space on / if something goes bad.

It may be useful to see which top level directories are taking lots of space.
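For example, something along these lines lists the top-level directories on the root filesystem by size (the -x option keeps du from descending into other mounted filesystems such as /dockerdisk):

du -xh --max-depth=1 / 2>/dev/null | sort -h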

Hi, I have a requirement to create a separate filesystem (say, xfs) and mount it on /var/run, which was a symbolic link to /run. We tried unlinking it and mounting /var/run with xfs, but a lot of services failed.

Is it possible to create a separate filesystem and mount it on /var/run without affecting the services?
