Disk space warnings on /root
I am getting warnings that my /root directory is running out of space, but when I check it, there seems to be plenty of space:
[root@Spock Scott]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 50G 13G 38G 26% /
devtmpfs 252G 0 252G 0% /dev
tmpfs 252G 152K 252G 1% /dev/shm
tmpfs 252G 18M 252G 1% /run
tmpfs 252G 0 252G 0% /sys/fs/cgroup
/dev/sda1 494M 202M 293M 41% /boot
/dev/sdb 28T 22T 6.2T 78% /run/media/Scott/30TB_XFS
/dev/md127p1 2.2T 1.9T 381G 84% /run/media/Scott/FIORd0_XFS
/dev/mapper/rhel-home 318G 6.3G 312G 2% /home
//kirk/kirk_c 931G 908G 24G 98% /run/media/Scott/Kirk_c
//kirk/kirk_d 1.9T 1.4T 439G 77% /run/media/Scott/Kirk_d
tmpfs 51G 28K 51G 1% /run/user/1000
I read some instructions on resizing the root directory but it seems there is already plenty of space, so I haven't tried it.
Would appreciate any advice, as I am now getting a similar warning from another system, same issue.
Thanks!
Responses
Try looking for deleted files that still have a file handle open:
lsof | grep delete
This will show the PID of the process; stopping or killing the process will close the handle and release the disk space.
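If killing the process is inconvenient, the space can often be reclaimed by truncating the deleted file through its entry under /proc instead. A rough sketch; the PID (12345) and file descriptor number (7) below are placeholders to be taken from your own lsof output:

lsof +L1                  # most Linux lsof builds: list only open files with a link count of 0 (i.e. deleted)
ls -l /proc/12345/fd/7    # the fd number is the FD column from lsof, minus the r/w/u suffix
: > /proc/12345/fd/7      # truncate the still-open file so its blocks are freed

Truncating a file that the process still writes to (an open log, for instance) can confuse the program, so restarting the process cleanly is usually the safer fix.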
Deleted-but-still-open files are included in the Used and Use% values reported by df; it's just that when you have such files on, e.g., the root filesystem, "du -kxs /" will report a number that is lower than the Used value reported by df.
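For example, you can compare the two views on the root filesystem directly (the -x flag stops du from descending into other mounted filesystems):

df -k /
du -kxs /

If df's Used figure is noticeably larger than the du total, deleted-but-still-open files (plus some filesystem metadata) account for the difference.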
With various commands, you get subtly different views of disk space allocation:
file size reported by "ls -l": "how much data is in this file?"
size reported by "du": "how much disk space is occupied by this file or set of files?" (can be smaller than the sum of the "ls -l" sizes of the respective files if the files are sparse; there is a quick demonstration after the note below)
used space reported by "df": "how much disk space is in use (i.e. not free) in this filesystem?" (the "Used" value normally includes filesystem metadata and deleted-but-still-open files, so it can be larger than the sum of the sizes of the visible files on that filesystem)
Note: some filesystems which don't quite fit the POSIX filesystem abstraction may require special considerations, e.g. the first version of GFS had to have a dedicated "gfs_tool df" command for accurate used/free disk space reporting.
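A quick, throwaway demonstration of the "ls -l" vs. "du" difference using a sparse file (the file name is arbitrary):

truncate -s 1G /tmp/sparse_demo    # create a 1 GiB sparse file: big apparent size, almost no allocated blocks
ls -lh /tmp/sparse_demo            # shows 1.0G (the file size)
du -h /tmp/sparse_demo             # shows ~0 (the disk space actually allocated)
rm /tmp/sparse_demo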
Which filesystem type are you using on your root filesystem?
If you're using a filesystem type that cannot dynamically generate new inodes on demand, and you have a lot of small files on the filesystem, you might be running out of inodes instead of actual disk space. Check with "df -i".
For example, ext2/3/4 filesystems have a fixed ratio of inodes per unit of space, which is set at filesystem creation time. Therefore, extending the filesystem also increases the number of inodes available, so it can be used as a workaround if you're critically low on inodes and cannot e.g. archive large numbers of tiny files into a single archive file to free some inodes.
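A sketch of how you could check for inode exhaustion; the tune2fs line only applies if the filesystem is ext2/3/4, and the device name is taken from your df listing:

df -i /                                              # IUse% close to 100% means inodes, not space, ran out
tune2fs -l /dev/mapper/rhel-root | grep -i inode     # ext2/3/4 only: shows the fixed inode count and free inodes
find / -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head   # directories holding the most files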
According to your df -h listing, your root filesystem seems to be using LVM, so you'll be in a good position for on-line extending the filesystem, if it turns out to be necessary.
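If it does turn out that you need more space on the root filesystem, an on-line extension is roughly this (a sketch only: check that the rhel volume group actually has free extents first, and pick your own size instead of the +10G used here):

vgs rhel                                  # VFree shows how much unallocated space the volume group has
lvextend -L +10G /dev/mapper/rhel-root    # grow the logical volume
xfs_growfs /                              # grow the filesystem (use resize2fs for ext4 instead)

Alternatively, "lvextend -r -L +10G /dev/mapper/rhel-root" does the logical volume and filesystem steps in one go.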
XFS allocates inodes dynamically on demand, so a lack of inodes won't be an issue for you... unless, of course, your warnings are generated by a program that is not aware of that fact and gives you false alarms when the number of free inodes approaches 0. What exactly were the warnings you saw? Do you have any idea what piece of software created them?
When you went to "init 3", the Gnome processes were killed and any disk space held by their deleted-but-still-open files should have been immediately released. But when you restarted Gnome and logged in again, many of those files (apparently temporary files used by Gnome components) were created again.
tracker-e 94604 Scott 14r REG 253,2 53936 536871164 /home/Scott/.local/share/gvfs-metadata/root.3QS7ZY (deleted)
tracker-e 94604 Scott 15r REG 253,2 32768 536871532 /home/Scott/.local/share/gvfs-metadata/root-315dde70.log (deleted)
These get repeated in the list several times, since process #94604, which was holding onto them, is multithreaded, and the lsof listing includes the file handles held by each thread.
They look like gvfs-metadata files, which are basically directory caches for Gnome file management tools. They have the largest size/offset value of the files in your listing, but those values are about 52 KB and 32 KB respectively, so they would be fairly modest in size.
In short, your lsof listing doesn't look like nearly enough to cause your problems.
...oh dear. I just realized that your original df -h listing includes these lines:
/dev/sdb 28T 22T 6.2T 78% /run/media/Scott/30TB_XFS
/dev/md127p1 2.2T 1.9T 381G 84% /run/media/Scott/FIORd0_XFS
So, you apparently have some pretty serious storage there. Tens of terabytes of it. And the path /run/media/Scott suggests these might be mounted by GVFS, Gnome's file management layer.
As a modern desktop environment, Gnome includes a search indexer that works in the background.
Unless you have excluded those large storage volumes from indexing, the indexer might try to create search indexes for them... and if you have something like a large collection of music or photos in there (as opposed to, say, 4K video files, which tend to be rather huge and so would be considerably fewer in number), there might be... quite a lot of files to index.
So the index would become quite large, perhaps large enough to trigger a bug in the indexer. Or maybe your configuration includes some search plugins, and the indexer attempts to index all your archived photos not only by filename but also by timestamps, tags, geolocation, face detection and whatnot... the index grows until it no longer fits on the filesystem that holds it, and as soon as the indexer notices it has run out of disk space it realizes it has goofed and cleans up most of its temporary files.
Apparently some versions of GVFS in some Linux distributions are known to have bugs that cause them to eat up a lot of disk space for no good reason. (I wouldn't know, I prefer KDE personally.)
Of course, good indexing is what makes such huge storage much more useful... but I think your storage is so big that gvfs might need some configuration to handle it in a sane way; a sketch of what I mean follows below.
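For what it's worth, the tracker-e process in your earlier lsof listing suggests GNOME's Tracker indexer is what's doing the crawling. A sketch of how you might check and restrict it; the gsettings schema and key names vary between Tracker versions, so treat these as a starting point rather than exact commands for your release:

gsettings get org.freedesktop.Tracker.Miner.Files index-removable-devices        # are /run/media volumes being indexed?
gsettings set org.freedesktop.Tracker.Miner.Files index-removable-devices false  # stop indexing removable volumes
gsettings get org.freedesktop.Tracker.Miner.Files index-recursive-directories    # which directories are crawled recursively

After changing the settings, letting Tracker rebuild its index ("tracker reset", if your version ships that subcommand) and then watching df again should tell you whether the indexer was the culprit.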
