Shrinking a volume - check my commands


I have been asked by my superiors to make 1 TB of space available on one of our Red Hat EL 6.9 servers for a new application. We had previously put all of the available space into one Volume Group, so that one needs to be shrunk to make room to create the new one. My goal is to reduce the size with the least risk possible. I have tried booting this server to a GParted live ISO through iLO before, but was unsuccessful, so we had to accomplish resizing of partitions through the SSH console.

Unfortunately it is impractical for us to back up the data on the existing volume, so I know there is some risk of data loss. I'm hoping, however, that the more experienced Linux sysadmins here can check my commands for errors and help me minimize the risk of data loss.

According to the output of the pvs command, /dev/foo_vg is mounted on /dev/sdb1, is currently sized at 4.37t, and has 3.37t free.
According to the output of lvs, I have four logical volumes totaling 1020g within /dev/foo_vg.

Here are the commands I propose to run:

Unmount file system

umount -v /dev/foo_vg

Check for errors

e2fsck -f /dev/foo_vg

Resize file system (greater than size of data but less than all available free space)

resize2fs /dev/foo_vg 2200GB

Reduce size of volume (less than reduction of file system size done by resize2fs)

lvreduce -l -1024G /dev/foo_vg

Check for errors again

e2fsck -f /dev/foo_vg

Resize to use all available free space in volume

resize2fs /dev/foo_vg

Mount file system

mount /dev/foo_vg /dev/sdb1

Thanks for any helpful comments or suggestions anyone can give me.

Responses

Oh my.

Your terminology seems... very confused.

But if I understood the actual facts from your post correctly, I don't think you have to shrink anything: you have more than enough space to create a new 1 TB logical volume to contain your new application.

Shrinking might be necessary only if you need to move the new application to another host or another storage independently of other applications and their filesystems - and in that case, you would need a new disk or LUN anyway, so something else would have to be done. But you have not stated anything that would indicate such a requirement.

Unless your system has completely replaced the standard udev rules with custom ones, /dev/sdb1 is not a mountpoint: it's the physical volume (PV), the lowest level of the LVM stack. With LVM, you have the opportunity to aggregate one or more PVs into a single pile of storage: the volume group (VG). From your description of the pvs command output, it looks like /dev/foo_vg is actually the volume group (as its name seems to indicate!): it is definitely not a filesystem. At the top of the stack are the LVs, which slice the total capacity of the VG into pieces again to hold one or more filesystems.
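If it helps to see those layers on your own system, a few read-only commands show each level of the stack (these only report, they change nothing; the device names are simply taken from your description):

# pvs          # physical volumes - /dev/sdb1 in your case, with its size and free space
# vgs          # volume groups - foo_vg, with its total and free capacity
# lvs foo_vg   # logical volumes carved out of foo_vg, with their individual sizes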

Based on what you say about the output of the lvs command, you have four logical volumes, and thus probably four separate application filesystems. Of course, some of your logical volumes might be used as raw databases or other non-filesystems instead.

(I would have really liked to see the actual command outputs instead of your possibly-distorted descriptions of them, but based on your username, I completely understand the requirement to edit out any identifiable details.)

Since your PV apparently has 3.37t free, it will be easy for you to provide a new filesystem of 1 TB in size. You'll just need to use that free space to create a new LV, create a filesystem on it, and mount it wherever you wish. For the sake of example, let me assume that you want to name the new LV "new_app"; of course you can name it whatever you wish.

You'll need to do something very similar to this:

# lvcreate -L 1t -n new_app foo_vg        # create a new LV using the free space within the VG

# mkfs.xfs /dev/mapper/foo_vg-new_app   # create a filesystem on it

# mkdir /some/where/newapp  # create a mount point in a desired location

# echo "/dev/mapper/foo_vg-new_app /some/where/newapp xfs defaults 1 2" >>/etc/fstab   # add a persistent mount entry
# mount /some/where/newapp   # mount it using the new fstab entry

After this, your VG (and the single PV that currently contains it) will have 2.37t of free space left, which you can confirm using the commands "vgs" or "pvs". The foo_vg volume group will now have five LVs instead of four. Each of these LVs can be a separate filesystem, or a raw database storage area, or a swap area, or an encrypted container for a filesystem - whatever you need them to be.
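As a quick sanity check once the mount is in place, something like this (again read-only, and assuming the example names above) should confirm the result:

# vgs foo_vg                  # VFree should now show roughly 2.37t
# lvs foo_vg                  # five LVs now, including new_app
# df -h /some/where/newapp    # the new ~1 TB filesystem, mounted and nearly empty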

Good and detailed response, Matti.

Matti, thanks. I'm mainly a Windows sysadmin, so forgive my poor terminology. Also, my past Linux experience did not include LVM, so this is all new to me. I get now that a VG is just a logical container without a fixed size, so my worries were unfounded. I was able to create a new LV with a filesystem and mount it, mostly using the commands you supplied. I really appreciate you helping me out.

Now does anyone know if I can change my displayed name on these forums? Or should I just create a personal account and use that one for the forums?

When I was reading your original post, I admit that I was thinking something like "From what I'm reading, this guy's idea of LVM structure seems to be almost completely backwards! Now, this will end badly unless his concept of LVM is straightened out... but how to get that idea across in a polite manner?" But all's well that ends well :-)

Some further points:

1.) You may have come across the fact that the LV device names can be specified in two ways: for example, /dev/mapper/foo_vg-new_app could also be specified as /dev/foo_vg/new_app. The first form seems to be the preferred one in modern RHEL; the second one is for legacy compatibility.
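A quick way to confirm this for yourself, assuming the new_app LV from the earlier example, is to look at the device nodes: both names are just symlinks to the same device-mapper node.

# ls -l /dev/mapper/foo_vg-new_app /dev/foo_vg/new_app   # both point at the same /dev/dm-N device
# dmsetup info foo_vg-new_app                            # details of the underlying device-mapper device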

2.) The device name of your PV was /dev/sdb1. That tells me your hardware is presenting a single disk/LUN (/dev/sdb) to the system, and that it has been partitioned, with partition 1 (/dev/sdb1) probably covering the whole disk. There are two schools of thought on how to use disks/LUNs as PVs: you can either initialize the whole disk as a PV, in which case the PV name won't include a partition number at all, or you can create a single whole-disk partition and make that the PV, as seems to have been done in your case.

The advantage of using a partition is that it allows other operating systems to easily recognize that the disk is in use. The disadvantage is that editing the partition table while the disk is in use can be difficult, as it used to be impossible to change the size of partitions that are currently mounted or otherwise in use by the system. Today, there is the "partprobe" command that is supposed to be able to make the kernel accept changes in partition sizes, but I've had mixed experiences with it.
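If you ever do have to grow a partition-backed PV in place, the tail end of the procedure would look roughly like this - a sketch only, and only after the partition itself has already been enlarged with your partitioning tool of choice:

# partprobe /dev/sdb    # ask the kernel to re-read the modified partition table (results may vary, as noted)
# pvresize /dev/sdb1    # let the PV grow into the enlarged partition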

The advantage of using the whole disk as an LVM PV is that if the disk is actually a LUN presented from a SAN, or a logical disk of a hardware RAID controller, it might be easy to extend it while the system is in use. First, you would do whatever is needed at the hardware side of things to increase the capacity of the logical disk/LUN. Then you would use a command like "echo 1 > /sys/block/sdb/device/rescan" to make the kernel aware of the new size of the disk, and then you could use "pvresize /dev/sdb" to let the LVM PV take up the new space and make it usable within the VG - all this while the filesystems are mounted and the applications are in use.
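As a concrete sketch of that whole-disk case (assuming the LUN behind /dev/sdb had just been grown on the storage side, and assuming the whole disk rather than a partition were the PV):

# echo 1 > /sys/block/sdb/device/rescan   # make the kernel notice the new disk size
# pvresize /dev/sdb                       # grow the PV to fill the resized disk
# vgs foo_vg                              # VFree should now reflect the added capacity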

3.) If you run out of free space in your current VG, it might be easier to just add a second PV instead of extending the existing one, as your current PV is partitioned. Let's say you add a new disk: /dev/sdc. You can then partition it to match the structure of the existing disk, giving you the partition device name /dev/sdc1. Then you can initialize the new partition with "pvcreate /dev/sdc1" and add it to the VG with "vgextend foo_vg /dev/sdc1". You can definitely do this while the VG is being used. And now your VG again has some free space, which you can use for extending any existing LVs or for creating new ones.
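Put together, and assuming the new disk really does show up as /dev/sdc and gets a single whole-disk partition like the existing one, point 3 would look something like this:

# pvcreate /dev/sdc1          # initialize the new partition as a PV
# vgextend foo_vg /dev/sdc1   # add it to the existing VG - safe to do while the VG is in use
# vgs foo_vg                  # confirm the VG now shows the extra free space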

4.) As long as your VG has free space available, extending an existing LV is a very simple on-line operation: first extend the LV with the lvextend command, e.g. "lvextend -L +100G /dev/mapper/foo_vg-new_app". Then resize the filesystem inside the LV with a filesystem-type-specific command to let it use the extended space. For xfs, the command is xfs_growfs; for ext2/ext3/ext4 filesystems, the command is resize2fs. Unfortunately the naming of the filesystem extension commands is not standardized yet.

There's also the fsadm command, which should eventually provide a unified command for this; however, it is not yet available in all Linux distributions - and you might still want to read the man page of the filesystem-type-specific extension tool to be aware of any limits the extension procedure might have with any particular filesystem type.
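Putting point 4 together for the xfs-based new_app LV from the earlier example (the +100G figure is just an illustration):

# lvextend -L +100G /dev/mapper/foo_vg-new_app   # grow the LV by 100 GiB while it stays mounted
# xfs_growfs /some/where/newapp                  # grow the xfs filesystem to fill the LV (takes the mount point)

For an ext4 LV, the second step would instead be resize2fs run against the LV device.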

Regarding your displayed name: at the very top of the page, there should be a white bar with a person icon and your username on the right side. Click on it and a drop-down menu should appear: select "My Profile" under the "Customer Portal" subtitle. Then click on the blue "Edit Profile" icon. Review the information you see in there: if you have your email in place of the name, you can fix it yourself there. If that's not the problem, I have no clue - you might have to contact the administrators of the community discussions area to fix it then.
