Issues Growing Root Disk


I'd generated a standard RHEL 6 AMI for use by my organization (our IA requirements make LVM mandatory on AWS/EBS). A couple of customers have complained that "the root disk's partitioning is too small, and when we allocate extra disk space to it in the cloud console, the extra space goes unused." It wasn't a use-case I'd anticipated, but I figured I'd take a stab at sorting it out. I ended up updating the initramfs scripts that handle resizing the partition hosting /. The hacked initramfs pre-mount script sort of works - sfdisk sees the enlarged partition:

# sfdisk -l -uM /dev/xvda 

    Disk /dev/xvda: 3263 cylinders, 255 heads, 63 sectors/track
    Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

       Device Boot Start   End    MiB    #blocks   Id  System
    /dev/xvda1         1    476    476     487424   83  Linux
    /dev/xvda2       477  25595- 25119-  25721599+  8e  Linux LVM
    /dev/xvda3         0      -      0          0    0  Empty
    /dev/xvda4         0      -      0          0    0  Empty

However, the sizes seen by lsblk (and, apparently by extension, pvresize), still shows the old size-value for /dev/xvda2:

    NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    xvda                            202:0    0   25G  0 disk
    ├─xvda1                         202:1    0  476M  0 part /boot
    └─xvda2                         202:2    0 19.5G  0 part
      ├─VolGroup00-rootVol (dm-0)   253:0    0    4G  0 lvm  /
      ├─VolGroup00-swapVol (dm-1)   253:1    0    2G  0 lvm
      ├─VolGroup00-homeVol (dm-12)  253:12   0    1G  0 lvm  /home
      ├─VolGroup00-varVol (dm-13)   253:13   0    2G  0 lvm  /var
      ├─VolGroup00-logVol (dm-14)   253:14   0    2G  0 lvm  /var/log
      └─VolGroup00-auditVol (dm-15) 253:15   0  8.5G  0 lvm  /var/log/audit

Any ideas on how I get the run-time OS's higher-level storage tools to see the extra space that sfdisk/fdisk see?
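For reference, the kernel-side refresh I've been attempting from the pre-mount script boils down to something like the following sketch (the device name and the availability of these utilities in a RHEL 6 initramfs are assumptions):

```shell
#!/bin/sh
# Sketch: after rewriting the partition table, ask the kernel to
# re-read it so lsblk/pvresize see the new size of xvda2.
DISK=/dev/xvda

refresh_partitions() {
  # partx -u updates the kernel's view of existing partitions in place
  # (if your util-linux build supports -u); blockdev --rereadpt is the
  # older fallback, but it fails if any partition on the disk is in
  # use - e.g. an already-activated LVM PV.
  partx -u "$DISK" 2>/dev/null || blockdev --rereadpt "$DISK"
}
```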

Responses

Hmm... Looks like my modifying the script actually works ...but it takes a reboot after the initial launch of the AMI to get things into a sane state.

Basically, I shut my test system down Friday afternoon in hopes that I'd have an "ah ha!" moment over the weekend. No such moment happened. However, when I booted that shut-down instance this morning, (s)fdisk and lsblk were in agreement:

# fdisk -lu /dev/xvda

Disk /dev/xvda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00008625

Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048      976895      487424   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvda2          976896   104856254    51939679+  8e  Linux LVM

# lsblk /dev/xvda
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                           202:0    0   50G  0 disk
├─xvda1                        202:1    0  476M  0 part /boot
└─xvda2                        202:2    0 49.5G  0 part
  ├─VolGroup00-rootVol (dm-0)  253:0    0    4G  0 lvm  /
  ├─VolGroup00-swapVol (dm-1)  253:1    0    2G  0 lvm
  ├─VolGroup00-homeVol (dm-2)  253:2    0    1G  0 lvm  /home
  ├─VolGroup00-varVol (dm-3)   253:3    0    2G  0 lvm  /var
  ├─VolGroup00-logVol (dm-4)   253:4    0    2G  0 lvm  /var/log
  └─VolGroup00-auditVol (dm-5) 253:5    0  8.5G  0 lvm  /var/log/audit

Interestingly, the boot apparently hadn't caused pvscan/pvresize to run properly, as my LVM2 root volume group was still showing ~20GiB. However, since lsblk was showing the space, I did a manual run of pvresize and things got happy:

# vgdisplay -s
  "VolGroup00" 49.53 GiB [19.53 GiB used / 30.00 GiB free]
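In script form, the manual recovery was roughly the following (PV and VG names taken from the lsblk output above; the lvextend/resize2fs lines are just an illustration of handing the new space to one LV, not something I've run here):

```shell
#!/bin/sh
# Sketch of the post-reboot fix-up, assuming the PV is /dev/xvda2
# and the VG is VolGroup00 as shown in the lsblk output.
grow_lvm_stack() {
  pvresize /dev/xvda2                             # grow the PV to fill xvda2
  lvextend -l +100%FREE /dev/VolGroup00/auditVol  # example: give all new space to one LV
  resize2fs /dev/VolGroup00/auditVol              # grow the ext filesystem online
}
```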

Now to figure out whether there's a way to make everything consistent from the first boot, rather than requiring a multi-boot sequence at instantiation time.

Still: anyone got any ideas that might help me puzzle this out quicker?

Ended up submitting a BugZilla asking about getting LVM support added (and provided the modifications that got me part of the way to my end-state condition). Hopefully the bug-owner will take that, run with it, and figure out how to get it all visible within the initial boot.

If anyone has ideas on further hacks to make the storage fully available on the initial cloud instance-launch, it'd be awesome if you could post them here.

Unfortunately, higher-level storage tools (like LVM and multipath) will not see new space until the underlying block device's size is increased. You also have to manually resize the LVM layer after the underlying block device (which backs the PV) grows.
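A quick way to see which layer is stale is to compare the kernel's idea of each device's size (device names taken from the question; whether they match your system is an assumption):

```shell
#!/bin/sh
# Sketch: print the kernel's view of each layer's size in bytes.
# If xvda has grown but xvda2 has not, the partition-table change
# hasn't been re-read by the kernel yet.
report_sizes() {
  for dev in /dev/xvda /dev/xvda2; do
    printf '%s: ' "$dev"
    blockdev --getsize64 "$dev"
  done
}
```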

In your case, I see that LVM sits on a partition device (xvda2), so additional steps are required.

Below are the general steps to achieve this:

1) Increase the size of the disk using the cloud provider's tools.
2) If a reboot is possible, reboot the system to pick up the new size; otherwise, perform a SCSI rescan.
3) In your case, you also have to increase the size of the partition device (xvda2). This is the challenging part. Take a fresh backup of your data first. To grow xvda2, boot into rescue mode without mounting the root filesystem; delete the partition with fdisk, then re-create it with the same starting cylinder and an ending cylinder at the last cylinder of the disk.
4) Now that xvda2 is larger, boot the system and run pvresize to expose the new space at the LVM layer (the VG size follows the PV automatically), then lvresize/lvextend the logical volume(s).
5) Grow the filesystem using resize2fs.

Another, easier way is to add a new disk to the system and perform a SCSI rescan to detect it (https://access.redhat.com/solutions/3941). Once the disk is detected, run pvcreate on it, then vgextend and lvextend (https://access.redhat.com/solutions/1265413).
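A sketch of that add-a-disk sequence (the device name /dev/xvdb, the size in the lvextend, and the LV path are placeholders, not values from this system):

```shell
#!/bin/sh
# Sketch: extend the existing VG with a freshly attached disk
# instead of growing the original partition.
extend_vg_with_new_disk() {
  pvcreate /dev/xvdb                        # initialize the new disk as a PV
  vgextend VolGroup00 /dev/xvdb             # add it to the existing VG
  lvextend -L +10G /dev/VolGroup00/rootVol  # example: grow one LV by 10 GiB
  resize2fs /dev/VolGroup00/rootVol         # grow the filesystem to match
}
```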

Hope the above answers your query.

Regards, Ajit Mote

Unfortunately, it doesn't really address what I'm trying to accomplish (i.e., having the OS able to make immediate use of the additional storage during the initial boot-from-template).

The underlying block device's size has been increased, and the end-of-disk partition (hosting the PV/LV that holds the "/" filesystem) has been appropriately resized (see the output from sfdisk). This size-increase is effected by a pre-mount script in the initramfs. The problem is that pvresize can't be used during the launch-boot, because the size-increase that sfdisk sees isn't yet visible to pvresize.

Overall, I'm trying to avoid having the root volume group split across PVs. I'm also trying to avoid a launch sequence that requires embedding a secondary boot, as that makes the automated-provisioning logic more complicated than it needs to be (the cloud-init tools really work best for "run once" tasks when all of those tasks are handled within the initial launch-boot).
