Red Hat EC2 images and LVM

Hi,

I've been looking at the images provided by Red Hat on Amazon's EC2 instances. Looking at both the on-demand Amazon image and the one provided via the Red Hat Cloud Access program, it appears that there is no use of LVM.

One way of getting around this may be to upload our own RHEL build via Amazon's import service. However, I was wondering if anyone has tried (successfully) to convert the vanilla Red Hat-provided image to an LVM-based one with a separate boot partition? I believe it could be possible via the use of an additional EBS volume presented to an instance running Red Hat's image.

The reason I want to do this is to be able to easily grow the volumes created for new instances as they are stood up.

I appreciate any help/advice anyone can offer.

Regards,

Nathan

Responses

Hi Nathan - I will present a question back to you...

Instead of growing the existing OS volumes, have you considered adding another device and having a separate Volume Group for your application? (I admit I have no idea what your particular use case is.)

Hi James,

It's still early days, but yes, the plan for application volumes is to create a separate volume group for them and mount it under /opt or somewhere. Currently I'm only thinking about the root volume.

What I am seeing is that the default /dev/xvda1 partition on the Red Hat image is set at approximately 6.5GB. When presented with a larger root disk, it doesn't fill the additional free space. I can see there are methods using snapshots (via AWS) to grow the root volume, but that doesn't seem straightforward, or something that could be done on the fly at the creation of a new instance.

What I'd like to have is a /dev/xvda1 partition that carries just a boot partition, then a /dev/xvda2 partition that would be home to an LVM volume group containing the root volume. A trigger of some sort (firstboot, Chef, whatever) could then quite easily grow the LVM volume group to occupy the whole disk and resize the logical volumes to take advantage of the extra space.

This way we could have one saved AMI that could be deployed to any size of EC2 instance and have its root volume grown accordingly.

Nathan

Something to bear in mind with what you're looking to do: if you really want to grow /dev/xvda2 to the end of an arbitrarily-sized EBS volume, you'll need to do your LVM partition-expansion before the instance's run-time kernel loads. The EL6 kernel doesn't really like it when you try to alter an existing, in-use partition of an active disk.

Your other alternatives are:
- Have a standard, small image-basis and then simply add a third partition to /dev/xvda as part of the boot-up process for up-sized instantiations. Then you can use LVM to grow the root volume-group onto the remainder of the upsized EBS (see the sketch after this list).
- Automate your AMI-generation process to make it quick/easy to maintain a family of AMIs with different sized root EBSes.
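
To make the third-partition option concrete, here's a minimal sketch of what such a boot-up hook might look like, assuming an EL6-style instance and a volume group/logical volume named vg_root/lv_root (those names, and the disk layout, are my illustrative assumptions rather than what the stock image ships with):

    #!/bin/sh
    # First-boot hook (sketch only): grow the root VG onto a new third partition.
    # Disk, VG and LV names below are assumptions for illustration.
    DISK=/dev/xvda
    VG=vg_root
    LV=lv_root

    # Carve a third partition out of whatever space follows partition 2
    # (skip if it already exists).
    if ! parted -s "${DISK}" print | awk '{print $1}' | grep -qx 3; then
        END2=$(parted -s "${DISK}" unit MB print | awk '$1 == 2 {print $3}')
        parted -s "${DISK}" unit MB mkpart primary "${END2}" 100%
        partx -a "${DISK}" || true   # let the running kernel see the new partition
    fi

    # Fold the new partition into the root volume group and grow the LV.
    pvcreate "${DISK}3"
    vgextend "${VG}" "${DISK}3"
    lvextend -l +100%FREE "/dev/${VG}/${LV}"
    resize2fs "/dev/${VG}/${LV}"     # ext4 can be grown while mounted

Run once at first boot (rc.local, cloud-init, Chef, whatever), that leaves the root filesystem spanning whatever size EBS the instance was launched with.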

Good point regarding the issue with online resizing. As you say, creating a /dev/xvda3 partition and using that to grow the root volume would be something I could do instead.

Thanks Tom!

I agree that the image should ship with LVM, but I would approach expansion in the following way (no manipulation of the LVM partition):

/dev/sda1 - boot
/dev/sda2 - LVM (e.g. vg_001)

Then post-build (in a first-run script), add disk space by introducing an additional physical volume or volumes (e.g. /dev/sdb) to the existing volume group (e.g. vg_001) using pvcreate/vgextend, as sketched below. This gives you the flexibility to resize the root partitions and your own application partitions from the same pool of disk.
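
A rough first-run illustration (the device name and the VG/LV names are placeholders, and the device-naming caveats in the next reply apply):

    # Fold an additional EBS volume into the existing volume group.
    pvcreate /dev/sdb
    vgextend vg_001 /dev/sdb

    # Grow whichever logical volume needs the space, then its filesystem.
    lvextend -L +10G /dev/vg_001/lv_root
    resize2fs /dev/vg_001/lv_root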

A couple of AWS caveats:
- you can't assume /dev/sdN across all instance types
- you can't assume contiguous device-naming (just because you say "attach at /dev/sdb" doesn't mean that's how the OS will create the device-node)

That said, if your instance has the EC2 tools installed, you can write your first-run script to leverage the EC2 tools to provide a reliable mapping from the secondary EBS(es) to the host's device-node(s).
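
For example, one way to get that mapping is via the instance metadata service; this is a sketch of my own, under the assumption that a metadata lookup plus an sd-to-xvd translation is good enough, rather than the poster's actual tooling:

    #!/bin/sh
    # Map EC2 block-device-mapping names to the device nodes the kernel created.
    MD=http://169.254.169.254/latest/meta-data/block-device-mapping

    for MAPPING in $(curl -s "${MD}/"); do
        DEV=$(curl -s "${MD}/${MAPPING}")   # e.g. "sdb" or "/dev/sda1"
        DEV=${DEV#/dev/}
        # On RHEL6 Xen instances, sdX generally shows up as xvdX.
        [ -b "/dev/${DEV}" ] || DEV="xvd${DEV#sd}"
        echo "${MAPPING} -> /dev/${DEV}"
    done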

Could probably do something similar in the external-to-instance tools, as well (but I haven't had a chance to play much with those, yet). Those might also allow you to do an offline repartitioning so that you don't end up with partitionitis on your first EBS (and, while you'd still have non-contiguously-allocated volume objects, with AWS's backing storage-system, it's generally not overly relevant).

Agreed, I should have been clearer about the complexities of device naming; I included the device names to provide some clarity around the method I was suggesting. Additionally, I should have included the LVM partition creation when the disk is introduced and used 'sdb1'.

The first-run script I have leveraged in the past discovers all unpartitioned disks and LVM partitions and adds them to the primary volume group, regardless of name, which worked well for that customer's requirement.
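
Roughly speaking, the idea looks something like this (the volume-group name and the discovery heuristics are illustrative placeholders, not the exact script):

    #!/bin/sh
    # First-run sweep (sketch): add any unused disk, and any orphaned LVM
    # physical volume, to the primary volume group.
    VG=vg_001

    # Claim whole disks that have no partitions and no existing signature.
    for DEV in /dev/xvd[b-z]; do
        [ -b "${DEV}" ] || continue
        ls "${DEV}"[0-9]* >/dev/null 2>&1 && continue   # already partitioned
        blkid "${DEV}" >/dev/null 2>&1 && continue      # already has a signature
        pvcreate "${DEV}" && vgextend "${VG}" "${DEV}"
    done

    # Fold in any LVM physical volumes that aren't in a volume group yet.
    for PV in $(pvs --noheadings -o pv_name,vg_name | awk 'NF == 1 {print $1}'); do
        vgextend "${VG}" "${PV}"
    done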

Setting up an LVM-enabled, single-EBS device is pretty trivial. We had security requirements to follow as many of the SCAP recommendations as possible, inclusive of disk partitioning. While you can do the image-upload method, it's both a pain in the ass and takes a stupidly long time to do.

That said, yes, it's possible to create an AMI that uses a two-partition (/boot + LVM) disk-scheme by bootstrapping from your original ELx image and building onto an attached EBS. I ended up scripting out the process. It takes about 15 minutes from the time you attach the secondary EBS till you've got a launchable, LVM-enabled AMI.
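
Heavily condensed, the general shape of the process is something like the following sketch (device names, sizes and the VG/LV layout here are placeholders for illustration, and all of the error-handling and distro-specific fix-ups are omitted):

    #!/bin/sh
    # Condensed sketch of the /boot + LVM chroot-build approach.
    EBS=/dev/xvdf            # the attached secondary EBS
    CHROOT=/mnt/ec2-root

    # 1. Lay out the target disk: a small /boot plus an LVM partition.
    parted -s "${EBS}" mklabel msdos
    parted -s "${EBS}" mkpart primary ext4 1MiB 512MiB
    parted -s "${EBS}" mkpart primary 512MiB 100%
    parted -s "${EBS}" set 1 boot on
    parted -s "${EBS}" set 2 lvm on

    pvcreate "${EBS}2"
    vgcreate VolGroup00 "${EBS}2"
    lvcreate -L 4G -n RootVol VolGroup00
    lvcreate -L 2G -n VarVol  VolGroup00

    mkfs.ext4 -L boot "${EBS}1"
    mkfs.ext4 /dev/VolGroup00/RootVol
    mkfs.ext4 /dev/VolGroup00/VarVol

    # 2. Mount the target tree and chroot-install the OS into it.
    mkdir -p "${CHROOT}"
    mount /dev/VolGroup00/RootVol "${CHROOT}"
    mkdir -p "${CHROOT}/boot" "${CHROOT}/var"
    mount "${EBS}1" "${CHROOT}/boot"
    mount /dev/VolGroup00/VarVol "${CHROOT}/var"
    # (bind-mount /dev, /proc and /sys into ${CHROOT} before installing)

    yum --installroot="${CHROOT}" -y groupinstall core
    yum --installroot="${CHROOT}" -y install kernel grub lvm2 openssh-server

    # 3. Make it bootable: fstab, grub.conf and GRUB stage files, initramfs.
    #    This is where most of the fiddly, distro-specific work lives.

    # 4. Unmount everything, snapshot the EBS, then register an AMI from
    #    the snapshot.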

Full disclosure: the process is a lot easier when using CentOS, Scientific Linux or Amazon Linux than it is using RedHat. This derives from a simple dependency issue, however: it's slightly harder to satisfy a chroot-build installation method with RHEL than with the other ELs, because the other ELs have public RPM repos to draw from. You'd probably have to set up your own RHEL repo from which to initiate your chroot-build.

Disk partitioning is also something I want to do: separating out /var, /tmp, etc. from /.

That's something else I was somewhat surprised to see missing from the standard EC2 RHEL images. I assume that the objective was just to deliver the smallest, most simplistic file-system layout possible, then have people alter it to their needs.

Interesting that CentOS may be simpler than RHEL. I will try out creating a process to build to an attached EBS volume using a CentOS image first.

It's more a legacy issue with EC2: the PVM instance types didn't use to support partitioned disks (or, at least, Amazon didn't support doing so). With the 1.0.3 PVM kernel, you could partition your EBSes and be able to boot a PVM instance-type. With the 1.0.4 PVM kernel, even though they still have hd0 and hd00 kernels in the instance-creation's kernel list, they're actually both the same.

CentOS is only simpler inasmuch as doing a chroot install of RHEL means that you need to have unfettered access to your source RedHat RPM repository. I don't know whether the RedHat image that's on the Amazon Marketplace instantiates bound to a Satellite or whether, with BYOL, you need to take the instance you launched from the Marketplace AMI and then rhn_register it.

I'm "poor", so, I don't have my own "L" to "BYO", thus, I use the freebies to make my AMIs from.

Assuming you have your instance bound to RHN, you could probably do the chroot install just as easily under RHEL as you can CentOS, SciLin or Amazon Linux.

The other thing is, if you have RHN access, what access level do you have to your entitlements? If you're planning to instantiate and terminate instances on a frequent basis and you don't have sufficient access to RHN, managing available entitlements can be kind of a pain (some of which would be alleviated if you do key-based registration, so that a new instance can programmatically supersede prior instances' entitlements).
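
If it helps, key-based registration is basically a one-liner in a first-boot script (the key below is obviously a placeholder):

    # Register against RHN/Satellite at first boot using an activation key.
    # --force lets a replacement instance re-register over a prior profile.
    rhnreg_ks --activationkey=1-my-placeholder-key --force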

Do you have an example of this script?

Threading in the forums is kind of sub-optimal. A couple of posters are referencing scripts; which are you after?

Yours "That said, yes, it's possible to create an AMI that uses a two-partition (/boot + LVM) disk-scheme by bootstrapping from your original ELx image and building onto an attached EBS. I ended up scripting out the process. It takes about 15 minutes from the time you attach the secondary EBS till you've got a launchable, LVM-enabled AMI."

:D

Ah. It's a set of scripts on GitHub. Follow the steps outlined in the README.scripts file to ensure everything's done in the correct order. The README.dependecies file tells you what additional packages your build-host will need in order to use the scripts.

Currently, it's optimized for CentOS and SciLin. Doing a RedHat LVM is a touch more involved:

  • Gotta stage all of the installation RPMs to the build host and create a local repo from that staging location (see the sketch after this list), then apply your build rule. Look at the README.notes_for_RHEL6 file for the necessary repo contents.
  • If you want to have update-entitlement done through a billed AWS account, you need to build from an AWS Red Hat image: spin up a throwaway AWS RedHat instance and attach your EBS in place of the default root-EBS (failure to do this means that the biller attribute won't get populated into your instances and you won't be able to use the AWS RedHat yum repos; I discuss this at my blog).
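
For that first bullet, staging a local repo is pretty mechanical; a rough sketch (all paths and names are placeholders) would be:

    # Stage the RHEL installation RPMs locally and turn the directory into a
    # yum repo that the chroot-build can point at.
    mkdir -p /var/tmp/rhel-repo/Packages
    cp /path/to/staged/RPMS/*.rpm /var/tmp/rhel-repo/Packages/
    createrepo /var/tmp/rhel-repo

    # Point yum at it with a throwaway repo definition.
    printf '%s\n' \
        '[local-rhel]' \
        'name=Local RHEL build repo' \
        'baseurl=file:///var/tmp/rhel-repo' \
        'enabled=1' \
        'gpgcheck=0' > /etc/yum.repos.d/local-rhel.repo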

I've not automated these additional steps yet, however.

Tom - Thanks! This is excellent! I was looking for a way to build AMIs using our internal standards [partitioning, repos, pkgs etc] and this is perfect. Like you mentioned, there are a few glitches, but I was able to take the scripts and the idea and customize them to our needs.

I noticed the scripts create HVM AMIs; however, how can I create a paravirtual, LVM-based AMI with the scripts?

They'll create either PVM or HVM if you don't care about inheriting the billingProducts attribute.

Basically, once you've chroot-built an OS onto the target EBS, you can create a single snapshot, then register a PV and/or an HVM AMI from the common snap (meaning you get two identical-but-for-virtualization-style AMIs).
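
As a rough illustration, using the newer AWS CLI rather than the legacy ec2-* tools (snapshot ID, AKI and names are placeholders; the PV registration ties into the root-device discussion below):

    # Register two AMIs from the same snapshot: one HVM, one PV.
    SNAP=snap-0123456789abcdef0

    # HVM: a /dev/sda1 root device node is fine (it's what you need, actually).
    aws ec2 register-image \
        --name "el6-lvm-hvm" \
        --architecture x86_64 \
        --virtualization-type hvm \
        --root-device-name /dev/sda1 \
        --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=${SNAP}}"

    # PV: needs a pv-grub AKI; with a partitioned EBS you want the whole-disk
    # root device so that an hd00-style GRUB can find /boot.
    aws ec2 register-image \
        --name "el6-lvm-pv" \
        --architecture x86_64 \
        --virtualization-type paravirtual \
        --kernel-id aki-xxxxxxxx \
        --root-device-name /dev/sda \
        --block-device-mappings "DeviceName=/dev/sda,Ebs={SnapshotId=${SNAP}}"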

I was going to try to solve the "inherit billingProducts attribute" problem (when I had some free time), but Amazon is generally in the process of deprecating PV instances (e.g., the new Frankfurt region is HVM-only). So, it wasn't as immediate a priority as solving the general "figure out how to create a common build across all ELn derivatives" problem.

The core of the "inherit billingProducts attribute" problem is that the tool used to register an AMI from an instance doesn't allow you to reset the root device node. Having "/dev/sda1" as your root device node with HVM isn't a problem (it's what you need, actually). Having your root device node on "/dev/sda1" when you've partitioned the EBS results in PV-GRUB not wanting to find "/boot".

Resurrecting this post.

Addressing Thomas or anybody that may have encountered this same hurdle.

Is there a workaround for the root-device reset issue described above (thomas.jones2@dodiis.mil)? I've run into this limitation.

It's generally not a good idea to resurrect posts. Most people don't want to bother reading through years-old history to try to answer a murky question.

It's generally better to start a new thread and more-clearly state your problem.

I have opened https://access.redhat.com/discussions/3793221 to avoid resurrecting this post any further.