LVM Frustrations in AWS


Our security folks require that any EL6 systems we deploy have a specific minimum partitioning scheme for the root filesystems. The most practical way to meet those requirements is to use LVM. Under a 'cattle' model, using LVM for root is a non-issue ("botched a config file in /etc and broke the system? No problem: nuke it and re-launch"). Unfortunately, many of the applications we support haven't reached the point where re-launching is quicker than attempting a repair.

Further complicating things, our security rules mean that application owners only have a very limited set of AMIs they're allowed to launch (i.e., they can't just launch a community/Marketplace AMI that doesn't use LVM to act as a fix-host). If they want to move a broken instance's root disk to a fix-host, the LVM objects' names and UUIDs collide with those already present on the fix-host.
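For context, the fix-host workflow in question is roughly the following; the volume, instance, and device identifiers below are placeholders, not values from our environment:

    # Move the broken instance's root EBS volume to a healthy instance
    # built from the same approved AMI. All IDs here are hypothetical.
    aws ec2 stop-instances --instance-ids i-0brokenbox
    aws ec2 detach-volume --volume-id vol-0brokenroot
    aws ec2 attach-volume --volume-id vol-0brokenroot \
        --instance-id i-0fixhost --device /dev/sdf

Once that second disk shows up on the fix-host, the duplicate VG names and PV UUIDs are what get in the way.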

The name issue is fairly trivial to solve with about four commands and a reboot. The UUID problem, however, is a blocker. Even with non-colliding LVM object names, instances launched from the same AMI have identical PV UUIDs. This confuses the hell out of the LVM tools, so running pvchange --uuid /dev/path against the (to-be-imported) disk fails.
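For reference, the rename side amounts to giving the fix-host's own root VG a unique name so the imported disk's names no longer collide; something like the sketch below, where the VG name "VolGroup" is just an example (the stock name depends on how the AMI was built):

    # Rough sketch of the ~4-command rename fix (VG name assumed)
    vgrename VolGroup vg_fixhost01                            # unique VG name
    sed -i 's/VolGroup/vg_fixhost01/g' /etc/fstab             # mount entries
    sed -i 's/VolGroup/vg_fixhost01/g' /boot/grub/grub.conf   # root= / rd_LVM_LV= args
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)     # rebuild the initramfs
    reboot

None of that touches the UUIDs, though.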

Anyone know a good way to alter the PV UUIDs outside the scope of the LVM tools, or is this just a total horror-show to try to sort out?

Responses

Sorted out the LVM name collisions with a user-data script. Still can't solve the PV UUID collisions, though: the pvchange still fails because of the duplicate UUIDs. The only way to avoid it is to use a different month's AMIs for recovery than were used to launch the production instances.
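For anyone curious, the user-data approach amounts to a first-boot rename, roughly along these lines; the stock VG name and the instance-id suffix are assumptions, not the exact script:

    #!/bin/bash
    # First-boot sketch: give the root VG an instance-unique name so a later
    # rescue-attach from the same AMI doesn't collide on VG name.
    # Assumes the AMI's root VG is named "VolGroup".
    SUFFIX=$(curl -s http://169.254.169.254/latest/meta-data/instance-id | sed 's/^i-//')
    vgrename VolGroup "vg_${SUFFIX}"
    sed -i "s/VolGroup/vg_${SUFFIX}/g" /etc/fstab /boot/grub/grub.conf
    dracut -f "/boot/initramfs-$(uname -r).img" "$(uname -r)"
    # The new name takes effect at the next boot.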

So, only halfway there.
