Vagrant (VirtualBox)/AMI Parity

Hello,

My team is moving toward this development workflow: develop locally on a Vagrant VM using Test Kitchen with Chef; once we're happy with that, push to CI, which runs the same Test Kitchen suites, but against EC2 in AWS. Ideally our Vagrant base box (built using Packer from the rhel-server-7.2-x86_64-dvd ISO) would be extremely similar to, if not exactly the same as, the base RHEL AMI.
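
For concreteness, here is a rough sketch of that split: a base .kitchen.yml using the kitchen-vagrant driver, plus a CI overlay (merged on top via the KITCHEN_LOCAL_YAML environment variable) that swaps in kitchen-ec2. The box name, cookbook name, and AMI ID are placeholders:

# .kitchen.yml -- local development (kitchen-vagrant)
driver:
  name: vagrant
  box: our-rhel-7.2            # placeholder: the Packer-built base box

provisioner:
  name: chef_zero

platforms:
  - name: rhel-7.2

suites:
  - name: default
    run_list:
      - recipe[our_cookbook::default]   # placeholder cookbook

# .kitchen.ec2.yml -- CI overlay, e.g. KITCHEN_LOCAL_YAML=.kitchen.ec2.yml
driver:
  name: ec2
  region: us-east-1
  instance_type: t2.micro

platforms:
  - name: rhel-7.2
    driver:
      image_id: ami-XXXXXXXX   # placeholder: the published RHEL AMI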

The question is: what's the best way to start from an ISO and run the kickstart so that we end up with a VM that looks as close as possible to the published AMI?

OR

Should we instead start from the ISO, kickstart it however we want, and then publish that as a private AMI in Amazon?

Thanks!

Responses

We came to things from an AWS-first angle. Our initial projects were deployed on AWS, but then we had customers that wanted to be able to use our build outside of AWS - both in VirtualBox and VMware. We ended up taking the automated build-tools we'd written for direct-building in AWS and encapsulating them into Packer-managed workflows for VirtualBox and VMware (and, eventually, Azure and Google Cloud).

Using Packer to manage builds in the target environments is a major time-saver - particularly when making cloud images. Transferring a locally-built VM image into the cloud services easily trebles the time required to create a launchable template in the cloud-provider's environment (versus doing a direct-build in that environment).

Overall, our approach was to build an LVM-enabled "@Core" image (our security requirements necessitate the use of LVM for root filesystems) for each environment and then layer on the components specific to the deployment target (e.g., our AWS AMIs contain the @Core RPM set plus cloud-init plus all of the AWS-related RPMs normally found in Amazon Linux: aws-cli, aws-cfn-bootstrap, aws-apitools, ec2-net-utils, etc.).
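
In KickStart terms, the LVM piece of that looks something like the following (a minimal sketch; the volume group name, logical volume name, and sizes are illustrative):

# Illustrative KickStart disk layout for an LVM-encapsulated root
zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=500
part pv.01 --size=1 --grow
volgroup VolGroup00 pv.01
logvol / --vgname=VolGroup00 --name=rootVol --fstype=xfs --size=1 --grow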

Note: putting the root partitions under LVM also meant having to hack at the dracut modules that support resizing the root disk so that they understand what to do with an LVM-encapsulated root partition.
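
For anyone facing the same issue, the work those modules have to do is roughly equivalent to these manual steps (a sketch only; device and volume names are illustrative):

# Growing an LVM-backed root after the underlying disk expands
growpart /dev/xvda 2                            # grow the partition backing the PV
pvresize /dev/xvda2                             # tell LVM about the new space
lvextend -l +100%FREE /dev/VolGroup00/rootVol   # grow the root logical volume
xfs_growfs /                                    # grow the mounted XFS filesystem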

Hey Tom, thanks for replying! I want to make sure I understand: you're saying don't bother trying to convert an offline VM to a cloud image, since the time sink is substantial. So for your VirtualBox/VMware builds, you start from... an ISO?

What we found is that our initial build from the ISO was very different from the default RHEL AMI (installed packages especially). This may be due to how we do our initial build from ISO.
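
For reference, one quick way to quantify that package drift (file names here are illustrative):

# On each system:
rpm -qa --qf '%{NAME}\n' | sort -u > /tmp/pkgs-$(hostname).txt
# Then, with both lists copied to one machine:
comm -3 /tmp/pkgs-vbox.txt /tmp/pkgs-ami.txt    # shows packages unique to either side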

Our hope is indeed to keep parallel builds (using Packer templates) so that our VirtualBox VMs stay in line with our AWS AMIs.

What do you all do as part of your kickstart from ISO?

With respect to the cloud-hosted instances: transferring an image created in either VMware or VirtualBox took us roughly 3x as long as simply building our AMI directly within the cloud. So, we build our AMIs directly in the cloud (using Packer as an orchestrator for the cloud-hosted scripted builds).
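
In outline, the Packer side of a direct-in-cloud build is an amazon-ebs builder along these lines (region, source AMI, AMI name, and provisioning script are placeholders):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-XXXXXXXX",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "rhel-7.2-core-{{timestamp}}"
  }],
  "provisioners": [
    { "type": "shell", "script": "scripts/build-core.sh" }
  ]
}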

With respect to building images for VirtualBox/VMware/etc., we created a custom KickStart file based on the package list used for our cloud-hosted images. Whether you use a custom KickStart to govern an ISO build or a network-based build isn't particularly material to the resultant system image. We start from ISO because most of the groups we provide image-standardization for have no network-based build infrastructure. That said, if you have Satellite, SpaceWalk, etc., to net-build your images, the KickStart file should work in that context as well.
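
On the VirtualBox side, the usual pattern is to have Packer serve the KickStart file over its built-in HTTP server and point the installer at it from the boot prompt; a minimal virtualbox-iso sketch (ISO path, checksum, and credentials are placeholders):

{
  "builders": [{
    "type": "virtualbox-iso",
    "guest_os_type": "RedHat_64",
    "iso_url": "rhel-server-7.2-x86_64-dvd.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "REPLACE-WITH-ISO-SHA256",
    "http_directory": "kickstart",
    "boot_command": [
      "<up><tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
    ],
    "ssh_username": "root",
    "ssh_password": "changeme",
    "shutdown_command": "shutdown -P now"
  }]
}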

The key to getting your packages consistent across images is your %packages section. Something basic like:

%packages
@Core
kernel
ntp
openssh-clients
selinux-policy
%end

is a good starting point.
