Creating and Consuming Execution Environments
Create consistent and reproducible automation execution environments for your Red Hat Ansible Automation Platform.
Abstract
Preface
Use Ansible Builder to create consistent and reproducible automation execution environments for your Red Hat Ansible Automation Platform needs.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Introduction to automation execution environments
Using Ansible content that depends on non-default dependencies can be complicated because the packages must be installed on each node, interact with other software installed on the host system, and be kept in sync.
Automation execution environments help simplify this process and can easily be created with Ansible Builder.
1.1. About automation execution environments
Automation execution environments are container images on which all automation in Red Hat Ansible Automation Platform is run. Automation execution environments create a common language for communicating automation dependencies, and provide a standard way to build and distribute the automation environment.
An automation execution environment is expected to contain the following:
- Ansible 2.9 or Ansible Core 2.11-2.13
- Python 3.8-3.10
- Ansible Runner
- Ansible content collections
- Collection, Python, or system dependencies
1.1.1. Why use automation execution environments?
With automation execution environments, Red Hat Ansible Automation Platform has transitioned to a distributed architecture by separating the control plane from the execution plane. Keeping automation execution independent of the control plane results in faster development cycles and improves scalability, reliability, and portability across environments. Red Hat Ansible Automation Platform also includes access to Ansible content tools, making it easy to build and manage automation execution environments.
In addition to speed, portability, and flexibility, automation execution environments provide the following benefits:
- They ensure that automation runs consistently across multiple platforms and make it possible to incorporate system-level dependencies and collection-based content.
- They give Red Hat Ansible Automation Platform administrators the ability to provide and manage automation environments to meet the needs of different teams.
- They allow automation to be easily scaled and shared between teams by providing a standard way of building and distributing the automation environment.
- They enable automation teams to define, build, and update their automation environments themselves.
- They provide a common language to communicate automation dependencies.
Chapter 2. Using Ansible Builder
Ansible Builder is a command line tool that automates the process of building automation execution environments by using metadata defined in various Ansible Collections or created by the user.
2.1. Why use Ansible Builder?
Before Ansible Builder was developed, Red Hat Ansible Automation Platform users could run into dependency issues and errors when creating custom virtual environments or containers that included all of the required dependencies.
Now, with Ansible Builder, you can easily create a customizable automation execution environment definition file that specifies the content you want included in your automation execution environments, such as collections, Python requirements, and system-level packages. This helps you fulfill all of the necessary requirements and dependencies to get jobs running.
2.2. Installing Ansible Builder
You can install Ansible Builder using Red Hat Subscription Management (RHSM) to attach your Red Hat Ansible Automation Platform subscription. Attaching your Red Hat Ansible Automation Platform subscription allows you to access subscription-only resources necessary to install ansible-builder. Once you attach your subscription, the necessary repository for ansible-builder is automatically enabled.
You must have valid subscriptions attached on the host before installing ansible-builder.
Procedure
In your terminal, run the following command to install Ansible Builder and activate your Ansible Automation Platform repo:
# dnf install --enablerepo ansible-automation-platform-2.2-for-rhel-8-x86_64-rpms ansible-builder
2.3. Building a definition file
Once you have Ansible Builder installed, you can create a definition file that Ansible Builder uses to create your automation execution environment image. The high-level process to build an automation execution environment image is for Ansible Builder to read and validate your definition file, then create a Containerfile, and finally pass the Containerfile to Podman, which packages and creates your automation execution environment image. The definition file is in YAML format and contains different sections. For more information about the definition file content, see Breakdown of definition file content.
The following is an example of a definition file:
Example 2.1. A definition file
version: 1
build_arg_defaults:
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: "-v"
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt
additional_build_steps:
  prepend: |
    RUN whoami
    RUN cat /etc/os-release
  append:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc
For more information about these definition file parameters, see Breakdown of definition file content.
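The definition file example above references three dependency files; if any of them is missing, the build fails during the validation step. When experimenting, you can scaffold minimal placeholders first (the package and collection names below are illustrative only, not requirements of any particular project):

```shell
# Scaffold minimal dependency files referenced by the example definition file.
# All names below are illustrative placeholders.
cat > requirements.yml <<'EOF'
collections:
  - community.aws
EOF

cat > requirements.txt <<'EOF'
pytz
EOF

cat > bindep.txt <<'EOF'
libxml2-devel [platform:rpm]
EOF

# List the files to confirm they exist before running ansible-builder build
ls requirements.yml requirements.txt bindep.txt
```

With these files in place alongside the definition file, every entry in the dependencies section resolves to a real file.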
2.4. Executing the build and creating commands
Prerequisites
- You have created a definition file
Procedure
To build an automation execution environment image, run:
$ ansible-builder build
By default, Ansible Builder looks for a definition file named execution-environment.yml, but a different file path can be specified as an argument with the -f flag:
$ ansible-builder build -f definition-file-name.yml
where definition-file-name specifies the name of your definition file.
2.5. Breakdown of definition file content
A definition file is required for building automation execution environments with Ansible Builder, because it specifies the content that is included in the automation execution environment container image.
The following sections break down the different parts of a definition file.
2.5.1. Build args and base image
The build_arg_defaults section of the definition file is a dictionary whose keys can provide default values for arguments to Ansible Builder. See the following table for a list of values that can be used in build_arg_defaults:
Value | Description
---|---
ANSIBLE_GALAXY_CLI_COLLECTION_OPTS | Allows the user to pass arbitrary arguments to the ansible-galaxy CLI during the collection installation phase. For example, the --pre flag to enable the installation of pre-release collections, or -c to disable verification of the server's SSL certificate.
EE_BASE_IMAGE | Specifies the parent image for the automation execution environment, enabling a new image to be built that is based off of an already-existing image. This is typically a supported execution environment base image such as ee-minimal or ee-supported, but it can also be an execution environment image that you have created previously and want to customize further.
EE_BUILDER_IMAGE | Specifies the intermediate builder image used for Python dependency collection and compilation; it must contain a Python version matching that of the base image.
The values given inside build_arg_defaults will be hard-coded into the Containerfile, so these values will persist if podman build is called manually.
If the same variable is specified in the CLI --build-arg flag, the CLI value takes precedence.
2.5.2. Ansible config file path
The ansible_config directive allows specifying the path to an ansible.cfg file to pass a token and other settings for a private account to an automation hub server during the Collection installation stage of the build. The config file path should be relative to the definition file location, and will be copied to the generated container build context.
The ansible.cfg file should be formatted like the following example:
Example 2.2. An ansible.cfg file
[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token
For more information on how to download a collection from automation hub, please see the related Ansible documentation page.
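Because the token is a credential, one approach is to generate the ansible.cfg at build time from an environment variable, so the token itself never lands in version control. The following is an optional sketch; AH_TOKEN is a hypothetical variable name and my_ah_token is a placeholder value:

```shell
# Write an ansible.cfg next to the definition file, reading the automation hub
# token from an environment variable instead of hard-coding it.
AH_TOKEN="${AH_TOKEN:-my_ah_token}"   # placeholder default for illustration

cat > ansible.cfg <<EOF
[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=${AH_TOKEN}
EOF

# Confirm the token line was written
grep '^token=' ansible.cfg
```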
2.5.3. Dependencies
To avoid issues with your automation execution environment image, make sure that the entries for Galaxy, Python, and system point to a valid requirements file.
2.5.3.1. Galaxy
The galaxy entry points to a valid requirements file for the ansible-galaxy collection install -r command.
The entry requirements.yml may be a relative path from the directory of the automation execution environment definition's folder, or an absolute path.
The content of a requirements.yml file may look like the following:
Example 2.3. A requirements.yml file for Galaxy
collections:
  - community.aws
  - kubernetes.core
2.5.3.2. Python
The python entry in the definition file points to a valid requirements file for the pip install -r command.
The entry requirements.txt is a file that installs extra Python requirements on top of what the Collections already list as their Python dependencies. It may be listed as a relative path from the directory of the automation execution environment definition's folder, or an absolute path. The contents of a requirements.txt file should be formatted like the following example, similar to the standard output from a pip freeze command:
Example 2.4. A requirements.txt file for Python
boto>=2.49.0
botocore>=1.12.249
pytz
python-dateutil>=2.7.0
awxkit
packaging
requests>=2.4.2
xmltodict
azure-cli-core==2.11.1 ; python_version >= '2.7'  # collection community.vmware
google-auth
openshift>=0.6.2
requests-oauthlib
openstacksdk>=0.13
ovirt-engine-sdk-python>=4.4.10
2.5.3.3. System
The system entry in the definition points to a bindep requirements file, which will install system-level dependencies that are outside of what the collections already include as their dependencies. It can be listed as a relative path from the directory of the automation execution environment definition's folder, or an absolute path. A minimum expectation is that the collection(s) specify necessary requirements for [platform:rpm].
To demonstrate this, the following is an example bindep.txt file that adds the libxml2 and subversion packages to a container:
Example 2.5. A bindep.txt file
libxml2-devel [platform:rpm]
subversion [platform:rpm]
Entries from multiple collections are combined into a single file. This file is processed by bindep and then passed to dnf. Only requirements with no profiles (runtime requirements) will be installed to the image.
2.5.4. Additional custom build steps
The prepend and append commands may be specified in the additional_build_steps section. These add commands to the Containerfile which run either before or after the main build steps are executed.
The syntax for additional_build_steps must be one of the following:
a multi-line string
Example 2.6. A multi-line string entry
prepend: |
  RUN whoami
  RUN cat /etc/os-release
a list
Example 2.7. A list entry
append:
  - RUN echo This is a post-install command!
  - RUN ls -la /etc
2.6. Optional build command arguments
The -t flag tags your automation execution environment image with a specific name. For example, the following command builds an image named my_first_ee_image:
$ ansible-builder build -t my_first_ee_image
If you do not use -t with build, an image called ansible-execution-env is created and loaded into the local container registry.
If you have multiple definition files, you can specify which one to use with the -f flag:
$ ansible-builder build -f another-definition-file.yml -t another_ee_image
In the example above, Ansible Builder uses the specifications provided in the file another-definition-file.yml instead of the default execution-environment.yml to build an automation execution environment image named another_ee_image.
For other specifications and flags that you can use with the build command, enter ansible-builder build --help to see a list of additional options.
2.7. Containerfile
Once your definition file is created, Ansible Builder reads and validates it, then creates a Containerfile, and finally passes the Containerfile to Podman to package and create your automation execution environment image using the following instructions:
- Fetch the base image.
- In the ephemeral copy of the base image, collections are downloaded and the list of declared Python and system dependencies, if any, is collected for later.
- In the ephemeral builder image, Python wheels for all Python dependencies listed in the definition file are downloaded and built (as needed), including all Python dependencies declared by collections listed in the definition file.
- The prepend commands from additional_build_steps in the definition file are run.
- In the final automation execution environment image, system dependencies listed in the definition file are installed, including all system dependencies declared by collections listed in the definition file.
- In the final automation execution environment image, the downloaded collections are copied and the previously fetched Python dependencies are installed.
- The append commands from additional_build_steps in the definition file are run.
2.8. Creating a Containerfile without building an image
To create a shareable Containerfile without building an image from it, run:
$ ansible-builder create
Chapter 3. Publishing an automation execution environment
3.1. Customizing an existing automation execution environments image
Ansible Controller includes three default execution environments:
- Ansible 2.9: no collections are installed other than Controller modules.
- Minimal: contains the latest Ansible 2.13 release along with Ansible Runner, but contains no collections or other additional content.
- EE Supported: Minimal, plus all Red Hat-supported collections and dependencies.
While these environments cover many automation use cases, you can add additional items to customize these containers for your specific needs. The following procedure adds the kubernetes.core collection to the ee-minimal default image:
Procedure
Log in to registry.redhat.io via Podman:
$ podman login -u="[username]" -p="[token/hash]" registry.redhat.io
Ensure that you can pull the desired automation execution environment base image:
$ podman pull registry.redhat.io/ansible-automation-platform-22/ee-minimal-rhel8:latest
Configure your Ansible Builder files to specify the desired base image and any additional content to add to the new execution environment image.
For example, to add the Kubernetes Core Collection from Galaxy to the image, fill out the requirements.yml file as follows:
collections:
  - kubernetes.core
- For more information on definition files and their content, refer to the definition file breakdown section.
In the execution environment definition file, specify the original ee-minimal container's URL and tag in the EE_BASE_IMAGE field. In doing so, your final execution-environment.yml file will look like the following:
Example 3.1. A customized execution-environment.yml file
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-22/ee-minimal-rhel8:latest'
dependencies:
  galaxy: requirements.yml
Note: Since this example uses the community version of kubernetes.core and not a certified collection from automation hub, we do not need to create an ansible.cfg file or reference one in our definition file.
Build the new execution environment image using the following command:
$ ansible-builder build -t registry.redhat.io/[username]/new-ee
where [username] specifies your username, and new-ee specifies the name of your new container image.
If you do not use -t with build, an image called ansible-execution-env is created and loaded into the local container registry.
Use the podman images command to confirm that your new container image is in that list:
Example 3.2. Output of a podman images command with the image new-ee
REPOSITORY         TAG      IMAGE ID       CREATED         SIZE
localhost/new-ee   latest   f5509587efbb   3 minutes ago   769 MB
Verify that the collection is installed:
$ podman run registry.redhat.io/[username]/new-ee ansible-doc -l kubernetes.core
Tag the image for use in your automation hub:
$ podman tag registry.redhat.io/[username]/new-ee [automation-hub-IP-address]/[username]/new-ee
Log in to your automation hub using Podman:
Note: You must have admin or appropriate container repository permissions for automation hub to push a container. See Managing containers in private automation hub in the Red Hat Ansible Automation Platform documentation for more information.
$ podman login -u="[username]" -p="[token/hash]" [automation-hub-IP-address]
Push your image to the container registry in automation hub:
$ podman push [automation-hub-IP-address]/[username]/new-ee
- Pull your new image into your automation controller instance:
- Navigate to automation controller.
- From the side-navigational bar, click Administration → Execution Environments.
- Click Add.
- Enter the appropriate information, then click Save to pull in the new image.
Note: If your instance of automation hub is password or token protected, ensure that you have the appropriate container registry credential set up.
Chapter 4. Populating your private automation hub container registry
By default, private automation hub does not include container images. To populate your container registry, you need to push a container image to it. The procedures in this section describe how to pull images from the Red Hat Ecosystem Catalog (registry.redhat.io), tag them, and push them to your private automation hub container registry.
4.1. Prerequisites
- You have permissions to create new containers and push containers to private automation hub.
4.2. Obtaining images for use in automation hub
Before you can push container images to your private automation hub, you must first pull them from an existing registry and tag them for use. This example details how to pull an image from the Red Hat Ecosystem Catalog (registry.redhat.io).
Prerequisites
You have permissions to pull images from registry.redhat.io.
Procedure
Log in to Podman using your registry.redhat.io credentials:
$ podman login registry.redhat.io
- Enter your username and password at the prompts.
Pull a container image:
$ podman pull registry.redhat.io/<container_image_name>:<tag>
Verification
List the images in local storage:
$ podman images
- Verify that the image you recently pulled is contained in the list.
- Verify that the tag is correct.
Additional resources
- See Red Hat Ecosystem Catalog Help for information on registering and getting images.
4.3. Tagging images for use in automation hub
After you pull images from a registry, tag them for use in your private automation hub container registry.
Prerequisites
- You have pulled a container image from an external registry.
- You have the FQDN or IP address of the automation hub instance.
Procedure
Tag a local image with the automation hub container repository:
$ podman tag registry.redhat.io/<container_image_name>:<tag> <automation_hub_URL>/<container_image_name>
Verification
List the images in local storage:
$ podman images
- Verify that the image you recently tagged with your automation hub information is contained in the list.
4.4. Pushing a container image to private automation hub
You can push tagged container images to private automation hub to create new containers and populate the container registry.
Prerequisites
- You have permissions to create new containers.
- You have the FQDN or IP address of the automation hub instance.
Procedure
Log in to Podman using your automation hub location and credentials:
$ podman login -u=<username> -p=<password> <automation_hub_url>
Push your container image to your automation hub container registry:
$ podman push <automation_hub_url>/<container_image_name>
Note: The --remove-signatures flag is required when signed images from registry.redhat.io are pushed to the automation hub container registry. The push operation re-compresses image layers during the upload, which is not guaranteed to be reproducible and is client implementation dependent. This may lead to image-layer digest changes and a failed push operation, resulting in Error: Copying this image requires changing layer representation, which is not possible (image is signed or the destination specifies a digest).
Verification
- Log in to your automation hub.
- Navigate to Container Registry.
- Locate the container in the container repository list.
Chapter 5. Setting up your container repository
You can set up your container repository to add a description, include a README, add groups who can access the repository, and tag images.
5.1. Prerequisites
- You are logged in to a private automation hub with permissions to change the repository.
5.2. Adding a README to your container repository
Add a README to your container repository to provide instructions to your users for how to work with the container. Automation hub container repositories support Markdown for creating a README. By default, the README will be empty.
Prerequisites
- You have permissions to change containers.
Procedure
- Navigate to Execution Environments.
- Select your container repository.
- On the Detail tab, click Add.
- In the Raw Markdown text field, enter your README text in Markdown.
- Click Save when finished.
Once you add a README, you can edit it at any time by clicking Edit and repeating steps 4 and 5.
5.3. Providing access to your container repository
Provide access to your container repository to users who need to work with the images. Adding a group allows you to modify the permissions the group can have to the container repository. You can use this option to extend or restrict permissions based on what the group is assigned.
Prerequisites
- You have change container namespace permissions.
Procedure
- Navigate to Execution Environments.
- Select your container repository.
- Click Edit at the top right of your window.
Under Groups with access, select a group or groups to grant access to.
- Optional: Add or remove permissions for a specific group using the drop down under that group name.
- Click Save.
5.4. Tagging container images
Tag images to add an additional name to images stored in your automation hub container repository. If no tag is added to an image, automation hub defaults to latest for the name.
Prerequisites
- You have change image tags permissions.
Procedure
- Navigate to Execution Environments.
- Select your container repository.
- Click the Images tab.
- Click the menu icon, then click Manage tags.
Add a new tag in the text field and click Add.
- Optional: Remove current tags by clicking the x on any of the tags for that image.
- Click Save.
Verification
- Click the Activity tab and review the latest changes.
Chapter 6. Pulling images from a container repository
Pull images from the automation hub container registry to make a copy on your local machine. Automation hub provides the podman pull command for each latest image in the container repository. You can copy and paste this command into your terminal, or use podman pull to copy an image based on an image tag.
6.1. Prerequisites
- You have permission to view and pull from a private container repository.
6.2. Pulling an image
You can pull images from the automation hub container registry to make a copy on your local machine. Automation hub provides the podman pull command for each latest image in the container repository.
Procedure
- Navigate to Execution Environments.
- Select your container repository.
- In the Pull this image entry, click Copy to clipboard.
- Paste and run the command in your terminal.
Verification
- Run podman images to view images on your local machine.
6.3. Additional resources
- See the Podman documentation for options to use when pulling images.
Appendix A. Automation execution environments precedence
Project updates always use the control plane automation execution environment by default. However, jobs use the first available automation execution environment in the following order:
1. The execution_environment defined on the template (job template or inventory source) that created the job.
2. The default_environment defined on the project that the job uses.
3. The default_environment defined on the organization of the job.
4. The default_environment defined on the organization of the inventory the job uses.
5. The current DEFAULT_EXECUTION_ENVIRONMENT setting (configurable at api/v2/settings/jobs/).
6. Any image from the GLOBAL_JOB_EXECUTION_ENVIRONMENTS setting.
7. Any other global execution environment.
If more than one execution environment fits the criteria (this applies to items 6 and 7), the most recently created one is used.
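The fallback behaves like a chain of defaults: the first non-empty value wins. The following is a minimal sketch of that selection logic; all variable names and values here are hypothetical, not controller settings:

```shell
# Hypothetical values: nothing set on the template or project, but the
# job's organization defines a default execution environment.
template_ee=""
project_default_ee=""
org_default_ee="org-ee:latest"

# Take the first non-empty value, mirroring the precedence list above.
resolved="${template_ee:-${project_default_ee:-${org_default_ee}}}"
echo "$resolved"   # prints org-ee:latest
```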