Chapter 4. Disconnected installation
4.1. Ansible Automation Platform installation on disconnected RHEL
Install Ansible Automation Platform (AAP) automation controller and a private automation hub, with an installer-managed database located on the automation controller without an Internet connection.
To install AAP on a disconnected network, complete the following prerequisites:
- Create a subscription manifest.
- Download the AAP setup bundle.
- Create DNS records for the automation controller and private automation hub servers.
The setup bundle includes additional components that make installing AAP easier in a disconnected environment. These include the AAP RPMs and the default execution environment (EE) images.
4.1.2. System Requirements
Hardware requirements are documented in the "Red Hat Ansible Automation Platform Installation Guide" in the AAP Product Documentation for your version of AAP.
4.1.3. RPM Source
RPM dependencies for AAP that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must obtain access to the BaseOS and AppStream repositories.
- Satellite - The method Red Hat recommends for synchronizing repositories
- reposync - Makes full copies of the required RPM repositories and hosts them on the disconnected network
- RHEL Binary DVD - Use the RPMs available on the RHEL 8 Binary DVD
The RHEL Binary DVD method requires the DVD for supported versions of RHEL 8.4 or higher. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported.
4.2. Synchronizing RPM repositories by using reposync
To perform a reposync you need a RHEL host that has access to the Internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server.
Attach the BaseOS and AppStream required repositories:
# subscription-manager repos \
    --enable rhel-8-for-x86_64-baseos-rpms \
    --enable rhel-8-for-x86_64-appstream-rpms
Perform the reposync:
# dnf install yum-utils
# reposync -m --download-metadata --gpgcheck \
    -p /path/to/download
Make certain that you use reposync with --download-metadata and without --newest-only. See [RHEL 8] Reposync.
- If not using --newest-only, the repos downloaded are approximately 90 GB.
- If using --newest-only, the repos downloaded are approximately 14 GB.
If you plan to use Red Hat Single Sign-On (RHSSO) you must also sync these repositories.
- After the reposync is completed, your repositories are ready to use with a web server.
- Move the repositories to your disconnected network.
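Before moving the synced repositories across the boundary, it can help to record a checksum for every file so the transfer can be verified on the disconnected side. A minimal sketch; the repos-demo directory and repos.sha256 manifest name are illustrative stand-ins for your reposync output directory:

```shell
# Create a demo repo tree (stand-in for the reposync download directory).
mkdir -p repos-demo/rhel-8-for-x86_64-baseos-rpms
echo "fake-rpm-data" > repos-demo/rhel-8-for-x86_64-baseos-rpms/example.rpm

# On the connected side: record a checksum for every file in the tree.
( cd repos-demo && find . -type f -print0 | xargs -0 sha256sum > ../repos.sha256 )

# On the disconnected side, after the transfer: verify every file.
( cd repos-demo && sha256sum --check --quiet ../repos.sha256 ) \
  && echo "transfer verified"
```

Any file that was truncated or corrupted in transit fails the check by name, which is far easier to debug than a failed dnf transaction later.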
4.3. Creating a new web server to host repositories
If you do not have an existing web server to host your repositories, create one with the synced repositories.
Use the following steps if creating a new web server.
Install httpd:
$ sudo dnf install httpd
Configure httpd to serve the repo directory:
/etc/httpd/conf.d/repository.conf

DocumentRoot '/path/to/repos'

<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /.noindex.html
</LocationMatch>

<Directory '/path/to/repos'>
    Options All Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
Ensure that the directory is readable by the apache user:
$ sudo chown -R apache /path/to/repos
$ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
$ sudo restorecon -ir /path/to/repos
$ sudo systemctl enable --now httpd.service
$ sudo firewall-cmd --zone=public --add-service=http --add-service=https --permanent
$ sudo firewall-cmd --reload
On the automation controller and automation hub, add a repo file at /etc/yum.repos.d/local.repo, adding the optional repos if needed:
[Local-BaseOS]
name=Local BaseOS
baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[Local-AppStream]
name=Local AppStream
baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
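Because the two repo stanzas differ only in the repo ID, the file can be generated from a single FQDN variable, which avoids copy-paste drift between hosts. A sketch; WEBSERVER_FQDN and the output file name are illustrative (in production the output path would be /etc/yum.repos.d/local.repo):

```shell
# Generate a local.repo file for the BaseOS and AppStream mirrors.
WEBSERVER_FQDN=repos.example.org   # illustrative web server FQDN
OUTFILE=local.repo                 # in production: /etc/yum.repos.d/local.repo
: > "$OUTFILE"

for repo in rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms; do
  cat >> "$OUTFILE" <<EOF
[Local-${repo}]
name=Local ${repo}
baseurl=http://${WEBSERVER_FQDN}/${repo}
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

EOF
done
```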
4.4. Accessing RPM Repositories for Locally Mounted DVD
To access the repositories from the DVD, you must set up a local repository. This section shows how to do that.
Mount the DVD or ISO.
DVD:
# mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
ISO:
# mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
Create a yum repo file (for example, at /etc/yum.repos.d/dvd.repo) with the following content:
[dvd-BaseOS]
name=DVD for RHEL - BaseOS
baseurl=file:///media/rheldvd/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[dvd-AppStream]
name=DVD for RHEL - AppStream
baseurl=file:///media/rheldvd/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Import the GPG key:
# rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release
If the key is not imported, you will see an error similar to the following:
# Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]
To set up a repository, see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8.
4.5. Adding a Subscription Manifest to AAP without an Internet connection
To add a subscription to AAP without an Internet connection, create and import a subscription manifest.
- Log in to access.redhat.com.
- Navigate to Subscriptions → Subscriptions.
- Click Subscription Allocations.
- Click Create New subscription allocation.
- Name the new subscription allocation.
- Select Satellite 6.8 as the type.
- Click Create. The Details tab will open for your subscription allocation.
- Click the Subscriptions tab.
- Click Add Subscription.
- Find your AAP subscription, and in the Entitlements box, add the number of entitlements you want to assign to your environment. A single entitlement is needed for each node that AAP manages: server, network device, and so on.
- Click Submit.
- Click Export Manifest.
- This downloads a file manifest_<allocation name>_<date>.zip that can be imported into the automation controller after installation.
4.6. Installing the AAP Setup Bundle
The “bundle” version is strongly recommended for disconnected installations as it comes with the RPM content for AAP as well as the default execution environment images that will be uploaded to your Private Automation Hub during the installation process.
4.6.1. Downloading the Setup Bundle
- Download the AAP setup bundle package by navigating to https://access.redhat.com/downloads/content/480 and clicking Download Now for the Ansible Automation Platform 2.3 Setup Bundle.
4.6.2. Installing the Setup Bundle
Download and install the setup bundle on the automation controller. From the controller, untar the bundle, edit the inventory file, and run the setup.
Untar the bundle:
$ tar xvf \
    ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.3-1.2
Edit the inventory file to include the required options
- automationcontroller group
- automationhub group
- automationhub_pg_host, automationhub_pg_port
[automationcontroller]
automationcontroller.example.org ansible_connection=local

[automationcontroller:vars]
peers=execution_nodes

[automationhub]
automationhub.example.org

[all:vars]
admin_password='password123'
pg_database='awx'
pg_username='awx'
pg_password='dbpassword123'
receptor_listener_port=27199
automationhub_admin_password='hubpassword123'
automationhub_pg_host='automationcontroller.example.org'
automationhub_pg_port='5432'
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='dbpassword123'
automationhub_pg_sslmode='prefer'

Note
The inventory should be kept intact after installation since it is used for backup, restore, and upgrade functions. Consider keeping a backup copy in a secure location, given that the inventory file contains passwords.
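The backup-copy advice above can be made routine with a dated, root-only copy taken before each change. A minimal sketch; the backup-demo directory and stand-in inventory are illustrative, and in production the source would be the inventory file inside your setup bundle directory:

```shell
# Keep a dated, permission-restricted copy of the installation inventory.
mkdir -p backup-demo
echo "[automationcontroller]" > backup-demo/inventory   # stand-in inventory

ts=$(date +%Y%m%d)
cp backup-demo/inventory "backup-demo/inventory.${ts}.bak"
chmod 600 "backup-demo/inventory.${ts}.bak"   # the file contains passwords
```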
Run the AAP setup bundle executable as the root user
$ sudo -i
# cd /path/to/ansible-automation-platform-setup-bundle-2.3-1.2
# ./setup.sh
- Once installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the Automation controller node that was specified in the installation inventory file.
- Log in with the administrator credentials specified in the installation inventory file.
4.7. Completing Post Installation Tasks
4.7.1. Adding Controller Subscription
Navigate to the FQDN of the automation controller. Log in with admin and the password you specified as admin_password in your inventory file.
- Click Browse and select the manifest.zip you created earlier.
- Click Next.
- Uncheck User analytics and Automation analytics. These rely on an Internet connection and should be turned off.
- Click Next.
- Read the End User License Agreement and click Submit if you agree.
4.7.2. Updating the CA trust store
4.7.2.1. Self-Signed Certificates
By default, AAP hub and controller are installed using self-signed certificates. This creates an issue where the controller does not trust the hub's certificate and will not download the execution environments from the hub. The solution is to import the hub's CA cert as a trusted cert on the controller. You can use SCP or directly copy and paste from one file into another to perform this action. The steps below are copied from a KB article found at https://access.redhat.com/solutions/6707451.
4.7.2.2. Copying the root certificate on the private automation hub to the automation controller using secure copy (SCP)
If SSH is available as the root user between the controller and the private automation hub, use SCP to copy the root certificate on the private automation hub to the controller, and run update-ca-trust on the controller to update the CA trust store.
On the Automation controller
$ sudo -i
# scp <hub_fqdn>:/etc/pulp/certs/root.crt \
    /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust
4.7.2.3. Copying and Pasting
If SSH is unavailable as root between the private automation hub and the controller, copy the contents of the file /etc/pulp/certs/root.crt on the private automation hub and paste it into a new file on the controller called /etc/pki/ca-trust/source/anchors/automationhub-root.crt. After the new file is created, run update-ca-trust to update the CA trust store with the new certificate.
On the Private automation hub
$ sudo -i
# cat /etc/pulp/certs/root.crt
(copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and 'END CERTIFICATE')
On the Automation controller
$ sudo -i
# vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt
(paste the contents of the root.crt file from the private automation hub into the new file and write it to disk)
# update-ca-trust
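Copy-and-paste errors (a truncated BEGIN/END line, a stray character) are easy to miss, so it is worth confirming the copied certificate matches the original by comparing SHA-256 fingerprints. A runnable sketch using a freshly generated demo certificate; in production the source is /etc/pulp/certs/root.crt on the hub and the destination is the automationhub-root.crt file on the controller:

```shell
# Generate a throwaway self-signed cert to stand in for the hub root cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo-hub-root" -days 1 -out hub-root-demo.crt 2>/dev/null

# Stand-in for the scp/copy-paste step.
cp hub-root-demo.crt controller-copy.crt

# Compare SHA-256 fingerprints of source and destination.
fp_src=$(openssl x509 -noout -fingerprint -sha256 -in hub-root-demo.crt)
fp_dst=$(openssl x509 -noout -fingerprint -sha256 -in controller-copy.crt)
[ "$fp_src" = "$fp_dst" ] && echo "certificate copied intact"
```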
4.8. Importing Collections into Private Automation Hub
You can download collection tarball files from sources such as Red Hat Automation Hub.
4.8.1. Downloading collection from Red Hat Automation Hub
This section gives instructions on how to download a collection from Red Hat Automation Hub. If the collection has dependencies, they will also need to be downloaded and installed.
- Navigate to https://console.redhat.com/ansible/automation-hub/ and login with your Red Hat credentials.
- Click on the collection you wish to download.
- Click Download tarball.
- To verify if a collection has dependencies, click the Dependencies tab.
- Download any dependencies needed for this collection.
4.9. Creating Collection Namespace
The namespace of the collection must exist for the import to be successful. You can find the namespace name by looking at the first part of the collection tarball filename. For example, the namespace of the collection ansible-netcommon-3.0.0.tar.gz is ansible.
- Log in to the private automation hub web console.
- Navigate to Collections → Namespaces.
- Click Create.
- Provide the namespace name.
- Click Create.
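The filename convention described above (namespace-name-version.tar.gz) can be parsed directly in the shell, which is handy when scripting namespace creation for a batch of tarballs:

```shell
# Split a collection tarball filename into namespace and collection name.
tarball=ansible-netcommon-3.0.0.tar.gz   # example from the text

namespace=${tarball%%-*}        # everything before the first hyphen
name=${tarball#*-}              # drop the namespace...
name=${name%%-*}                # ...then keep up to the next hyphen

echo "$namespace.$name"         # → ansible.netcommon
```

This first-hyphen rule matches the convention the text describes; collections whose namespace or name itself contains a hyphen would need a different split.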
4.9.1. Importing the collection tarball with GUI
- Log in to the private automation hub web console.
- Navigate to Collections → Namespaces.
- Click View collections for the namespace you are importing the collection into.
- Click Upload collection.
- Click the folder icon and select the tarball of the collection.
- Click Upload.
This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported.
4.9.2. Importing the collection tarball using ansible-galaxy via CLI
You can import collections into the private automation hub by using the command-line interface rather than the GUI.
- Copy the collection tarballs to the private automation hub.
- Log in to the private automation hub server via SSH.
Add the self-signed root CA cert to the trust store on the automation hub.
# cp /etc/pulp/certs/root.crt \
    /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust
Update the /etc/ansible/ansible.cfg file with your hub configuration. Use either a token or a username and password for authentication.
[galaxy]
server_list = private_hub

[galaxy_server.private_hub]
url=https://<hub_fqdn>/api/galaxy/
token=<token_from_private_hub>
- Import the collection using the ansible-galaxy command.
$ ansible-galaxy collection publish <collection_tarball>
Create the namespace that the collection belongs to in advance, or publishing the collection will fail.
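When importing many collections, the publish command can be run in a loop over a directory of tarballs. A sketch; the tarballs-demo directory and its files are illustrative stand-ins, and the echo makes this a dry run that only prints the commands (remove it to actually publish):

```shell
# Print (dry run) a publish command for every tarball in a directory.
mkdir -p tarballs-demo
touch tarballs-demo/ansible-netcommon-3.0.0.tar.gz \
      tarballs-demo/ansible-utils-2.6.1.tar.gz      # stand-in tarballs

for tb in tarballs-demo/*.tar.gz; do
  # echo keeps this a dry run; drop it to publish for real
  echo ansible-galaxy collection publish "$tb"
done > publish-commands.txt

cat publish-commands.txt
```

Remember that dependencies must be published before the collections that need them, so order the tarballs accordingly.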
4.10. Approving the Imported Collection
After you have imported collections with either the GUI or the CLI method, you must approve them by using the GUI. After they are approved, they are available for use.
- Log in to the private automation hub web console.
- Go to Collections → Approval.
- Click Approve for the collection you wish to approve.
- The collection is now available for use in your private automation hub.
The collection is added to the "Published" repository regardless of its source.
- Import any dependency for the collection using these same steps.
Recommended collections depend on your use case. Ansible and Red Hat provide these collections.
4.10.1. Custom Execution Environments
Use the ansible-builder program to create custom execution environment images. For disconnected environments, custom EE images can be built in the following ways:
- Build an EE image on an internet-facing system and import it to the disconnected environment
- Build an EE image entirely on the disconnected environment with some modifications to the normal process of using ansible-builder
- Create a minimal base container image that includes all of the necessary modifications for a disconnected environment, then build custom EE images from the base container image
4.10.2. Transferring Custom EE Images Across a Disconnected Boundary
A custom execution environment image can be built on an internet-facing machine using the existing documentation. Once the execution environment has been created, it is available in the local podman image cache and can be transferred across the disconnected boundary.
- Save the image:
$ podman image save localhost/custom-ee:latest | gzip -c > custom-ee-latest.tar.gz
Transfer the file across the disconnected boundary by using an existing mechanism such as sneakernet or a one-way diode. After the image is available on the disconnected side, import it into the local podman cache, tag it, and push it to the disconnected hub:
$ podman image load -i custom-ee-latest.tar.gz
$ podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest
$ podman login <hub_fqdn> --tls-verify=false
$ podman push <hub_fqdn>/custom-ee:latest
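Before running podman image load on the disconnected side, it is worth confirming the transferred archive is intact, since a partial copy fails in confusing ways. A runnable sketch; the demo file stands in for the saved image archive, and custom-ee-latest.tar.gz matches the example name above:

```shell
# Stand-in for the saved image archive produced by podman image save.
echo "fake image data" > custom-ee.tar
gzip -c custom-ee.tar > custom-ee-latest.tar.gz

# Low side: record the archive checksum alongside the file.
sha256sum custom-ee-latest.tar.gz > custom-ee-latest.tar.gz.sha256

# High side, after the transfer: check gzip integrity and the checksum.
gzip -t custom-ee-latest.tar.gz
sha256sum --check --quiet custom-ee-latest.tar.gz.sha256 && echo "archive intact"
```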
4.11. Building an Execution Environment in a Disconnected Environment
When building a custom execution environment, the ansible-builder tool defaults to downloading the following requirements from the internet:
- Ansible Galaxy (galaxy.ansible.com) or Automation Hub (cloud.redhat.com) for any collections added to the EE image.
- PyPI (pypi.org) for any python packages required as collection dependencies.
- The UBI repositories (cdn.redhat.com) for updating any UBI-based EE images.
- The RHEL repositories might also be needed to meet certain collection requirements.
- registry.redhat.io for access to the ansible-builder-rhel8 container image.
Building an EE image in a disconnected environment requires some or all of these resources to be mirrored, or otherwise made available, on the disconnected network. See Importing Collections into Private Automation Hub for information on importing collections from Galaxy or Automation Hub into a private automation hub.
Mirrored PyPI content, once transferred into the high-side network, can be made available by using a web server or an artifact repository such as Nexus.
The UBI repositories can be mirrored on the low-side using a tool like reposync, imported into the disconnected environment, and made available from Satellite or a simple web server (since the content is freely redistributable).
The ansible-builder-rhel8 container image can be imported into a private automation hub in the same way a custom EE is imported. See Transferring Custom EE Images Across a Disconnected Boundary for details, substituting registry.redhat.io/ansible-automation-platform-21/ansible-builder-rhel8 for the custom EE image. This makes the ansible-builder-rhel8 image available in the private automation hub registry along with the default EE images.
Once all of the prerequisites are available on the high-side network, you can use ansible-builder and podman to create a custom execution environment image.
4.12. Installing the ansible-builder RPM
On a RHEL system, install the ansible-builder RPM. This can be done in one of several ways:
- Subscribe the RHEL box to a Satellite on the disconnected network.
- Attach the Ansible Automation Platform subscription and enable the AAP repo.
- Install the ansible-builder RPM.
Note
This method is preferred if a Satellite exists, because the EE images can leverage RHEL content from the Satellite if the underlying build host is registered.
- Unarchive the AAP setup bundle.
Install the ansible-builder RPM and its dependencies from the included content:
$ tar -xzvf ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.3-1.2/bundle/el8/repos/
$ sudo yum install ansible-builder-1.2.0-1.el9ap.noarch.rpm \
    python38-requirements-parser-0.2.0-4.el9ap.noarch.rpm
Create a directory for your custom EE build artifacts.
$ mkdir custom-ee
$ cd custom-ee/
Create an execution-environment.yml file that defines the requirements for your custom EE, following the documentation at https://ansible-builder.readthedocs.io/en/stable/definition/. Override the EE_BASE_IMAGE and EE_BUILDER_IMAGE variables to point to the EEs available in your private automation hub.
$ cat execution-environment.yml
---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: '<hub_fqdn>/ee-supported-rhel8:latest'
  EE_BUILDER_IMAGE: '<hub_fqdn>/ansible-builder-rhel8:latest'
dependencies:
  python: requirements.txt
  galaxy: requirements.yml
Create an ansible.cfg file that points to your private automation hub and contains credentials that allow uploading, such as an admin user token.
$ cat ansible.cfg
[galaxy]
server_list = private_hub

[galaxy_server.private_hub]
url=https://<hub_fqdn>/api/galaxy/
token=<admin_token>
Create a ubi.repo file that points to your disconnected UBI repo mirror (this could be your Satellite if the UBI content is hosted there).
This is an example output where reposync was used to mirror the UBI repos.
$ cat ubi.repo
[ubi-8-baseos]
name = Red Hat Universal Base Image 8 (RPMs) - BaseOS
baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck = 1

[ubi-8-appstream]
name = Red Hat Universal Base Image 8 (RPMs) - AppStream
baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck = 1
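The pip.conf file that is copied into the build context later in this procedure is not shown in this chapter. A minimal sketch of what it could contain, assuming a Nexus-style PyPI mirror; <pypi_mirror_fqdn> and the repository path are illustrative placeholders for your own mirror:

```shell
# Write a pip.conf pointing at a disconnected PyPI mirror (illustrative host).
cat > pip.conf <<'EOF'
[global]
index-url = https://<pypi_mirror_fqdn>/repository/pypi/simple
trusted-host = <pypi_mirror_fqdn>
EOF
```

Adjust index-url to however your mirror serves the PyPI simple index (a plain web server hosting a mirrored simple/ tree works as well).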
Add the CA certificate used to sign the private automation hub web server certificate.
- For self-signed certificates (the installer default), make a copy of the file /etc/pulp/certs/root.crt from your private automation hub and name it hub-root.crt.
- If an internal certificate authority was used to request and sign the private automation hub web server certificate, make a copy of that CA certificate called hub-root.crt.
- Create your python requirements.txt and ansible collection requirements.yml with the content needed for your custom EE image. Note that any collections you require should already be uploaded into your private automation hub.
Use ansible-builder to create the context directory used to build the EE image.
$ ansible-builder create
Complete! The build context can be found at: /home/cloud-user/custom-ee/context
$ ls -1F
ansible.cfg
context/
execution-environment.yml
hub-root.crt
pip.conf
requirements.txt
requirements.yml
ubi.repo
Copy the files used to override the internet-facing defaults into the context directory.
$ cp ansible.cfg hub-root.crt pip.conf ubi.repo context/
Edit the file context/Containerfile and add the following modifications:
- In the first EE_BASE_IMAGE build section, add the ansible.cfg and hub-root.crt files and run the update-ca-trust command.
- In the EE_BUILDER_IMAGE build section, add the ubi.repo and pip.conf files.
- In the final EE_BASE_IMAGE build section, add the ubi.repo and pip.conf files.
$ cat context/Containerfile
ARG EE_BASE_IMAGE=<hub_fqdn>/ee-supported-rhel8:latest
ARG EE_BUILDER_IMAGE=<hub_fqdn>/ansible-builder-rhel8:latest

FROM $EE_BASE_IMAGE as galaxy
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS=
USER root

ADD _build /build
WORKDIR /build

# this section added
ADD ansible.cfg /etc/ansible/ansible.cfg
ADD hub-root.crt /etc/pki/ca-trust/source/anchors/hub-root.crt
RUN update-ca-trust
# end additions

RUN ansible-galaxy role install -r requirements.yml \
    --roles-path /usr/share/ansible/roles
RUN ansible-galaxy collection install \
    $ANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml \
    --collections-path /usr/share/ansible/collections

FROM $EE_BUILDER_IMAGE as builder

COPY --from=galaxy /usr/share/ansible /usr/share/ansible

ADD _build/requirements.txt requirements.txt
RUN ansible-builder introspect --sanitize \
    --user-pip=requirements.txt \
    --write-bindep=/tmp/src/bindep.txt \
    --write-pip=/tmp/src/requirements.txt

# this section added
ADD ubi.repo /etc/yum.repos.d/ubi.repo
ADD pip.conf /etc/pip.conf
# end additions

RUN assemble

FROM $EE_BASE_IMAGE
USER root

COPY --from=galaxy /usr/share/ansible /usr/share/ansible

# this section added
ADD ubi.repo /etc/yum.repos.d/ubi.repo
ADD pip.conf /etc/pip.conf
# end additions

COPY --from=builder /output/ /output/

RUN /output/install-from-bindep && rm -rf /output/wheels
Create the EE image in the local podman cache using the podman build command:
$ podman build -f context/Containerfile \
    -t <hub_fqdn>/custom-ee:latest
Once the custom EE image builds successfully, push it to the private automation hub:
$ podman push <hub_fqdn>/custom-ee:latest
4.12.1. Workflow for upgrading between minor AAP releases
To upgrade between minor releases of AAP 2, use this general workflow.
- Download and unarchive the latest AAP 2 setup bundle.
- Take a backup of the existing installation.
- Copy the existing installation inventory file into the new setup bundle directory.
- Run ./setup.sh to upgrade the installation.
For example, to upgrade from version 2.2.0-7 to 2.3-1.2, make sure that both setup bundles are on the initial controller node where the installation occurred:
$ ls -1F
ansible-automation-platform-setup-bundle-2.2.0-7/
ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz
ansible-automation-platform-setup-bundle-2.3-1.2/
ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz
Back up the 2.2.0-7 installation:
$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ sudo ./setup.sh -b
$ cd ..
Copy the 2.2.0-7 inventory file into the 2.3-1.2 bundle directory:
$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/
$ cd ..
Upgrade from 2.2.0-7 to 2.3-1.2 with the setup.sh script:
$ cd ansible-automation-platform-setup-bundle-2.3-1.2
$ sudo ./setup.sh
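The copy step in this workflow fails silently if the old inventory is missing, so a small guard is worthwhile when scripting it. A sketch of the inventory hand-off with that check; the directories are created here and setup.sh is left as a comment so the sketch stays runnable, whereas in production you would run the real script:

```shell
# Guarded inventory hand-off between two setup bundle directories.
OLD=ansible-automation-platform-setup-bundle-2.2.0-7
NEW=ansible-automation-platform-setup-bundle-2.3-1.2
mkdir -p "$OLD" "$NEW"                           # demo scaffolding
echo "[automationcontroller]" > "$OLD/inventory" # stand-in inventory

# Refuse to continue if the old inventory is not where we expect it.
[ -f "$OLD/inventory" ] || { echo "no inventory found in $OLD" >&2; exit 1; }
cp "$OLD/inventory" "$NEW/inventory"

# In production:
#   (cd "$OLD" && sudo ./setup.sh -b)   # backup
#   (cd "$NEW" && sudo ./setup.sh)      # upgrade
```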