Chapter 4. Disconnected installation

If you are not connected to the internet or do not have access to online repositories, you can install Red Hat Ansible Automation Platform without an active internet connection.

4.1. Prerequisites

Before installing Ansible Automation Platform on a disconnected network, you must meet the following prerequisites:

  1. You have created a subscription manifest. For more information, see Obtaining a manifest file.
  2. You have downloaded the Ansible Automation Platform setup bundle from the Customer Portal.
  3. You have created DNS records for the automation controller and private automation hub servers.

4.2. Ansible Automation Platform installation on disconnected RHEL

You can install Ansible Automation Platform automation controller and private automation hub without an internet connection by using the installer-managed database located on the automation controller. Use the setup bundle for a disconnected installation, as it includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform RPM packages and the default execution environment (EE) images.

4.2.1. System requirements for disconnected installation

Ensure that your system has all the hardware requirements before performing a disconnected installation of Ansible Automation Platform. For more information about hardware requirements, see Chapter 2. System requirements.

4.2.2. RPM Source

RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must first obtain access to BaseOS and AppStream repositories. Use Satellite to sync repositories and add dependencies. If you prefer an alternative tool, you can choose between the following options:

  • Reposync
  • The RHEL Binary DVD
Note

The RHEL Binary DVD method requires the DVD for a supported version of RHEL (8.6 or later). See Red Hat Enterprise Linux Life Cycle for information about which versions of RHEL are currently supported.

4.3. Synchronizing RPM repositories using reposync

To perform a reposync, you need a RHEL host that has access to the internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server.

Procedure

  1. Enable the required BaseOS and AppStream repositories:

    # subscription-manager repos \
        --enable rhel-8-for-x86_64-baseos-rpms \
        --enable rhel-8-for-x86_64-appstream-rpms
  2. Perform the reposync:

    # dnf install yum-utils
    # reposync -m --download-metadata --gpgcheck \
        -p /path/to/download
    Use reposync with --download-metadata and without --newest-only. See RHEL 8 Reposync.

      • Without --newest-only, the downloaded repositories total approximately 90 GB.
      • With --newest-only, the downloaded repositories total approximately 14 GB.
  3. If you plan to use Red Hat Single Sign-On, sync these repositories:

    1. jb-eap-7.3-for-rhel-8-x86_64-rpms
    2. rh-sso-7.4-for-rhel-8-x86_64-rpms

    After the reposync is completed, your repositories are ready to use with a web server.

  4. Move the repositories to your disconnected network.
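The synced content can be bundled into a single archive for the move. The following is a minimal sketch using tar; the directory and archive names are illustrative stand-ins, so substitute your actual reposync download path:

```shell
# bundle the synced repositories into a single archive for transfer
# (the paths below are illustrative; substitute your reposync download path)
mkdir -p /tmp/synced-repos/rhel-8-for-x86_64-baseos-rpms
tar -czf /tmp/synced-repos.tar.gz -C /tmp/synced-repos .

# list the archive contents to confirm the repositories are included
tar -tzf /tmp/synced-repos.tar.gz
```

On the disconnected side, unpack the archive under the web server document root, for example with tar -xzf synced-repos.tar.gz -C /path/to/repos.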

4.4. Creating a new web server to host repositories

If you do not have an existing web server to host your repositories, you can create one with your synced repositories.

Procedure

  1. Install prerequisites:

    $ sudo dnf install httpd
  2. Configure httpd to serve the repo directory:

    /etc/httpd/conf.d/repository.conf
    
    DocumentRoot '/path/to/repos'
    
    <LocationMatch "^/+$">
        Options -Indexes
        ErrorDocument 403 /.noindex.html
    </LocationMatch>
    
    <Directory '/path/to/repos'>
        Options All Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
  3. Ensure that the directory is readable by the apache user:

    $ sudo chown -R apache /path/to/repos
  4. Configure SELinux:

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
    $ sudo restorecon -ir /path/to/repos
  5. Enable httpd:

    $ sudo systemctl enable --now httpd.service
  6. Open the firewall for the HTTP and HTTPS services:

    $ sudo firewall-cmd --zone=public --add-service=http --add-service=https --permanent
    $ sudo firewall-cmd --reload
  7. On automation controller and automation hub, add a repo file at /etc/yum.repos.d/local.repo, and add the optional repos if needed:

    [Local-BaseOS]
    name=Local BaseOS
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    
    [Local-AppStream]
    name=Local AppStream
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

4.5. Accessing RPM repositories from a locally mounted DVD

If you plan to access the repositories from the RHEL binary DVD, you must first set up a local repository.

Procedure

  1. Mount DVD or ISO:

    1. DVD

      # mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
    2. ISO

      # mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
  2. Create a yum repo file at /etc/yum.repos.d/dvd.repo:

    [dvd-BaseOS]
    name=DVD for RHEL - BaseOS
    baseurl=file:///media/rheldvd/BaseOS
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    
    [dvd-AppStream]
    name=DVD for RHEL - AppStream
    baseurl=file:///media/rheldvd/AppStream
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  3. Import the gpg key:

    # rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release
Note

If the key is not imported, you will see an error similar to the following:

# Curl error (6): Couldn't resolve host name for
https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host:
www.redhat.com]

Additional Resources

For further detail on setting up a repository, see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8.

4.6. Adding a subscription manifest to Ansible Automation Platform without an internet connection

To add a subscription to Ansible Automation Platform without an internet connection, create and import a subscription manifest.

Procedure

  1. Log in to access.redhat.com.
  2. From the navigation panel, select Subscriptions.
  3. Select Subscription Allocations.
  4. Click Create New subscription allocation.
  5. Name the new subscription allocation.
  6. Select Satellite 6.8 as the type.
  7. Click Create. The Details tab opens for your subscription allocation.
  8. Select the Subscriptions tab.
  9. Click Add Subscription.
  10. Find your Ansible Automation Platform subscription, and in the Entitlements box, add the number of entitlements you want to assign to your environment. A single entitlement is needed for each node that will be managed by Ansible Automation Platform: server, network device, etc.
  11. Click Submit.
  12. Click Export Manifest.

This downloads a file manifest_<allocation name>_<date>.zip that can be imported into automation controller after installation.

4.7. Downloading and installing the Ansible Automation Platform setup bundle

Choose the setup bundle to download Ansible Automation Platform for disconnected installations. This bundle includes the RPM content for Ansible Automation Platform and the default execution environment images that will be uploaded to your private automation hub during the installation process.

Procedure

  1. Download the Ansible Automation Platform setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and clicking Download Now for the Ansible Automation Platform 2.4 Setup Bundle.
  2. From automation controller, untar the bundle:

    $ tar xvf \
       ansible-automation-platform-setup-bundle-2.4-1.tar.gz
    $ cd ansible-automation-platform-setup-bundle-2.4-1
  3. Edit the inventory file to include the required options:

    1. automationcontroller group
    2. automationhub group
    3. admin_password
    4. pg_password
    5. automationhub_admin_password
    6. automationhub_pg_host, automationhub_pg_port
    7. automationhub_pg_password

      Example Inventory file

      [automationcontroller]
      automationcontroller.example.org ansible_connection=local
      
      [automationcontroller:vars]
      peers=execution_nodes
      
      [automationhub]
      automationhub.example.org
      
      [all:vars]
      admin_password='password123'
      
      pg_database='awx'
      pg_username='awx'
      pg_password='dbpassword123'
      
      receptor_listener_port=27199
      
      automationhub_admin_password='hubpassword123'
      
      automationhub_pg_host='automationcontroller.example.org'
      automationhub_pg_port=5432
      
      automationhub_pg_database='automationhub'
      automationhub_pg_username='automationhub'
      automationhub_pg_password='dbpassword123'
      automationhub_pg_sslmode='prefer'
  4. Run the Ansible Automation Platform setup bundle executable as the root user:

    $ sudo -i
    # cd /path/to/ansible-automation-platform-setup-bundle-2.4-1
    # ./setup.sh
  5. When installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the automation controller node that was specified in the installation inventory file.
  6. Log in using the administrator credentials specified in the installation inventory file.
Note

The inventory file must be kept intact after installation because it is used for backup, restore, and upgrade functions. Keep a backup copy in a secure location, given that the inventory file contains passwords.

4.8. Completing post installation tasks

After you have completed the installation of Ansible Automation Platform, ensure that automation hub and automation controller deploy properly.

4.8.1. Adding a controller subscription

Procedure

  1. Navigate to the FQDN of the Automation controller. Log in with the username admin and the password you specified as admin_password in your inventory file.
  2. Click Browse and select the manifest.zip you created earlier.
  3. Click Next.
  4. Uncheck User analytics and Automation analytics. These rely on an internet connection and must be turned off.
  5. Click Next.
  6. Read the End User License Agreement and click Submit if you agree.

4.8.2. Updating the CA trust store

As part of your post-installation tasks, you must update the software’s certificates. By default, Ansible Automation Platform automation hub and automation controller are installed using self-signed certificates. Because of this, the controller does not trust the hub’s certificate and will not download the execution environment from the hub.

To ensure that automation controller downloads the execution environment from automation hub, you must import the hub’s Certificate Authority (CA) certificate as a trusted certificate on the controller. You can do this in one of two ways, depending on whether SSH is available as root user between automation controller and private automation hub.

4.8.2.1. Using secure copy (SCP) as a root user

If SSH is available as the root user between the controller and private automation hub, use SCP to copy the root certificate on the private automation hub to the controller.

Procedure

  1. Copy the root certificate from the private automation hub to the controller, then run update-ca-trust to update the CA trust store:
$ sudo -i
# scp <hub_fqdn>:/etc/pulp/certs/root.crt \
    /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust

4.8.2.2. Copying and pasting as a non root user

If SSH is unavailable as root between the private automation hub and the controller, copy the contents of the file /etc/pulp/certs/root.crt on the private automation hub and paste it into a new file on the controller called /etc/pki/ca-trust/source/anchors/automationhub-root.crt.

Procedure

  1. On the private automation hub, display the certificate and copy its contents, including the 'BEGIN CERTIFICATE' and 'END CERTIFICATE' lines:
$ sudo -i
# cat /etc/pulp/certs/root.crt
  2. On the automation controller, paste the contents into the new file, write it to disk, and run update-ca-trust to update the CA trust store:
$ sudo -i
# vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust
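To confirm that the pasted file parses as a valid PEM certificate, you can inspect it with openssl. The following sketch generates a throwaway self-signed certificate purely to demonstrate the check; in practice you would point openssl x509 at the pasted automationhub-root.crt:

```shell
# generate a throwaway self-signed certificate to stand in for the hub CA
# (illustrative only; in practice, inspect the pasted automationhub-root.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=private-hub.example.com" \
    -keyout /tmp/demo.key -out /tmp/demo.crt

# a valid PEM certificate prints its subject; a garbled paste errors out here
openssl x509 -in /tmp/demo.crt -noout -subject
```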

4.9. Importing collections into private automation hub

You can download a collection as a tarball file from Ansible automation hub for use in your private automation hub. Certified collections are available on the automation hub Hybrid Cloud Console, and community collections are on Ansible Galaxy. You must also download and install any dependencies needed for the collection.

Procedure

  1. Navigate to console.redhat.com and log in with your Red Hat credentials.
  2. Click on the collection you want to download.
  3. Click Download tarball.
  4. To verify if a collection has dependencies, click the Dependencies tab.
  5. Download any dependencies needed for this collection.

4.10. Creating a collection namespace

Before importing a collection, you must first create a namespace for the collection in your private automation hub. You can find the namespace name by looking at the first part of the collection tarball filename. For example, the namespace of the collection ansible-netcommon-3.0.0.tar.gz is ansible.
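Because collection tarballs follow the <namespace>-<collection>-<version>.tar.gz convention, the namespace can be read straight off the filename, for example with shell parameter expansion (using the filename from the text above):

```shell
# collection tarballs are named <namespace>-<collection>-<version>.tar.gz
f="ansible-netcommon-3.0.0.tar.gz"

namespace="${f%%-*}"                 # text before the first '-'  -> ansible
rest="${f#*-}"                       # strip the '<namespace>-' prefix
collection="${rest%%-*}"             # text before the next '-'   -> netcommon

echo "${namespace}.${collection}"    # -> ansible.netcommon
```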

Procedure

  1. Log in to the automation hub Hybrid Cloud Console.
  2. From the navigation panel, select Collections > Namespaces.
  3. Click Create.
  4. Provide the namespace name.
  5. Click Create.

4.10.1. Importing the collection tarball by using the web console

Once the namespace has been created, you can import the collection by using the web console.

Procedure

  1. Log in to automation hub Hybrid Cloud Console.
  2. From the navigation panel, select Collections > Namespaces.
  3. Click View collections next to the namespace you will be importing the collection into.
  4. Click Upload collection.
  5. Click the folder icon and select the tarball of the collection.
  6. Click Upload.

This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported.

4.10.2. Importing the collection tarball by using the CLI

You can import collections into your private automation hub by using the command-line interface rather than the GUI.

Procedure

  1. Copy the collection tarballs to the private automation hub.
  2. Log in to the private automation hub server via SSH.
  3. Add the self-signed root CA cert to the trust store on automation hub.

    # cp /etc/pulp/certs/root.crt \
        /etc/pki/ca-trust/source/anchors/automationhub-root.crt
    # update-ca-trust
  4. Update the /etc/ansible/ansible.cfg file with your automation hub configuration. Use either a token or a username and password for authentication.

    [galaxy]
    server_list = private_hub
    
    [galaxy_server.private_hub]
    url=https://<hub_fqdn>/api/galaxy/
    token=<token_from_private_hub>
  5. Import the collection using the ansible-galaxy command.
$ ansible-galaxy collection publish <collection_tarball>

4.11. Approving the imported collections

After you have imported collections by using either the GUI or the CLI method, you must approve them by using the GUI. After they are approved, they are available for use.

Procedure

  1. Log in to automation hub Hybrid Cloud Console.
  2. From the navigation panel, select Collections > Approval.
  3. Click Approve for the collection you want to approve. The collection is now available for use in your private automation hub.
  4. Approve any dependencies for the collection by repeating steps 2 and 3.
Note

The collection is added to the "Published" repository regardless of its source.

Recommended collections depend on your use case. Ansible and Red Hat provide these collections.

4.11.1. Custom automation execution environments

Use the ansible-builder program to create custom execution environment images. For disconnected environments, custom execution environment images can be built in the following ways:

  • Build an execution environment image on an internet-facing system and import it to the disconnected environment.
  • Build an execution environment image entirely on the disconnected environment with some modifications to the normal process of using ansible-builder.
  • Create a minimal base container image that includes all of the necessary modifications for a disconnected environment, then build custom execution environment images from the base container image.

4.11.1.1. Transferring custom execution environment images across a disconnected boundary

You can build a custom execution environment image on an internet-facing machine. After you create an execution environment, it is available in the local podman image cache. You can then transfer the custom execution environment image across a disconnected boundary.

Procedure

  1. Save the image:

    $ podman image save localhost/custom-ee:latest | gzip -c > custom-ee-latest.tar.gz

    Transfer the file across the disconnected boundary by using an existing mechanism such as sneakernet or a one-way data diode.

  2. After the image is available on the disconnected side, import it into the local podman cache, tag it, and push it to the disconnected hub:
$ podman image load -i custom-ee-latest.tar.gz
$ podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest
$ podman login <hub_fqdn> --tls-verify=false
$ podman push <hub_fqdn>/custom-ee:latest
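Because the archive crosses the boundary on removable media, it is worth verifying its integrity on arrival. The following is a minimal sha256sum sketch; the file created below is only a stand-in for the real image archive:

```shell
# create a stand-in for the saved image archive (illustrative only)
echo "demo image payload" > /tmp/custom-ee-latest.tar.gz

# on the connected side: record the checksum alongside the archive
sha256sum /tmp/custom-ee-latest.tar.gz > /tmp/custom-ee-latest.tar.gz.sha256

# on the disconnected side: verify the file arrived intact
sha256sum -c /tmp/custom-ee-latest.tar.gz.sha256
```

Transfer the .sha256 file along with the archive so the check can be run on the disconnected side before loading the image.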

4.12. Building an execution environment in a disconnected environment

Creating execution environments for Ansible Automation Platform is a common task that works differently in disconnected environments. When building a custom execution environment, the ansible-builder tool defaults to downloading content from the following locations on the internet:

  • Red Hat Automation hub (console.redhat.com) or Ansible Galaxy (galaxy.ansible.com) for any Ansible content collections added to the execution environment image.
  • PyPI (pypi.org) for any Python packages required as collection dependencies.
  • RPM repositories such as the RHEL or UBI repositories (cdn.redhat.com) for adding or updating RPMs to the execution environment image, if needed.
  • registry.redhat.io for access to the base container images.

Building an execution environment image in a disconnected environment requires mirroring content from these locations. See Importing collections into private automation hub for information on importing collections from Ansible Galaxy or automation hub into a private automation hub.

Mirrored PyPI content, once transferred into the disconnected network, can be made available using a web server or an artifact repository such as Nexus. The RHEL and UBI repository content can be exported from an internet-facing Red Hat Satellite server, copied into the disconnected environment, then imported into a disconnected Satellite so it is available for building custom execution environments. See ISS Export Sync in an Air-Gapped Scenario for details.

The default base container image, ee-minimal-rhel8, is used to create custom execution environment images and is included with the bundled installer. This image is added to the private automation hub at install time. If a different base container image such as ee-minimal-rhel9 is required, it must be imported to the disconnected network and added to the private automation hub container registry.

Once all of the prerequisites are available on the disconnected network, the ansible-builder command can be used to create custom execution environment images.

4.12.1. Installing the Ansible Builder RPM

On the RHEL system where custom execution environments will be built, install the Ansible Builder RPM by using one of the following methods. Using a Satellite Server that already exists in the environment is preferred because the execution environment images can then use any RHEL content from the pre-existing Satellite if required.

Procedure

  1. Install the Ansible Builder RPM from the Ansible Automation Platform repository.

    1. Subscribe the RHEL system to a Satellite on the disconnected network.
    2. Attach the Ansible Automation Platform subscription and enable the AAP repository. The repository name will either be ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms or ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms depending on the version of RHEL used on the underlying system.
    3. Install the Ansible Builder RPM. The version of the Ansible Builder RPM must be 3.0.0 or later in order for the examples below to work properly.
  2. Install the Ansible Builder RPM from the Ansible Automation Platform setup bundle. Use this method if a Satellite server is not available on your disconnected network.

    1. Unarchive the Ansible Automation Platform setup bundle.
    2. Install the Ansible Builder RPM and its dependencies from the included content.
$ tar -xzvf ansible-automation-platform-setup-bundle-2.4-3-x86_64.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.4-3-x86_64/bundle/packages/el8/repos/
$ sudo dnf install ansible-builder-3.0.0-2.el8ap.noarch.rpm \
    python39-requirements-parser-0.2.0-4.el8ap.noarch.rpm \
    python39-bindep-2.10.2-3.el8ap.noarch.rpm \
    python39-jsonschema-4.16.0-1.el8ap.noarch.rpm \
    python39-pbr-5.8.1-2.el8ap.noarch.rpm \
    python39-distro-1.6.0-3.el8pc.noarch.rpm \
    python39-packaging-21.3-2.el8ap.noarch.rpm \
    python39-parsley-1.3-2.el8pc.noarch.rpm \
    python39-attrs-21.4.0-2.el8pc.noarch.rpm \
    python39-pyrsistent-0.18.1-2.el8ap.x86_64.rpm \
    python39-pyparsing-3.0.9-1.el8ap.noarch.rpm
Note

The specific versions may be slightly different depending on the version of the setup bundle being used.

4.12.2. Creating the custom execution environment definition

Once the Ansible Builder RPM is installed, use the following steps to create your custom execution environment.

  1. Create a directory for the build artifacts used when creating your custom execution environment. Any new files created with the steps below will be created under this directory.

    $ mkdir $HOME/custom-ee $HOME/custom-ee/files
    $ cd $HOME/custom-ee/
  2. Create an execution-environment.yml file that defines the requirements for your custom execution environment.

    Note

    Version 3 of the execution environment definition format is required, so ensure the execution-environment.yml file contains version: 3 explicitly before continuing.

    1. Override the base image to point to the minimal execution environment available in your private automation hub.
    2. Define the additional build files needed to point to any disconnected content sources that will be used in the build process. Your custom execution-environment.yml file should look similar to the following example:
    $ cat execution-environment.yml
    ---
    version: 3
    
    images:
      base_image:
        name: private-hub.example.com/ee-minimal-rhel8:latest
    
    dependencies:
      python: requirements.txt
      galaxy: requirements.yml
    
    additional_build_files:
      - src: files/ansible.cfg
        dest: configs
      - src: files/pip.conf
        dest: configs
      - src: files/hub-ca.crt
        dest: configs
      # uncomment if custom RPM repositories are required
      #- src: files/custom.repo
      #  dest: configs
    
    additional_build_steps:
      prepend_base:
        # copy a custom pip.conf to override the location of the PyPI content
        - ADD _build/configs/pip.conf /etc/pip.conf
        # remove the default UBI repository definition
        - RUN rm -f /etc/yum.repos.d/ubi.repo
        # copy the hub CA certificate and update the trust store
        - ADD _build/configs/hub-ca.crt /etc/pki/ca-trust/source/anchors
        - RUN update-ca-trust
        # if needed, uncomment to add a custom RPM repository configuration
        #- ADD _build/configs/custom.repo /etc/yum.repos.d/custom.repo
    
      prepend_galaxy:
        - ADD _build/configs/ansible.cfg ~/.ansible.cfg
    
    ...
  3. Create an ansible.cfg file under the files/ subdirectory that points to your private automation hub.

    $ cat files/ansible.cfg
    [galaxy]
    server_list = private_hub
    
    [galaxy_server.private_hub]
    url = https://private-hub.example.com/api/galaxy/
  4. Create a pip.conf file under the files/ subdirectory which points to the internal PyPI mirror (a web server or something like Nexus):

    $ cat files/pip.conf
    [global]
    index-url = https://<pypi_mirror_fqdn>/
    trusted-host = <pypi_mirror_fqdn>
  5. Optional: If a bindep.txt file is being used to add RPMs to the custom execution environment, create a custom.repo file under the files/ subdirectory that points to your disconnected Satellite or other location hosting the RPM repositories. If this step is necessary, uncomment the steps in the example execution-environment.yml file that correspond with the custom.repo file.

    The following example is for the UBI repos. Other local repos can be added to this file as well. The URL path may need to change depending on where the mirror content is located on the web server.

    $ cat files/custom.repo
    [ubi-8-baseos]
    name = Red Hat Universal Base Image 8 (RPMs) - BaseOS
    baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos
    enabled = 1
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    gpgcheck = 1
    
    [ubi-8-appstream]
    name = Red Hat Universal Base Image 8 (RPMs) - AppStream
    baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream
    enabled = 1
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    gpgcheck = 1
  6. Add the CA certificate used to sign the private automation hub web server certificate. If the private automation hub uses self-signed certificates provided by the installer:

    1. Copy the file /etc/pulp/certs/pulp_webserver.crt from your private automation hub and name it hub-ca.crt.
    2. Add the hub-ca.crt file to the files/ subdirectory.
  7. If the private automation hub uses user-provided certificates signed by a certificate authority:

    1. Make a copy of that CA certificate and name it hub-ca.crt.
    2. Add the hub-ca.crt file to the files/ subdirectory.
  8. Once the preceding steps have been completed, create your Python requirements.txt and Ansible collection requirements.yml files with the content needed for your custom execution environment image.

    Note

    Any required collections must already be uploaded into your private automation hub.

    The following files should exist under the custom-ee/ directory, with bindep.txt and files/custom.repo being optional:

$ cd $HOME/custom-ee
$ tree .
.
├── bindep.txt
├── execution-environment.yml
├── files
│   ├── ansible.cfg
│   ├── custom.repo
│   ├── hub-ca.crt
│   └── pip.conf
├── requirements.txt
└── requirements.yml

1 directory, 8 files

Additional resources

For more information on the Version 3 format and requirements, see Execution Environment Definition: Version 3 Format.

4.12.3. Building the custom execution environment

Before creating the new custom execution environment, you need an API token from the private automation hub in order to download content.

Generate a token by taking the following steps:

  1. Log in to your private hub.
  2. Choose "Collections" from the left-hand menu.
  3. Choose "API token" under the "Collections" section of the menu.
  4. Once you have the token, set the following environment variable so that Ansible Builder can access the token:

    $ export ANSIBLE_GALAXY_SERVER_PRIVATE_HUB_TOKEN=<your_token>
  5. Create the custom execution environment by using the command:

    $ cd $HOME/custom-ee
    $ ansible-builder build -f execution-environment.yml -t private-hub.example.com/custom-ee:latest -v 3
    Note

    If the build fails with an error that the private hub certificate is signed by an unknown authority, you can pull the required image into the local image cache by running the command:

    $ podman pull private-hub.example.com/ee-minimal-rhel8:latest --tls-verify=false

    Alternately, you can add the private hub CA certificate to the podman certificate store:

    $ sudo mkdir /etc/containers/certs.d/private-hub.example.com
    $ sudo cp $HOME/custom-ee/files/hub-ca.crt /etc/containers/certs.d/private-hub.example.com

4.12.4. Uploading the custom execution environment to the private automation hub

Before the new execution environment image can be used for automation jobs, it must be uploaded to the private automation hub.

First, verify that the execution environment image can be seen in the local podman cache:

$ podman images --format "table {{.ID}} {{.Repository}} {{.Tag}}"
IMAGE ID      REPOSITORY                                TAG
b38e3299a65e  private-hub.example.com/custom-ee         latest
8e38be53b486  private-hub.example.com/ee-minimal-rhel8  latest

Then log in to the private automation hub’s container registry and push the image to make it available for use with job templates and workflows:

$ podman login private-hub.example.com -u admin
Password:
Login Succeeded!
$ podman push private-hub.example.com/custom-ee:latest

4.13. Upgrading between minor Ansible Automation Platform releases

To upgrade between minor releases of Ansible Automation Platform 2, use this general workflow.

Procedure

  1. Download and unarchive the latest Ansible Automation Platform 2 setup bundle.
  2. Create a backup of the existing installation.
  3. Copy the existing installation inventory file into the new setup bundle directory.
  4. Run ./setup.sh to upgrade the installation.

For example, to upgrade from version 2.2.0-7 to 2.3-1.2, make sure that both setup bundles are on the initial controller node where the installation occurred:

$ ls -1F
ansible-automation-platform-setup-bundle-2.2.0-7/
ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz
ansible-automation-platform-setup-bundle-2.3-1.2/
ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz

Back up the 2.2.0-7 installation:

$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ sudo ./setup.sh -b
$ cd ..

Copy the 2.2.0-7 inventory file into the 2.3-1.2 bundle directory:

$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/
$ cd ..

Upgrade from 2.2.0-7 to 2.3-1.2 with the setup.sh script:

$ cd ansible-automation-platform-setup-bundle-2.3-1.2
$ sudo ./setup.sh