Chapter 4. Migrating isolated nodes to execution nodes

Upgrading from version 1.x to the latest version of the Red Hat Ansible Automation Platform requires platform administrators to migrate data from isolated legacy nodes to execution nodes. This migration is necessary to deploy the automation mesh.

This guide explains how to perform a side-by-side migration. This ensures that the data on your original automation environment remains untouched during the migration process.

The migration process involves the following steps:

  1. Verify upgrade configurations.
  2. Back up the original instance.
  3. Deploy new instance for a side-by-side upgrade.
  4. Recreate instance groups in the new instance using automation controller.
  5. Restore original backup to new instance.
  6. Set up execution nodes and upgrade instance to Red Hat Ansible Automation Platform 2.3.
  7. Configure upgraded controller instance.

4.1. Prerequisites for upgrading Ansible Automation Platform

Before you begin to upgrade Ansible Automation Platform, ensure your environment meets the following node and configuration requirements.

4.1.1. Node requirements

The following specifications are required for the nodes involved in the Ansible Automation Platform upgrade process:

  • 16 GB of RAM for controller nodes, database nodes, execution nodes, and hop nodes.
  • 4 CPUs for controller nodes, database nodes, execution nodes, and hop nodes.
  • 150 GB+ disk space for database node.
  • 40 GB+ disk space for non-database nodes.
  • DHCP reservations use infinite leases to deploy the cluster with static IP addresses.
  • DNS records for all nodes.
  • Red Hat Enterprise Linux 8 or later 64-bit (x86) installed for all nodes.
  • Chrony configured for all nodes.
  • Python 3.9 or later for all content dependencies.
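
A quick way to sanity-check a node against these requirements is to inspect memory, CPU, disk space, and OS release with standard RHEL commands. This is only a rough manual check, not the installer's own validation:

$ free -h                  # total memory
$ nproc                    # CPU count
$ df -h / /var             # available disk space
$ cat /etc/redhat-release  # OS release
$ python3 --version        # Python available for content dependencies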

4.1.2. Automation controller configuration requirements

The following automation controller configurations are required before you proceed with the Ansible Automation Platform upgrade process:

Configuring NTP server using Chrony

Each Ansible Automation Platform node in the cluster must have access to an NTP server. Use the chronyd daemon to synchronize the system clock with NTP servers. This ensures that cluster nodes that use SSL certificates requiring validation do not fail validation if the date and time between nodes are out of sync.

This is required for all nodes used in the upgraded Ansible Automation Platform cluster:

  1. Install chrony:

    # dnf install chrony --assumeyes
  2. Open /etc/chrony.conf using a text editor.
  3. Locate the public server pool section and modify it to include the appropriate NTP server addresses. Only one server is required, but three are recommended. Add the 'iburst' option to speed up the time it takes to properly sync with the servers:

    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server <ntp-server-address> iburst
  4. Save changes within the /etc/chrony.conf file.
  5. Enable and start the chronyd daemon:

    # systemctl --now enable chronyd.service
  6. Verify the chronyd daemon status:

    # systemctl status chronyd.service
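
Optionally, confirm that time synchronization is working by querying chrony directly; the chronyc utility is installed as part of the chrony package:

# chronyc sources
# chronyc tracking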

Attaching Red Hat subscription on all nodes

Red Hat Ansible Automation Platform requires you to have valid subscriptions attached to all nodes. You can verify that your current node has a Red Hat subscription by running the following command:

# subscription-manager list --consumed

If there is no Red Hat subscription attached to the node, see Attaching your Ansible Automation Platform subscription for more information.
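
As a rough sketch, attaching a subscription with subscription-manager might look like the following; the username and pool ID are placeholders, and the linked documentation describes the supported procedure in full:

# subscription-manager register --username <your_red_hat_username>
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>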

Creating non-root user with sudo privileges

Before you upgrade Ansible Automation Platform, it is recommended to create a non-root user with sudo privileges for the deployment process. This user is used for:

  • SSH connectivity.
  • Passwordless authentication during installation.
  • Privilege escalation (sudo) permissions.

The following example uses ansible to name this user. On all nodes used in the upgraded Ansible Automation Platform cluster, create a non-root user named ansible and generate an ssh key:

  1. Create a non-root user:

    # useradd ansible
  2. Set a password for your user:

    # passwd ansible
    Changing password for ansible.
    Old Password:
    New Password:
    Retype New Password:

    Replace ansible with the non-root user from step 1, if you are using a different name.
  3. Generate an ssh key as the user:

    $ ssh-keygen -t rsa
  4. Disable password requirements when using sudo:

    # echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ansible

Copying SSH keys to all nodes

With the ansible user created, copy the ssh key to all the nodes used in the upgraded Ansible Automation Platform cluster. This ensures that when the Ansible Automation Platform installation runs, it can ssh to all the nodes without a password:

$ ssh-copy-id ansible@node-1.example.com
Note

If you are running within a cloud provider, you might instead need to create an ~/.ssh/authorized_keys file containing the public key for the ansible user on all your nodes, and set the permissions on the authorized_keys file so that only the owner (ansible) has read and write access (permissions 600).
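
For the cloud provider case described in the note, a minimal sketch of preparing the authorized_keys file by hand as the ansible user might look like the following; /tmp/ansible_id_rsa.pub is a hypothetical location for the copied public key:

$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ cat /tmp/ansible_id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys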

Configuring firewall settings

Configure the firewall settings on all the nodes used in the upgraded Ansible Automation Platform cluster to permit access to the appropriate services and ports for a successful Ansible Automation Platform upgrade. For Red Hat Enterprise Linux 8 or later, enable the firewalld daemon to permit the access needed by all nodes:

  1. Install the firewalld package:

    # dnf install firewalld --assumeyes
  2. Start the firewalld service:

    # systemctl start firewalld
  3. Enable the firewalld service:

    # systemctl enable --now firewalld
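
Confirm that firewalld is running and enabled at boot:

# firewall-cmd --state
running
# systemctl is-enabled firewalld
enabled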

4.1.3. Ansible Automation Platform configuration requirements

The following Ansible Automation Platform configurations are required before you proceed with the Ansible Automation Platform upgrade process:

Configuring firewall settings for execution and hop nodes

After upgrading your Red Hat Ansible Automation Platform instance, add the automation mesh port on the mesh nodes (execution and hop nodes) to enable automation mesh functionality. The default port used for the mesh networks on all nodes is 27199/tcp. You can configure the mesh network to use a different port by specifying receptor_listener_port as a variable for each node within your inventory file.
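
For example, a minimal sketch of overriding the mesh port for a single node in the inventory file might look like the following; the hostname and port value are placeholders:

[execution_nodes]
execution-node-1.example.com receptor_listener_port=29182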

On your hop and execution nodes, set the firewalld port to be used for installation.

  1. Ensure that firewalld is running:

    $ sudo systemctl status firewalld
  2. Add the firewalld port on each mesh node (for example, port 27199):

    $ sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp
  3. Reload firewalld:

    $ sudo firewall-cmd --reload
  4. Confirm that the port is open:

    $ sudo firewall-cmd --list-ports

4.2. Back up your Ansible Automation Platform instance

Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment:

  1. Navigate to your ansible-tower-setup-latest directory.
  2. Run the ./setup.sh script following the example below:

    $ ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' -e @credentials.yml -b

    backup_dir specifies a directory to save your backup to.
    @credentials.yml passes the password variables and their values, encrypted via ansible-vault.

With a successful backup, a backup file is created at /ansible/mybackup/tower-backup-latest.tar.gz.

This backup will be necessary later to migrate content from your old instance to the new one.
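
Before deploying the new instance, it is worth confirming that the backup archive exists and has a plausible size:

$ ls -lh /ansible/mybackup/tower-backup-latest.tar.gz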

4.3. Deploy a new instance for a side-by-side upgrade

To proceed with the side-by-side upgrade process, deploy a second instance of Ansible Tower 3.8.x with the same instance group configurations. This new instance will receive the content and configuration from your original instance, and will later be upgraded to Red Hat Ansible Automation Platform 2.3.

4.3.1. Deploy a new instance of Ansible Tower

To deploy a new Ansible Tower instance, do the following:

  1. Download the Tower installer version that matches your original Tower instance by navigating to the Ansible Tower installer page.
  2. Navigate to the installer directory, then open the inventory file with a text editor to configure it for a Tower installation:

    1. In addition to any Tower configurations, remove any fields containing isolated_group or instance_group.

      Note

      For more information about installing Tower using the Ansible Automation Platform installer, see the Ansible Automation Platform Installation Guide for your specific installation scenario.

  3. Run the setup.sh script to begin the installation.

Once the new instance is installed, configure the Tower settings to match the instance groups from your original Tower instance.

4.3.2. Recreate instance groups in the new instance

To recreate your instance groups in the new instance, do the following:

Note

Make note of all instance groups from your original Tower instance. You will need to recreate these groups in your new instance.
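
If you prefer the command line for collecting the instance group names from your original instance, one possible sketch queries the Tower REST API, assuming basic authentication is enabled; the hostname and credentials are placeholders, and -k is only needed for self-signed certificates:

$ curl -k -s -u admin:<password> https://original-tower.example.com/api/v2/instance_groups/ | python3 -m json.tool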

  1. Log in to your new instance of Tower.
  2. In the navigation pane, select Administration → Instance groups.
  3. Click Create instance group.
  4. Enter a Name that matches an instance group from your original instance, then click Save.
  5. Repeat until all instance groups from your original instance have been recreated.

4.4. Restore backup to new instance

Running the ./setup.sh script with the restore_backup_file flag migrates content from the backup file of your original 1.x instance to the new instance. This effectively migrates all job histories, templates, and other Ansible Automation Platform related content.

Procedure

  1. Run the following command:

    $ ./setup.sh -r -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -- --ask-vault-pass

    restore_backup_file specifies the location of the Ansible Automation Platform backup database.
    use_compression is set to True because compression was used during the backup process.
    -r sets the restore database option to True.
  2. Log in to your new RHEL 8 Tower 3.8 instance to verify whether the content from your original instance has been restored:

    1. Navigate to Administration → Instance groups. The recreated instance groups should now contain the Total Jobs from your original instance.
    2. Using the side navigation panel, check that your content has been imported from your original instance, including Jobs, Templates, Inventories, Credentials, and Users.

You now have a new instance of Ansible Tower with all the Ansible content from your original instance.

You will upgrade this new instance to Ansible Automation Platform 2.3 so that you keep all your previous data without overwriting your original instance.

4.5. Upgrading to Ansible Automation Platform 2.3

To upgrade your instance of Ansible Tower to Ansible Automation Platform 2.3, copy the inventory file from your original Tower instance to your new Tower instance and run the installer. The Red Hat Ansible Automation Platform installer detects a pre-2.3 inventory file and offers an upgraded inventory file to continue with the upgrade process:

  1. Download the latest installer for Red Hat Ansible Automation Platform from the Red Hat Customer Portal.
  2. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-<latest_version>.tar.gz
  3. Navigate into your Ansible Automation Platform installation directory:

    $ cd ansible-automation-platform-setup-<latest_version>/
  4. Copy the inventory file from your original instance into the directory of the latest installer:

    $ cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup-<latest_version>
  5. Run the setup.sh script:

    $ ./setup.sh

    The setup script pauses and indicates that a "pre-2.x" inventory file was detected, and offers a new file called inventory.new.ini that allows you to continue with the upgrade.

  6. Open inventory.new.ini with a text editor.

    Note

    By running the setup script, the installer modified a few fields from your original inventory file, such as renaming [tower] to [automationcontroller].

  7. Modify the newly generated inventory.new.ini file to configure your automation mesh by assigning relevant variables, nodes, and relevant node-to-node peer connections:

    Note

    The design of your automation mesh topology depends on the automation needs of your environment. It is beyond the scope of this document to provide designs for all possible scenarios. The following is one example automation mesh design. Review the full Ansible Automation Platform automation mesh guide for information on designing it for your needs.

    Example inventory file with a standard control plane consisting of three nodes utilizing hop nodes:

    [automationcontroller]
    control-plane-1.example.com
    control-plane-2.example.com
    control-plane-3.example.com
    
    [automationcontroller:vars]
    node_type=control 1
    peers=execution_nodes 2
    
    
    [execution_nodes]
    execution-node-1.example.com peers=execution-node-2.example.com
    execution-node-2.example.com peers=execution-node-3.example.com
    execution-node-3.example.com peers=execution-node-4.example.com
    execution-node-4.example.com peers=execution-node-5.example.com node_type=hop
    execution-node-5.example.com peers=execution-node-6.example.com node_type=hop 3
    execution-node-6.example.com peers=execution-node-7.example.com
    execution-node-7.example.com
    
    [execution_nodes:vars]
    node_type=execution

    1
    Specifies a control node that runs project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes.
    2
    Specifies peer relationships for node-to-node connections in the [execution_nodes] group.
    3
    Specifies hop nodes that route traffic to other execution nodes. Hop nodes cannot execute automation.
  8. Import or generate an automation hub API token (see the inventory placement sketch after this procedure).

    • Import an existing API token with the automationhub_api_token flag:

      automationhub_api_token=<api_token>
    • Generate a new API token, and invalidate any existing tokens, by setting the generate_automationhub_token flag to True:

      generate_automationhub_token=True
  9. Once you have finished configuring your inventory.new.ini for automation mesh, run the setup script using inventory.new.ini:

    $ ./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass
  10. Once the installation completes, verify that your Ansible Automation Platform has been installed successfully by logging in to the Ansible Automation Platform dashboard UI across all automation controller nodes.
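
For step 8, a sketch of where the automation hub token variables could live in inventory.new.ini, assuming they are placed with the other installer variables in the [all:vars] section (values shown are placeholders):

[all:vars]
automationhub_api_token=<api_token>
# or, to generate a new token and invalidate any existing tokens:
# generate_automationhub_token=True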


4.6. Configuring your upgraded Ansible Automation Platform

4.6.1. Configuring automation controller instance groups

After upgrading your Red Hat Ansible Automation Platform instance, associate your original instances with their corresponding instance groups by configuring settings in the automation controller UI:

  1. Log in to the new controller instance.
  2. Content from the old instance, such as credentials, jobs, and inventories, should now be visible on your controller instance.
  3. Navigate to Administration → Instance Groups.
  4. Associate execution nodes by clicking on an instance group, then click the Instances tab.
  5. Click Associate. Select the node(s) to associate to this instance group, then click Save.
  6. You can also modify the default instance group to disassociate your new execution nodes.