Chapter 4. System requirements
Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.
You must be able to obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation, see Understanding Privilege Escalation.
- You must be able to de-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
- You must configure an NTP client on all nodes. For more information, see Configuring NTP server using Chrony.
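As an illustration of the NTP client configuration the step above refers to, a minimal /etc/chrony.conf pointing at a hypothetical internal time server might look like the following. The server name is a placeholder, not a value from this document:

```
# Hypothetical internal NTP server; replace with your site's time source.
server ntp.example.com iburst
# Record the clock drift rate between restarts.
driftfile /var/lib/chrony/drift
# Step the clock if the offset is large during the first three updates.
makestep 1.0 3
# Keep the hardware clock in sync with the system clock.
rtcsync
```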
4.1. Red Hat Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.
Table 4.1. Base system
- Subscription: Valid Red Hat Ansible Automation Platform
- OS: Red Hat Enterprise Linux 8.6 or later 64-bit (x86, ppc64le, s390x, aarch64)
  Red Hat Ansible Automation Platform is also supported on OpenShift. See Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform for more information.
- Ansible-core: version 2.14 (to install)
  Ansible Automation Platform ships with execution environments that contain ansible-core 2.15.
- Python: 3.8 or later
- Browser: A currently supported version of Mozilla Firefox or Google Chrome
- Database: PostgreSQL version 13
The following are necessary for you to work with project updates and collections:
- Ensure that the network ports and protocols listed in Table 5.9. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server.
- SSL inspection must be disabled either when using self-signed certificates or for the Red Hat domains.
The requirements for systems managed by Ansible Automation Platform are the same as for Ansible. See Getting started with Ansible in the Ansible User Guide.
Additional notes for Red Hat Ansible Automation Platform requirements
- Red Hat Ansible Automation Platform depends on Ansible Playbooks and requires the latest stable version of Ansible to be present before automation controller is installed; however, manual installations of Ansible are no longer required.
- For new installations, automation controller installs the latest release package of Ansible 2.14.
- If performing a bundled Ansible Automation Platform installation, the installation program attempts to install Ansible (and its dependencies) from the bundle for you.
- If you choose to install Ansible on your own, the Ansible Automation Platform installation program detects that Ansible has been installed and does not attempt to reinstall it.
You must install Ansible using a package manager such as dnf, and the latest stable version must be installed for Red Hat Ansible Automation Platform to work properly. Ansible version 2.14 is required for Ansible Automation Platform versions 2.4 and later.
4.2. Automation controller system requirements
Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, node types of control, hybrid, execution, and hop are provided as abstractions to help you design the topology appropriate for your use case:
- Execution nodes
- Run automation. Increase memory and CPU to increase capacity for running more forks.
- Hop nodes
- Serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput, while CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.
- Control nodes
- Process events and run cluster jobs, including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.
- Hybrid nodes
- Run both automation and cluster jobs. Comments on CPU and memory for execution and control nodes also apply to this node type.
Use the following recommendations for node sizing:
On control and hybrid nodes, allocate a minimum of 20 GB to /var/lib/awx for execution environment storage.
Table 4.2. Execution and hop nodes
Table 4.3. Control and hybrid nodes
Actual RAM requirements vary based on how many hosts automation controller manages simultaneously, which is controlled by the forks parameter in the job template or the system ansible.cfg file. To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks, plus 2 GB reserved for automation controller; see Automation controller Capacity Determination and Job Impact for further details. If forks is set to 400, 42 GB of memory is recommended.
- A larger number of hosts can be addressed, though if the fork count is less than the total host count, more passes across the hosts are required. These RAM limitations are avoided when using rolling updates, or when using the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible. They are also avoided in cases where automation controller is producing or deploying images such as AMIs. All of these are good approaches to managing larger environments. For further questions, contact Ansible support through the Red Hat Customer portal.
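The sizing rule above (1 GB of memory per 10 forks, plus a 2 GB reservation for automation controller) can be sketched as a quick shell calculation; the fork count here is simply the example value from the text:

```shell
# Memory sizing rule: 1 GB per 10 forks + 2 GB reserved for automation controller.
forks=400
recommended_gb=$(( forks / 10 + 2 ))
echo "${recommended_gb} GB of memory recommended for ${forks} forks"
```

For 400 forks this prints 42 GB, matching the recommendation above.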
4.3. Automation hub system requirements
Automation hub enables you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.
Automation hub has the following system requirements:
- RAM: 8 GB minimum
  For capacity based on forks in your configuration, see additional resources.
- Disk: 60 GB
  A minimum of 40 GB should be dedicated to /var for collection storage.
Private automation hub
If you install private automation hub using an internal address, and your certificate only encompasses the external address, the resulting installation cannot be used as a container registry without certificate issues.
To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file. This adds the external address to /etc/pulp/settings.py, and implies that you only want to use the external address.
For information on inventory file variables, see Inventory File Variables in the Red Hat Ansible Automation Platform Installation Guide.
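As a sketch, the variable described above would appear in the installer inventory file like this; the hostname is the placeholder from the text, and the group name assumes the standard installer inventory layout:

```ini
[automationhub]
pah.example.com

[all:vars]
automationhub_main_url=https://pah.example.com
```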
4.3.1. High availability automation hub requirements
Before deploying a high availability (HA) automation hub, ensure that you have a shared filesystem installed in your environment and that you have configured your network storage system, if applicable.
4.3.1.1. Required shared filesystem
A high availability automation hub requires you to have a shared file system, such as NFS, already installed in your environment. Before you run the Red Hat Ansible Automation Platform installer, verify that the /var/lib/pulp directory is available across your cluster as part of the shared file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected in one of your nodes, causing your high availability automation hub setup to fail.
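For example, with an NFS-backed shared file system, each automation hub node might mount the shared export at /var/lib/pulp through an /etc/fstab entry like the following. The server name and export path are placeholders, not values from this document:

```
# Hypothetical NFS export mounted at the path the installer checks for.
nfs.example.com:/exports/pulp  /var/lib/pulp  nfs  defaults,_netdev  0 0
```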
4.3.1.2. Network storage installation requirements
If you intend to install an HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the necessary ports required by your shared storage system before running the Ansible Automation Platform installer.
Install and configure firewalld by executing the following command:
$ dnf install firewalld
Add your network storage under <service> using the following command:
$ firewall-cmd --permanent --add-service=<service>
Note
For a list of supported services, use the firewall-cmd --get-services command.
Reload to apply the configuration:
$ firewall-cmd --reload
4.4. Event-Driven Ansible controller system requirements
The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on demand, depending on the number of CPU cores. Use the following minimum requirements to run a maximum of 9 simultaneous activations:
- Disk: 40 GB minimum
4.5. PostgreSQL requirements
Red Hat Ansible Automation Platform uses PostgreSQL 13.
- PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.
You can verify that your automation controller instance has access to the database from the automation controller node.
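The specific check is not shown in this extract; assuming a standard automation controller installation, the bundled awx-manage utility provides a database check along these lines (treat the command as an assumption, not part of this document):

```
# Run on the automation controller node; fails if the database is unreachable.
$ awx-manage check_db
```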
Table 4.4. Database
20 GB dedicated hard disk space
Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. See Database Settings for more information on the settings you can use to improve database performance.
For more information on tuning your PostgreSQL server, see the PostgreSQL documentation.
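As an illustration of the kind of settings the PostgreSQL tuning documentation covers, a hypothetical postgresql.conf fragment for a dedicated database node might look like the following. The values are assumptions for a mid-sized host, not recommendations from this document:

```
# Hypothetical starting points for a dedicated PostgreSQL node; size to your hardware.
shared_buffers = 4GB            # roughly 25% of RAM on a 16 GB host
effective_cache_size = 12GB     # planner hint: memory available for caching
work_mem = 64MB                 # per-sort/hash operation memory
maintenance_work_mem = 512MB    # vacuum runs and index builds
```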
4.5.1. Enabling the hstore extension for the automation hub PostgreSQL database
From Ansible Automation Platform 2.4, the database migration script uses hstore fields to store information, therefore the hstore extension must be enabled on the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
However, when the PostgreSQL database is external, you must carry out this step manually before automation hub installation.
If the hstore extension is not enabled before automation hub installation, a failure is raised during database migration.
Check if the extension is available on the PostgreSQL server (automation hub database):
$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
where <automation hub database> is the name of your automation hub database.
This gives an output similar to the following:
 name   | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)
This indicates that the hstore 1.7 extension is available, but not enabled.
If the hstore extension is not available on the PostgreSQL server, the result is similar to the following:
 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.
To install the RPM package, use the following command:
$ dnf install postgresql-contrib
Create the hstore PostgreSQL extension on the automation hub database with the following command:
$ psql -d <automation hub database> -c "CREATE EXTENSION hstore"
In the following output, the installed_version field contains the hstore extension version used, indicating that hstore is enabled:
 name   | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
4.5.2. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database
The following procedure describes how to benchmark the write/read IOPS performance of the storage system to check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met.
You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.
To install fio, run the following command as the root user:
# yum -y install fio
You have adequate disk space to store the fio test data log files. The examples shown in this procedure require at least 60 GB of disk space in the /tmp directory:
- numjobs sets the number of jobs run by the command.
- size=10G sets the file size generated by each job.
To reduce the amount of test data, adjust the value of the size parameter.
Run a random write test:
$ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
  --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
  2>> /tmp/fio_write_iops_error.log
Run a random read test:
$ fio --name=read_iops --directory=/tmp \
  --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
  --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
  --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
  2>> /tmp/fio_read_iops_error.log
Review the results:
In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.
The following example shows the line in the log file for the random read test:
$ cat /tmp/fio_benchmark_read_iops.log
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
[…]
   iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
[…]
You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
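To pull just the iops summary lines out of the two log files produced above, a simple grep works; the paths are the ones used by the benchmark commands in this procedure:

```shell
# Print the iops summary line from each benchmark log.
grep -h "iops" /tmp/fio_benchmark_write_iops.log /tmp/fio_benchmark_read_iops.log
```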