Chapter 2. Verifying the Red Hat Hyperconverged Infrastructure for Cloud Requirements
As a technician, you need to verify three core requirements before deploying the Red Hat Hyperconverged Infrastructure for Cloud solution: hardware, software, and network.
2.1. Prerequisites
- Have a valid Red Hat Hyperconverged Infrastructure for Cloud subscription.
2.2. Verifying the Red Hat Hyperconverged Infrastructure for Cloud Hardware Requirements
Hyper-converged infrastructure deployments reflect a wide variety of hardware configurations. Red Hat recommends the following minimums when selecting hardware:
- CPU
- For Controller/Monitor nodes, use dual-socket, 8-core CPUs. For Compute/OSD nodes, use dual-socket, 14-core CPUs for nodes with NVMe storage media, or dual-socket, 10-core CPUs for nodes with SAS/SATA SSDs.
- RAM
- Configure twice the RAM needed by the resident Nova virtual machine workloads.
- OSD Disks
- Use 7,200 RPM enterprise HDDs for general-purpose workloads or NVMe SSDs for IOPS-intensive workloads.
- Journal Disks
- Use SAS/SATA SSDs for general-purpose workloads or NVMe SSDs for IOPS-intensive workloads.
- Network
- Use two 10GbE NICs for Red Hat Ceph Storage (RHCS) nodes. Additionally, use dedicated NICs to meet the Nova virtual machine workload requirements. See Section 2.4, “Verifying the Red Hat Hyperconverged Infrastructure for Cloud Network Requirements” for more details.
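Before deployment, you can spot-check a node against these minimums from a shell. The following is a minimal sketch, assuming a Red Hat Enterprise Linux node with the ethtool package installed; the interface name enp1s0 is a placeholder for one of the node's 10GbE NICs.
Verify the CPU socket and core counts:
# lscpu | grep -E '^(Socket|Core)'
Verify the total RAM:
# free -g
Verify whether the OSD disks are rotational (ROTA=1) or solid-state (ROTA=0):
# lsblk -d -o NAME,ROTA,SIZE,TYPE
Verify the link speed of a storage NIC:
# ethtool enp1s0 | grep Speed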
Table 2.1. Minimum Node Quantity
| Qty. | Role | Physical / Virtual |
|---|---|---|
| 1 | Red Hat OpenStack Platform director (RHOSP-d) | Either* |
| 3 | RHOSP Controller & RHCS Monitor | Physical |
| 3 | RHOSP Compute & RHCS OSD | Physical |
The RHOSP-d node can be virtualized for small deployments, that is, deployments of less than 20 TB in total storage capacity. For deployments larger than 20 TB, Red Hat recommends running the RHOSP-d node on a physical node. Additional hyper-converged compute/storage nodes can be deployed initially or added at a later time.
Red Hat recommends using standalone compute and storage nodes for deployments spanning more than one datacenter rack, that is, more than 42 nodes.
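For example, with illustrative numbers: three OSD nodes with four 2 TB HDDs each provide 3 × 4 × 2 TB = 24 TB of raw capacity. Because this exceeds the 20 TB threshold, Red Hat would recommend a physical RHOSP-d node for such a deployment.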
2.3. Verifying the Red Hat Hyperconverged Infrastructure for Cloud Software Requirements
Verify that the nodes have access to the necessary software repositories. The Red Hat Hyperconverged Infrastructure (RHHI) for Cloud solution requires specific software packages to be installed to function properly.
Prerequisites
- Have a valid Red Hat Hyperconverged Infrastructure for Cloud subscription.
Procedure
Do the following step on any node, as the root user.
Verify the available subscriptions:
# subscription-manager list --available --all --matches="*OpenStack*"
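If a matching subscription is listed, attach it and enable the required repositories. The commands below are a minimal sketch; POOL_ID is a placeholder for the pool ID shown in the previous output, and REPO_ID is a placeholder for each repository ID listed in Appendix A.
# subscription-manager attach --pool=POOL_ID
# subscription-manager repos --enable=REPO_ID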
Additional Resources
- See Appendix A, Red Hat Hyperconverged Infrastructure for Cloud Required Repositories for the required software repositories.
2.4. Verifying the Red Hat Hyperconverged Infrastructure for Cloud Network Requirements
Red Hat recommends using a minimum of five networks to serve various traffic roles:
- Red Hat Ceph Storage
- Ceph Monitor nodes use the public network. Ceph OSDs use the public network if no private storage cluster network exists. Optionally, the OSDs can use a private storage cluster network to handle traffic associated with replication, heartbeating, and backfilling, leaving the public network exclusively for client I/O. Red Hat recommends using a cluster network for larger deployments. The compute role needs access to this network. See the example ceph.conf snippet after this list.
- External
- Red Hat OpenStack Platform director (RHOSP-d) uses the External network to download software updates for the overcloud, and the overcloud operator uses it to access RHOSP-d to manage the overcloud. When tenant services establish connections via reserved floating IP addresses, the Controllers use the External network to route their traffic to the Internet. Overcloud users use the External network to access the overcloud.
- OpenStack Internal API
- OpenStack provides both public facing and private API endpoints. This is an isolated network for the private endpoints.
- OpenStack Tenant Network
- OpenStack tenants create private networks implemented by VLAN or VXLAN on this network.
- Red Hat OpenStack Platform Director Provisioning
- Red Hat OpenStack Platform director serves DHCP and PXE services from this network to install the operating system and other software on the overcloud nodes from bare metal. Red Hat OpenStack Platform director uses this network to manage the overcloud nodes, and the cloud operator uses it to access the overcloud nodes directly by SSH, if necessary. The overcloud nodes must be configured to PXE boot from this provisioning network.
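As noted in the Red Hat Ceph Storage entry above, the public and optional cluster networks are declared in the Ceph configuration. The following ceph.conf snippet is a minimal illustrative sketch; the subnets shown are placeholder values, not recommendations.
[global]
public_network = 172.16.1.0/24
cluster_network = 172.16.2.0/24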
Figure 2.1. Network Separation Diagram

The NICs can be a logical bond of two physical NICs. It is not necessary to trunk every network to the same interface.
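For example, two physical 10GbE NICs can be bonded into one logical interface with NetworkManager. The following is a minimal sketch, assuming LACP (802.3ad) bonding; the interface names enp1s0 and enp2s0 and the connection name bond0 are placeholders.
# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad"
# nmcli connection add type bond-slave con-name bond0-port1 ifname enp1s0 master bond0
# nmcli connection add type bond-slave con-name bond0-port2 ifname enp2s0 master bond0
# nmcli connection up bond0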
2.5. Additional Resources
- For more information, see the Red Hat Ceph Storage Hardware Guide.
