Chapter 3. Hardware Recommendations
This reference architecture focuses on:
- Providing detailed configuration instructions
- Validating the interoperability of Red Hat OpenStack Platform Nova Compute instances and Red Hat Ceph Storage on the same physical servers
- Providing automated methods to apply resource isolation to avoid contention between Nova Compute and Ceph OSD services
Red Hat’s experience with early hyper-converged adopters reflects a wide variety of hardware configurations. Baseline hardware performance and sizing recommendations for non-hyper-converged Ceph clusters can be found in the Hardware Selection Guide for Ceph.
Additional considerations for hyper-converged Red Hat OpenStack Platform with Red Hat Ceph Storage server nodes include:
- Network: the recommendation is to configure 2x 10GbE NICs for Ceph. Additional NICs are recommended to meet Nova VM workload networking requirements, including bonding of NICs and trunking of VLANs.
- RAM: the recommendation is to configure 2x the RAM needed by the resident Nova VM workloads.
- OSD Media: the recommendation is to configure 7,200 RPM enterprise HDDs for general-purpose workloads or NVMe SSDs for IOPS-intensive workloads. For workloads requiring large amounts of storage capacity, it may be better to configure separate storage and compute server pools (non-hyper-converged).
- Journal Media: the recommendation is to configure SAS/SATA SSDs for general-purpose workloads or NVMe SSDs for IOPS-intensive workloads.
- CPU: the recommendation is to configure, at minimum, dual-socket 16-core CPUs for servers with NVMe storage media, or dual-socket 10-core CPUs for servers with SAS/SATA SSDs.
Details of the hardware configuration for this reference architecture can be found in Appendix: Environment Details.
3.1. Required Servers
The minimum infrastructure requires at least six bare metal servers, plus a seventh server that may be either bare metal or a virtual machine, provided it is not hosted on the six bare metal servers. These servers should be deployed in the following roles:
- 1 Red Hat OpenStack Platform director server (can be virtualized for small deployments)
- 3 Cloud Controllers/Ceph Monitors (Controller/Mon nodes)
- 3 Compute Hypervisors/Ceph storage servers (Compute/OSD nodes)
As part of this reference architecture, a fourth Compute/Ceph storage node is added to demonstrate scaling of the infrastructure.
Additional Compute/Ceph storage nodes may be initially deployed or added later. However, for deployments spanning more than one datacenter rack (42 nodes), Red Hat recommends the use of standalone storage and compute nodes rather than a hyper-converged approach.
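For illustration only, the node counts for such a deployment might be expressed in a Heat environment file along the lines of the following minimal sketch. The sketch assumes the default Controller and Compute role names; the actual deployment parameters used by this reference architecture are described in later chapters.
parameter_defaults:
  # Three Controller/Mon nodes and three Compute/OSD nodes, as listed above
  ControllerCount: 3
  # Increase to 4 when adding the fourth Compute/Ceph storage node
  ComputeCount: 3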
3.2. Recommended Networks
This reference architecture uses six networks to serve the roles described in this section. How these networks may be trunked as VLANs to connect to the servers is illustrated in Figure 3.1, Network Separation Diagram. Further details of the networks in this reference architecture are located in Appendix: Environment Details.
3.2.1. Ceph Cluster Network
The Ceph OSDs use this network to balance data according to the replication policy. Ordinarily this private network only needs to be accessed by the OSDs; in a hyper-converged deployment, however, the compute role also needs access to it because the OSDs run on the Compute/OSD nodes. Thus, Chapter 5, Define the Overcloud, describes how to modify nic-configs/compute-nics.yaml to ensure that compute nodes are deployed with a connection to this network.
The Heat Provider that Red Hat OpenStack Platform director uses to define this network can be referenced with the following:
OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/storage_mgmt.yaml
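As an illustrative sketch of the kind of change Chapter 5 describes, a VLAN entry such as the following might be added to the network_config section of nic-configs/compute-nics.yaml so that the Compute/OSD nodes receive an address on this network. The device name bond1 is an assumption for this sketch; the parameters shown are the standard ones supplied by the network isolation templates.
- type: vlan
  # Assumed bonded interface carrying the storage VLANs
  device: bond1
  vlan_id: {get_param: StorageMgmtNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: StorageMgmtIpSubnet}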
3.2.2. Red Hat Ceph Storage
The Ceph monitor nodes are accessed via this network. The Heat Provider that Red Hat OpenStack Platform director uses to define this network can be referenced with the following:
OS::TripleO::Network::Storage: /usr/share/openstack-tripleo-heat-templates/network/storage.yaml
3.2.3. External
Red Hat OpenStack Platform director uses the external network to download software updates for the overcloud, and the cloud operator uses this network to access director to manage the overcloud.
The Controllers use the external network to route traffic to the Internet for tenant services that are externally connected via reserved floating IPs. Overcloud users use the external network to access the overcloud.
The Compute nodes do not need to be directly connected to the external network, as their instances communicate via the Tenant network with the Controllers, which then route external traffic on their behalf to the external network.
The Heat Provider that Red Hat OpenStack Platform director uses to define this network can be referenced with the following:
OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
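For illustration, the external network is typically described to Red Hat OpenStack Platform director with parameters such as the following in a network environment file. The CIDR, VLAN ID, and address ranges below are placeholder assumptions, not values from this reference architecture.
parameter_defaults:
  # Placeholder values; substitute the externally routable range for the site
  ExternalNetCidr: 10.0.0.0/24
  ExternalNetworkVlanID: 100
  ExternalAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.50'}]
  ExternalInterfaceDefaultRoute: 10.0.0.1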
3.2.4. OpenStack Internal API
OpenStack provides both public-facing and private API endpoints. This is an isolated network for the private endpoints.
The Heat Provider that Red Hat OpenStack Platform director uses to define this network can be referenced with the following:
OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
3.2.5. OpenStack Tenant Network
OpenStack tenants create private networks implemented by VLAN or VXLAN on this network.
The Heat Provider that Red Hat OpenStack Platform director uses to define this network can be referenced with the following:
OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
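For illustration, whether tenant networks are implemented as VLAN or VXLAN is governed by Neutron settings that can be placed in an environment file, as in the following sketch; the choice of vxlan here is an assumption, not a requirement of this reference architecture.
parameter_defaults:
  # Assumed example: carry tenant traffic as VXLAN tunnels over the Tenant network
  NeutronNetworkType: 'vxlan'
  NeutronTunnelTypes: 'vxlan'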
3.2.6. Red Hat OpenStack Platform director Provisioning
Red Hat OpenStack Platform director serves DHCP and PXE services from this network to install the operating system and other software on the overcloud nodes from bare metal. Red Hat OpenStack Platform director uses this network to manage the overcloud nodes, and the cloud operator uses it to access the overcloud nodes directly by ssh if necessary. The overcloud nodes must be configured to PXE boot from this provisioning network.
Figure 3.1. Network Separation Diagram

In Figure 3.1, Network Separation Diagram, each NIC could be a logical bond of two physical NICs, and it is not required that each network be trunked to the same interface.