Chapter 4. Solution Design
This solution comprises eleven HPE ProLiant DL servers: eight HPE ProLiant DL360 Gen9 and three HPE ProLiant DL380 Gen9 servers, configured to create a highly available platform for a Red Hat OpenStack Platform 10 deployment. Each server is configured with redundant hard drives (operating system only), redundant fans, and redundant power supplies. The OpenStack Internal API, Storage, Storage Management, External, and Tenant networks run on bonded 10 Gigabit interfaces that are split across two network interface cards. Hardware-based RAID 1 increases the availability of the operating system disks on the Red Hat OpenStack Platform 10 and Red Hat Ceph Storage server nodes. The three-node Red Hat OpenStack Platform 10 controller deployment is configured for high availability clustering using Pacemaker and HAProxy. The three-node Red Hat Ceph Storage cluster provides a highly available storage platform for the OpenStack block and object storage services. Red Hat OpenStack Platform 10 services are monitored using a preinstalled Sensu client, which can be configured during deployment to connect automatically to an existing Sensu monitoring host.
4.1. Red Hat OpenStack Platform 10 director
In this reference architecture the Red Hat OpenStack Platform 10 director is installed as a virtual machine on one of the HPE ProLiant DL360 Gen9 servers. The host runs Red Hat Enterprise Linux 7.3 with KVM, and the director virtual machine itself runs Red Hat Enterprise Linux 7.3. Running the director as a virtual machine frees the physical server to support additional services and virtual machines. It also makes it possible to snapshot the virtual machine at various stages of the installation; snapshots are useful when the system must be reverted to a previously known good state or configuration. In this example there are three additional Red Hat Enterprise Linux 7.3 based virtual machines: a logging VM and two Sensu monitoring VMs.
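As a sketch of this workflow, a snapshot of the director VM can be taken with standard libvirt tooling before a major installation step and rolled back if needed. The domain name `undercloud` below is an assumption; substitute the VM name used in your environment:

```shell
# Take a named snapshot of the director VM before a risky step.
# "undercloud" is a hypothetical domain name; list domains with: virsh list --all
virsh snapshot-create-as undercloud pre-overcloud-deploy \
    "State before running the overcloud deployment"

# List the snapshots available for the VM.
virsh snapshot-list undercloud

# Revert to the known-good state if the system must be rolled back.
virsh snapshot-revert undercloud pre-overcloud-deploy
```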
4.2. Network configuration
The network infrastructure comprises HPE FF 5700 and HPE FF 5930 network switches. The HPE FF 5700 is used for the management (iLO) and provisioning networks. The HPE FF 5930 is used for the following Red Hat OpenStack Platform 10 networks:
- External
- Internal API
- Tenant
- Storage Management
- Storage
An overview of the network connections to each of the individual servers is shown in Figure 2 below.

Figure 2: Red Hat OpenStack Platform 10 Overview
4.2.1. VLAN Configuration
A total of seven VLANs are configured for this solution: VLAN 1 (default), VLAN 104, and VLANs 3040-3044.
Table 2 below lists each VLAN, including the corresponding Red Hat OpenStack Platform network, VLAN name, VLAN ID, and IP subnet.
| Red Hat OpenStack Platform Networks | VLAN Name | VLAN ID | Subnet / CIDR |
|---|---|---|---|
| External | hpe-ext | 104 | 10.19.20.128/25 |
| Provision | hpe-prov | 3040 | 192.168.20.0/24 |
| Internal API | hpe-api | 3041 | 172.16.10.0/24 |
| Tenant | hpe-ten | 3044 | 172.16.160.0/24 |
| Storage | hpe-stor | 3042 | 172.16.6.0/24 |
| Storage Mgmt | hpe-stormgmt | 3043 | 172.16.16.0/24 |
Table 2: Red Hat OpenStack Platform Network and VLAN Assignments
4.2.2. HPE FF 5930 Link Aggregation and Bonding
The HPE FF 5930 is configured with 32 QSFP 40 Gigabit ports. Each port is split into four 10 Gigabit ports and connected to 10 Gigabit network interfaces on the controller, compute, and Ceph storage nodes. Each Red Hat OpenStack Platform 10 compute, controller, and Red Hat Ceph Storage node is equipped with two dual-port 10 Gigabit network interface cards. The four 10 Gigabit interfaces on each node form two bonds per server: one for the Storage network, and one for OpenStack cloud communications, including the Internal API, External, Storage Management, and Tenant networks. These bonded interfaces connect to link aggregation groups on the HPE FF 5930 switch with the corresponding VLANs assigned. This reference architecture uses a single HPE FF 5930; for additional redundancy, a second HPE FF 5930 could be added and configured in an Intelligent Resilient Framework (IRF), allowing the bridge aggregations and bonded interfaces to be distributed across the two physical switches.
Storage Network
- Storage
Cloud Networks
- External
- Tenant
- Internal API
- Storage Management
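On the server side, each bond pairs one port from each physical NIC so that a single card failure does not take down the bond. In the actual deployment the bonds are defined in the director's network interface templates; the following is a minimal sketch of an equivalent LACP bond created with NetworkManager, where the interface names `ens1f0` and `ens2f0` are assumptions:

```shell
# Create an LACP bond for the cloud networks (mode 802.3ad matches the
# "link-aggregation mode dynamic" setting on the switch side).
nmcli connection add type bond con-name bond-cloud ifname bond-cloud \
    bond.options "mode=802.3ad,miimon=100"

# Enslave one port from each physical NIC (hypothetical interface names).
nmcli connection add type ethernet con-name bond-cloud-port1 \
    ifname ens1f0 master bond-cloud
nmcli connection add type ethernet con-name bond-cloud-port2 \
    ifname ens2f0 master bond-cloud
```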
4.2.3. Split the 40 Gigabit QSFP ports into 10 Gigabit ports
As mentioned previously, the HPE FF 5930 used in this reference architecture is configured with 32 QSFP 40 Gigabit ports. Each 40 Gigabit port is split into four 10 Gigabit ports. The first four 40 Gigabit ports (FortyGigE 1/0/1 - FortyGigE 1/0/4) cannot be split; the first port that can be split is port 5 (FortyGigE 1/0/5). Once the interface has been split, there are four 10 Gigabit ports labeled:
```
XGE1/0/5:1  UP  10G(a)  F(a)  T  1
XGE1/0/5:2  UP  10G(a)  F(a)  T  1
XGE1/0/5:3  UP  10G(a)  F(a)  T  1
XGE1/0/5:4  UP  10G(a)  F(a)  T  1
```
The following commands are used to split the FortyGigE interface:

```
[5930-01] interface FortyGigE 1/0/5
[5930-01-FortyGigE1/0/5] using tengige
```

Repeat for interfaces FortyGigE 1/0/6 through FortyGigE 1/0/15.
This will create the Ten Gigabit interfaces shown below in Table 3:
| 10Gb Interface 1 | 10Gb Interface 2 | 10Gb Interface 3 | 10Gb Interface 4 |
|---|---|---|---|
| XGE1/0/5:1 | XGE1/0/5:2 | XGE1/0/5:3 | XGE1/0/5:4 |
| XGE1/0/6:1 | XGE1/0/6:2 | XGE1/0/6:3 | XGE1/0/6:4 |
| XGE1/0/7:1 | XGE1/0/7:2 | XGE1/0/7:3 | XGE1/0/7:4 |
| XGE1/0/8:1 | XGE1/0/8:2 | XGE1/0/8:3 | XGE1/0/8:4 |
| XGE1/0/9:1 | XGE1/0/9:2 | XGE1/0/9:3 | XGE1/0/9:4 |
| XGE1/0/10:1 | XGE1/0/10:2 | XGE1/0/10:3 | XGE1/0/10:4 |
| XGE1/0/11:1 | XGE1/0/11:2 | XGE1/0/11:3 | XGE1/0/11:4 |
| XGE1/0/12:1 | XGE1/0/12:2 | XGE1/0/12:3 | XGE1/0/12:4 |
| XGE1/0/13:1 | XGE1/0/13:2 | XGE1/0/13:3 | XGE1/0/13:4 |
| XGE1/0/14:1 | XGE1/0/14:2 | XGE1/0/14:3 | XGE1/0/14:4 |
| XGE1/0/15:1 | XGE1/0/15:2 | XGE1/0/15:3 | XGE1/0/15:4 |
Table 3: 10Gb Interfaces
4.2.4. Create the Link Aggregation Groups, Assigning Interfaces and VLANs
Link aggregation interfaces are configured on the HPE FF 5930 switch, and the corresponding VLANs are assigned to these interfaces.
Cloud Network example:
```
[5930-01] interface Bridge-Aggregation11
 port link-type trunk
 port trunk permit vlan 1 104 3041 3043 to 3044
 link-aggregation mode dynamic
 lacp edge-port
[5930-01] interface Ten-GigabitEthernet1/0/5:1
 port link-mode bridge
 port link-type trunk
 port trunk permit vlan 1 104 3041 3043 to 3044
 port link-aggregation group 11
[5930-01] interface Ten-GigabitEthernet1/0/5:3
 port link-mode bridge
 port link-type trunk
 port trunk permit vlan 1 104 3041 3043 to 3044
 port link-aggregation group 11
```
Storage Network example:
```
[5930-01] interface Bridge-Aggregation12
 port link-type trunk
 port trunk permit vlan 1 3042
 link-aggregation mode dynamic
 lacp edge-port
[5930-01] interface Ten-GigabitEthernet1/0/5:2
 port link-mode bridge
 port link-type trunk
 port trunk permit vlan 1 3042
 port link-aggregation group 12
[5930-01] interface Ten-GigabitEthernet1/0/5:4
 port link-mode bridge
 port link-type trunk
 port trunk permit vlan 1 3042
 port link-aggregation group 12
```
Tables 4 and 5 below show the full list of bridge aggregations, interfaces, and VLANs defined on the HPE FF 5930 for the Red Hat OpenStack Platform 10 cloud and storage networks.
| Cloud Network Aggregations | Interface 1 | Interface 2 | VLANS |
|---|---|---|---|
| Bridge-Aggregation11 | Ten-GigabitEthernet1/0/5:1 | Ten-GigabitEthernet1/0/5:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation21 | Ten-GigabitEthernet1/0/6:1 | Ten-GigabitEthernet1/0/6:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation31 | Ten-GigabitEthernet1/0/7:1 | Ten-GigabitEthernet1/0/7:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation41 | Ten-GigabitEthernet1/0/8:1 | Ten-GigabitEthernet1/0/8:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation51 | Ten-GigabitEthernet1/0/9:1 | Ten-GigabitEthernet1/0/9:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation61 | Ten-GigabitEthernet1/0/10:1 | Ten-GigabitEthernet1/0/10:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation71 | Ten-GigabitEthernet1/0/11:1 | Ten-GigabitEthernet1/0/11:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation81 | Ten-GigabitEthernet1/0/12:1 | Ten-GigabitEthernet1/0/12:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation91 | Ten-GigabitEthernet1/0/13:1 | Ten-GigabitEthernet1/0/13:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation101 | Ten-GigabitEthernet1/0/14:1 | Ten-GigabitEthernet1/0/14:3 | 1,104,3041,3043-3044 |
| Bridge-Aggregation111 | Ten-GigabitEthernet1/0/15:1 | Ten-GigabitEthernet1/0/15:3 | 1,104,3041,3043-3044 |
Table 4: Cloud Network Bridge Aggregations
The HPE FF 5930 Bridge Aggregations, Interfaces, and corresponding VLAN assignments are shown below in Table 5.
| Storage Trunk Aggregations | Interface 1 | Interface 2 | VLANS |
|---|---|---|---|
| Bridge-Aggregation12 | Ten-GigabitEthernet1/0/5:2 | Ten-GigabitEthernet1/0/5:4 | 1,3042 |
| Bridge-Aggregation22 | Ten-GigabitEthernet1/0/6:2 | Ten-GigabitEthernet1/0/6:4 | 1,3042 |
| Bridge-Aggregation32 | Ten-GigabitEthernet1/0/7:2 | Ten-GigabitEthernet1/0/7:4 | 1,3042 |
| Bridge-Aggregation42 | Ten-GigabitEthernet1/0/8:2 | Ten-GigabitEthernet1/0/8:4 | 1,3042 |
| Bridge-Aggregation52 | Ten-GigabitEthernet1/0/9:2 | Ten-GigabitEthernet1/0/9:4 | 1,3042 |
| Bridge-Aggregation62 | Ten-GigabitEthernet1/0/10:2 | Ten-GigabitEthernet1/0/10:4 | 1,3042 |
| Bridge-Aggregation72 | Ten-GigabitEthernet1/0/11:2 | Ten-GigabitEthernet1/0/11:4 | 1,3042 |
| Bridge-Aggregation82 | Ten-GigabitEthernet1/0/12:2 | Ten-GigabitEthernet1/0/12:4 | 1,3042 |
| Bridge-Aggregation92 | Ten-GigabitEthernet1/0/13:2 | Ten-GigabitEthernet1/0/13:4 | 1,3042 |
| Bridge-Aggregation102 | Ten-GigabitEthernet1/0/14:2 | Ten-GigabitEthernet1/0/14:4 | 1,3042 |
| Bridge-Aggregation112 | Ten-GigabitEthernet1/0/15:2 | Ten-GigabitEthernet1/0/15:4 | 1,3042 |
Table 5: Storage Network Bridge Aggregations
4.3. Configure the Red Hat Ceph Storage nodes
The Red Hat Ceph Storage cluster in this solution comprises three HPE ProLiant DL380 Gen9 servers. Three nodes is the absolute minimum size for a replicated cluster; a cluster of five nodes for replication, or seven nodes for erasure coding, would provide better availability and increased performance.
Disk Configuration
The HPE ProLiant DL380 Gen9 ceph-storage nodes are configured with twelve 1.2TB SAS hard drives and two 400GB SAS SSDs. Two of the 1.2TB drives are configured as a RAID 1 array for the operating system; the remaining ten 1.2TB SAS drives are configured as ten individual RAID 0 logical drives to be used for OSDs. The two SSDs are configured as individual RAID 0 logical drives to be used for journal files. Create the arrays and logical drives using the HP Smart Storage Administrator: press F9 during the boot process to access the HPE System Utilities, select System Configuration, select the Smart Array P440ar controller, and then select Exit and launch the HP Smart Storage Administrator (HPSSA). When creating the logical drive for the operating system, take note of the Drive Unique ID located in the logical drive properties. This value is used to identify the boot disk by cross-checking it against the serial number in the Swift data generated by the ironic introspection process.
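The cross-check described above can be sketched as follows. On the undercloud, the introspection document is retrieved with `openstack baremetal introspection data save <node-uuid>`; the file path, disk layout, and serial values below are made up for illustration:

```shell
# Hypothetical introspection document, shaped like the disk inventory that
# ironic introspection stores in Swift (serial values are made up).
cat > /tmp/introspection.json <<'EOF'
{"inventory": {"disks": [
  {"name": "/dev/sda", "size": 1200000000000, "serial": "600508B1001C4ADE"},
  {"name": "/dev/sdb", "size": 400000000000, "serial": "600508B1001CBEEF"}
]}}
EOF

# Print the device whose serial matches the Drive Unique ID noted in HPSSA.
jq -r '.inventory.disks[] | select(.serial == "600508B1001C4ADE") | .name' \
    /tmp/introspection.json
```

The matching device name (here `/dev/sda`) identifies the operating system boot disk among the introspected drives.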