Chapter 4. Solution Design

This solution consists of eleven HPE ProLiant DL servers: eight HPE ProLiant DL360 Gen9 and three HPE ProLiant DL380 Gen9 servers. The servers are configured to create a highly available platform for a Red Hat OpenStack Platform 10 deployment. Each server is configured with redundant hard drives (operating system only), redundant fans, and redundant power supplies. The OpenStack Internal API, Storage, Storage Management, External, and Tenant networks run on bonded 10 Gigabit interfaces that are split across two network interface cards. Hardware-based RAID 1 provides increased availability for the operating system disks of the Red Hat OpenStack Platform 10 and Red Hat Ceph Storage nodes. The three-node Red Hat OpenStack Platform 10 controller deployment is configured for high availability clustering using Pacemaker and HAProxy. The three-node Red Hat Ceph Storage cluster provides a highly available storage platform for the OpenStack block and object storage services. Red Hat OpenStack Platform 10 services are monitored using a preinstalled Sensu client, which can be configured to connect automatically to an existing Sensu monitoring host during deployment.
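After the overcloud is deployed, the controller cluster state can be checked from any of the controller nodes. The following is a quick sketch; the node name overcloud-controller-0 and the heat-admin user are the director defaults and may differ in other deployments.

ssh heat-admin@overcloud-controller-0 'sudo pcs status'          # Pacemaker membership and resource state
ssh heat-admin@overcloud-controller-0 'sudo pcs resource show'   # summary of the HA-managed resources, including haproxy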

4.1. Red Hat OpenStack Platform 10 director

In this reference architecture the Red Hat OpenStack Platform 10 director is installed as a virtual machine on one of the HPE ProLiant DL360 Gen9 servers. The host runs Red Hat Enterprise Linux 7.3 with KVM, and the Red Hat OpenStack Platform 10 director virtual machine runs Red Hat Enterprise Linux 7.3. By running the Red Hat OpenStack Platform 10 director as a virtual machine, the physical server can be used to support additional services and virtual machines. Additionally, running the Red Hat OpenStack Platform director on a virtualization platform provides the ability to snapshot the virtual machine at various stages of the installation. Virtual machine snapshots are also useful if it is necessary to revert the system to a previously known good state or configuration. In this example there are three additional Red Hat Enterprise Linux 7.3 based virtual machines: a logging virtual machine and two Sensu monitoring virtual machines.
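For example, a snapshot of the director virtual machine can be taken from the KVM host before a major step such as the overcloud deployment. This is a minimal sketch; the virtual machine name rhosp-director and the snapshot name are illustrative.

# Create a named snapshot of the director VM on the KVM host
virsh snapshot-create-as rhosp-director pre-overcloud-deploy --description "Known good state before overcloud deployment"

# List available snapshots and revert if required
virsh snapshot-list rhosp-director
virsh snapshot-revert rhosp-director pre-overcloud-deploy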

4.2. Network configuration

The network infrastructure consists of HPE FF 5700 and HPE FF 5930 network switches. The HPE FF 5700 is used for the management (iLO) and provisioning networks. The HPE FF 5930 is used for the following Red Hat OpenStack Platform 10 networks:

  • External
  • Internal API
  • Tenant
  • Storage Management
  • Storage

An overview of the network connections to each of the individual servers is shown in Figure 2 below.

Figure 2: Red Hat OpenStack Platform 10 Overview

4.2.1. VLAN Configuration

A total of seven VLANs are configured for this solution: VLAN 1 (default), VLAN 104, and VLANs 3040 through 3044.

Table 2 below lists each VLAN, including the corresponding Red Hat OpenStack Platform network, VLAN name, VLAN ID, and IP subnet; a sample switch-side VLAN configuration follows the table.

Red Hat OpenStack Platform Network   VLAN Name      VLAN ID   Subnet / CIDR
External                             hpe-ext        104       10.19.20.128/25
Provision                            hpe-prov       3040      192.168.20.0/24
Internal API                         hpe-api        3041      172.16.10.0/24
Tenant                               hpe-ten        3044      172.16.160.0/24
Storage                              hpe-stor       3042      172.16.6.10/24
Storage Mgmt                         hpe-stormgmt   3043      172.16.16.0/24

Table 2: Red Hat OpenStack Platform Network and VLAN Assignments
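The VLANs carried by the HPE FF 5930 can be created and trunked from the Comware CLI. The following is a minimal sketch using the Internal API VLAN as an example; the switch prompt, the interface chosen (one of the split 10 Gigabit ports described in the next section), and the set of trunked VLANs are illustrative and should be adjusted to match the cabling shown in Figure 2.

[5930-01] vlan 3041
[5930-01-vlan3041] name hpe-api
[5930-01-vlan3041] quit
[5930-01] interface Ten-GigabitEthernet 1/0/5:1
[5930-01-Ten-GigabitEthernet1/0/5:1] port link-type trunk
[5930-01-Ten-GigabitEthernet1/0/5:1] port trunk permit vlan 104 3041 to 3044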

4.2.3. Split the 40 Gigabit QSFP ports into 10 Gigabit ports

As mentioned previously, the HPE FF 5930 used in this reference architecture is configured with thirty-two 40 Gigabit QSFP ports. Each 40 Gigabit port is split into four 10 Gigabit ports. The first four 40 Gigabit ports (FortyGigE 1/0/1 - FortyGigE 1/0/4) cannot be split; the first port that can be split is port 5 (FortyGigE 1/0/5). Once an interface has been split, there are four 10 Gigabit ports labeled:

  • XGE1/0/5:1 UP 10G(a) F(a) T 1
  • XGE1/0/5:2 UP 10G(a) F(a) T 1
  • XGE1/0/5:3 UP 10G(a) F(a) T 1
  • XGE1/0/5:4 UP 10G(a) F(a) T 1

The following commands are used to split the FortyGigE interface:

[5930-01] interface FortyGigE 1/0/5
[5930-01-FortyGigE1/0/5] using tengige

Repeat for interfaces FortyGigE 1/0/6 through FortyGigE 1/0/15.

This will create the Ten Gigabit interfaces shown below in Table 3:

10Gb Interface 1   10Gb Interface 2   10Gb Interface 3   10Gb Interface 4
XGE1/0/5:1         XGE1/0/5:2         XGE1/0/5:3         XGE1/0/5:4
XGE1/0/6:1         XGE1/0/6:2         XGE1/0/6:3         XGE1/0/6:4
XGE1/0/7:1         XGE1/0/7:2         XGE1/0/7:3         XGE1/0/7:4
XGE1/0/8:1         XGE1/0/8:2         XGE1/0/8:3         XGE1/0/8:4
XGE1/0/9:1         XGE1/0/9:2         XGE1/0/9:3         XGE1/0/9:4
XGE1/0/10:1        XGE1/0/10:2        XGE1/0/10:3        XGE1/0/10:4
XGE1/0/11:1        XGE1/0/11:2        XGE1/0/11:3        XGE1/0/11:4
XGE1/0/12:1        XGE1/0/12:2        XGE1/0/12:3        XGE1/0/12:4
XGE1/0/13:1        XGE1/0/13:2        XGE1/0/13:3        XGE1/0/13:4
XGE1/0/14:1        XGE1/0/14:2        XGE1/0/14:3        XGE1/0/14:4
XGE1/0/15:1        XGE1/0/15:2        XGE1/0/15:3        XGE1/0/15:4

Table 3: 10Gb Interfaces
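After the ports have been split, the new 10 Gigabit interfaces can be confirmed from the switch CLI. This is a quick verification sketch; the exact output columns vary with the Comware software version, but the four XGE1/0/5:1 through XGE1/0/5:4 entries listed earlier should appear.

[5930-01] display interface brief | include XGE1/0/5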

4.3. Configure the Red Hat Ceph Storage nodes

The Red Hat Ceph Storage cluster in this solution consists of three HPE ProLiant DL380 Gen9 servers. Three nodes is the absolute minimum size for a replicated cluster; a cluster of five nodes for replication, or seven nodes for erasure coding, would provide better availability and increased performance.
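After deployment, the health of the cluster and the replica count of the OpenStack pools can be verified from one of the controller or ceph-storage nodes. The following is a minimal sketch; the volumes pool name is assumed to be the default created by the director for Cinder block storage.

sudo ceph -s                          # overall cluster health, monitor quorum, and OSD count
sudo ceph osd tree                    # OSD layout across the three ceph-storage nodes
sudo ceph osd pool get volumes size   # replica count for the Cinder volumes pool (assumed default name)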

Disk Configuration

The HPE ProLiant DL380 Gen9 ceph-storage nodes are configured with twelve 1.2TB SAS hard drives and two 400GB SAS SSDs. Two of the 1.2TB drives are configured as a RAID 1 array for the operating system; the remaining ten 1.2TB SAS drives are configured as ten individual RAID 0 logical drives to be used for OSDs. The two SSDs are configured as individual RAID 0 logical drives to be used for journal files.

Create the arrays and logical drives using the HP Smart Storage Administrator (HPSSA). Access the HPE System Utilities by pressing F9 during the boot process, select System Configuration, select the Smart Array P440ar controller, and then select Exit and launch the HP Smart Storage Administrator (HPSSA). When creating the logical drive for the operating system, take note of the Drive Unique ID located in the logical drive properties. The Drive Unique ID is used to identify the boot disk by cross-checking its value against the serial numbers in the Swift data generated by the ironic introspection process.
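Once introspection has completed, the disk serial numbers collected by ironic can be retrieved on the undercloud and compared against the Drive Unique ID recorded in HPSSA. The following is a minimal sketch run as the stack user on the director; the node UUID and serial number are placeholders, and jq is assumed to be installed.

source ~/stackrc

# List the registered nodes to find the UUID of the ceph-storage node
openstack baremetal node list

# Display the disks discovered during introspection, including their serial numbers
openstack baremetal introspection data save <node-uuid> | jq '.inventory.disks[] | {name, size, serial}'

# Optionally pin the operating system disk as the root device using the matching serial number
ironic node-update <node-uuid> add properties/root_device='{"serial": "<drive-unique-id>"}'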