Chapter 2. Solution Design
2.1. Overview
This solution provides guidance for automating the deployment of Red Hat OpenShift Container Platform 3.5 on HPE ProLiant DL rack-mounted servers. Deployment automation is driven by Ansible Tower playbooks and workflows. The Ansible playbooks integrate with HPE OneView, Red Hat Satellite, Red Hat OpenShift Container Platform, and Red Hat Container-native storage. Ansible workflows coordinate the execution of the Ansible playbooks, allowing a one-click deployment of the entire OpenShift Container Platform environment on nine ProLiant DL bare-metal servers.
This solution design deploys Red Hat OpenShift Container Platform directly on bare-metal physical servers. This differs slightly from private and public cloud deployment models, where Red Hat OpenShift Container Platform is deployed on virtual machine instances. Deploying Red Hat OpenShift on physical servers reduces the number of operating system instances, which results in a smaller attack surface and in the lower management and licensing overhead typically associated with a hypervisor-based solution. In addition, some performance benefits may be realized without the additional virtualization layer and the associated overhead of running virtual machines.
Below is a high-level description of the solution workflow. Detailed information on Ansible Tower, HPE OneView, Red Hat Satellite, and Red Hat OpenShift Container Platform is provided in subsequent chapters.
Red Hat Ansible Tower will leverage the HPE Ansible modules for OneView to register the physical servers with HPE OneView 3.1 and create server profiles for each of the respective server roles. The server profiles will be applied to the servers, and the latest firmware baseline will be installed. Local disk configurations will be applied to the Smart Array controllers: RAID 1 for the operating system and RAID 6 for the Container-native storage nodes. Ansible Tower playbooks will register the servers with Red Hat Satellite, and Red Hat Enterprise Linux 7.3 will be installed on each of the ProLiant DL servers. Satellite is configured with Content Views that provide consistent version management of the software and repositories used in the solution. Ansible playbooks will then install a Red Hat OpenShift Container Platform 3.5 cluster consisting of three Master nodes, three Infrastructure nodes, and three hyper-converged Container-native storage nodes, which will also host end-user applications.
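The server-profile step of this workflow might be sketched as the following playbook fragment. This is an illustrative sketch only: the module and parameter names follow HPE's oneview-ansible project, and the profile and template names are placeholders, not values from this solution.

```yaml
# Illustrative sketch - profile/template names are placeholders,
# and module parameters follow HPE's oneview-ansible project.
- name: Apply an OpenShift role-specific server profile
  hosts: localhost
  tasks:
    - name: Create a server profile from a role-specific template
      oneview_server_profile:
        config: "/etc/oneview/oneview_config.json"  # appliance credentials file
        state: present
        data:
          name: "ocp-master-01"                     # placeholder profile name
          server_template: "ocp-master-template"    # placeholder template name
```

When run against an appliance, a task like this assigns a profile (firmware baseline, local storage, connections) to available server hardware matching the template.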
2.2. Hardware
The focus of this reference architecture is the installation and automation of Red Hat OpenShift Container Platform on HPE ProLiant DL based servers using Red Hat Ansible Tower, Red Hat Satellite, and HPE OneView. The equipment used in this reference architecture was based on the availability of existing lab equipment and may not reflect specific sizing requirements or recommendations for Red Hat OpenShift Container Platform. Refer to the Red Hat OpenShift Container Platform documentation sizing and hardware requirements at https://access.redhat.com/documentation/en-us/openshift_container_platform/3.5/html/installation_and_configuration/installing-a-cluster#sizing for guidance on minimum hardware requirements.
The server hardware used in this reference architecture consists of Hewlett Packard Enterprise (HPE) rack-mounted servers, specifically the HPE ProLiant DL360 Gen9 and HPE ProLiant DL380 Gen9 models. The network switches used in this reference architecture are HPE FlexFabric switches, the HPE FF 5930 and HPE FF 5700. The following section provides an overview of the technical specifications for the servers and network equipment used in this reference architecture.
The servers and networking components are shown below in Figure 1. The top-of-rack switches are one HPE FF 5700, used for management (iLO) and provisioning, and two HPE FF 5930 switches, used for production.

Figure 1: HPE ProLiant DL Rack
2.3. HPE ProLiant Servers
HPE ProLiant DL380 Gen9 Server
The HPE ProLiant DL380 Gen9 is an industry-leading rack-mount server. This reference architecture will leverage the HPE DL380 Gen9 server for the application nodes, which will also host the Container-native storage.
Specifications:
- Processor family: Intel Xeon E5-2600 v3 product family; Intel Xeon E5-2600 v4 product family
- Processor cores available: 22, 20, 18, 16, 14, 12, 10, 8, 6, or 4
- Maximum memory: 3.0 TB, with 128 GB DDR4
- Storage controller: (1) Dynamic Smart Array B140i and/or (1) Smart Array P440ar or (1) Smart Array P840, depending on model
- Form factor (fully configured): 2U
Refer to the following link for complete HPE ProLiant DL380 Gen9 server specifications:
https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.specifications.hpe-proliant-dl380-gen9-server.7271241.html
Red Hat Container-native storage Nodes HPE ProLiant DL380 Gen9 Configuration
The Red Hat Container-native storage nodes consist of three HPE ProLiant DL380 Gen9 servers with the following configuration:
- CPU – 1 x Intel Xeon E5-2643v3 (3.4GHz/6-core/20MB/135W)
- Memory – 64GB Single Rank x4 DDR4-2133
- Storage - 12 HP 1.2TB 6G SAS 10K rpm SFF Drives and 2 HP 400GB 12G SAS Write Intensive SFF Solid State Drives
- Network - HP Ethernet 10Gb 2-port 560FLR-SFP Adapter and ALOM 2 port 10Gb NIC
HPE ProLiant DL360 Gen9 Server
The HPE ProLiant DL360 Gen9 is a 1U rack-mount server that will host the Red Hat OpenShift Container Platform master nodes and the load balancer.
Specifications:
- Processor family: Intel Xeon E5-2600 v3 product family; Intel Xeon E5-2600 v4 product family
- Processor cores available: 22, 20, 18, 16, 14, 12, 10, 8, 6, or 4
- Maximum memory: 1.5 TB
- Storage controller: (1) Dynamic Smart Array B140i or (1) H240ar Host Bus Adapter or (1) Smart Array P440ar, depending on model
- Form factor (fully configured): 1U
Refer to the following link for complete HPE ProLiant DL360 Gen9 server specifications:
https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.hpe-proliant-dl360-gen9-server.7252836.html
KVM Hypervisor ProLiant DL360 Gen9 Configuration
- Server Platform – HPE ProLiant DL360 Gen9
- CPU – 2 x Intel Xeon E5-2699v3 (2.6GHz/18-core/45MB/145W)
- Memory – 64GB Dual Rank x4 DDR4-2133
- Storage – 2 x HPE 1.2TB 6G SAS 10K rpm SFF (2.5-inch) Drives
- Network - HPE Ethernet 10Gb 2-port 560FLR-SFP+ FIO Adapter and ALOM 2 port 10Gb NIC
Red Hat OpenShift Container Platform Master Nodes HPE ProLiant DL360 Gen9 Configuration
The Master nodes are deployed on three physical HPE ProLiant DL360 Gen9 servers with the following configuration:
- CPU - 2 x Intel Xeon E5-2699v3 (2.6GHz/18-core/45MB/145W)
- Memory - 64GB Dual Rank x4 DDR4-2133
- Storage - 2 x HPE 1.2TB 6G SAS 10K rpm SFF (2.5-inch) Drives
- Network - HPE Ethernet 10Gb 2-port 560FLR-SFP+ FIO Adapter and ALOM 2 port 10Gb NIC
HAProxy and Infrastructure Nodes HPE ProLiant DL360 Gen9 Configuration
The load balancer and infrastructure nodes are deployed on three ProLiant DL360 Gen9 servers with the following configuration:
- CPU - 2 x Intel Xeon E5-2690v3 (2.6GHz/12-core/30MB/135W) Processor
- Memory - 256GB Dual Rank x4 DDR4-2133
- Storage - 2 x HPE 1.2TB 6G SAS 10K rpm SFF (2.5-inch) Drives
- Network - HPE Ethernet 10Gb 2-port 560FLR-SFP+ FIO Adapter and ALOM 2 port 10Gb NIC
2.4. HPE Network
HPE FlexFabric 5930 32QSFP+ Switch
This solution uses the HPE FlexFabric 5930 switch for the Red Hat OpenShift Container Platform production networks. The HPE specification overview for this switch is provided below:
"HPE FlexFabric 5930 Series switches are high density, ultra low latency 10 Gigabit Ethernet (GbE) and 40 GbE top of rack (ToR) datacenter switches. This 1U high model has 32x QSFP+ 40GbE ports, 2x power supply slots, and 2x fan tray slots. Power supplies and fan trays must be ordered separately. The FlexFabric 5930 Switch Series is ideally suited for deployment at the aggregation or server access layer of large enterprise data centers, or at the core layer of medium sized enterprises. These are optimized to meet the increasing requirements for higher performance server connectivity, convergence of Ethernet and storage traffic, the capability to handle virtual environments, and ultra low latency. The FlexFabric product line offers modular core, aggregation, top of rack switching for datacenters."
This description and the full specifications for the HPE FF 5930 can be found at the following link:
https://www.hpe.com/us/en/product-catalog/networking/networking-switches/pip.overview.networking-switches.7526493.html
HPE FlexFabric 5700 48G 4XG 2QSFP+ Switch
The iLO management and provisioning networks use the HPE FlexFabric 5700 switch. The HPE specification overview for the HPE FF 5700 switch is provided below:
"HPE FlexFabric 5700 Series switches are cost effective, high density, ultra low latency, top of rack (ToR) data center switches. This model comes with 48x 10/100/1000 ports, 4x fixed 1000 / 10000 SFP+ ports, and 2x QSFP+ for 40 GbE connections. They are ideally suited for deployment at the server access layer of large enterprise data centers. The FlexFabric product line offers modular core, aggregation, top of rack switching for data centers."
This description and the full specifications for the HPE FF 5700 can be found at the following link:
https://www.hpe.com/us/en/product-catalog/networking/networking-switches/pip.overview.networking-switches.6638103.html
Physical Network configuration
The network infrastructure is comprised of an HPE FF 5700 and two HPE FF 5930 network switches. The HPE FF 5700 is used for the management (iLO) and provisioning networks. The HPE FF 5930 is used for connectivity between Red Hat OpenShift Container Platform Master, Infrastructure, Application, and Container-native storage nodes.

Figure 2: Physical Network
HPE FF 5930 Link Aggregation and Bonding
The HPE FF 5930 used in this reference architecture is configured with 32 QSFP+ 40 Gigabit ports. Each 40 Gigabit port will be split into four 10 Gigabit ports and connected to 10 Gigabit network interfaces. The first four 40 Gigabit ports (FortyGigE 1/0/1 - FortyGigE 1/0/4) cannot be split; the first port that can be split is port 5 (FortyGigE 1/0/5). Once the interface has been split, there will be four 10 Gigabit ports labeled:
- XGE1/0/5:1 UP 10G(a) F(a) T 1
- XGE1/0/5:2 UP 10G(a) F(a) T 1
- XGE1/0/5:3 UP 10G(a) F(a) T 1
- XGE1/0/5:4 UP 10G(a) F(a) T 1
The following commands are used to split the FortyGigE interface:
[5930-01] interface FortyGigE 1/0/5
[5930-01-FortyGigE1/0/5] using tengige
Repeat for interfaces FortyGigE 1/0/6 - FortyGigE 1/0/9 and FortyGigE 2/0/5 - FortyGigE 2/0/9.
This solution will use the 10 Gigabit interfaces shown below in Table 1. Each server is configured with dual-port 10 Gigabit Ethernet adapters. The first port of each adapter is connected to the interfaces listed in Table 1. An Ethernet bond is configured across the first port on each 10 Gigabit Ethernet adapter, and each port is connected to a different HPE FF 5930 switch.
| 10Gb Interface 1 | 10Gb Interface 2 |
|---|---|
| XGE1/0/5:1 | XGE2/0/5:1 |
| XGE1/0/5:3 | XGE2/0/5:3 |
| XGE1/0/6:1 | XGE2/0/6:1 |
| XGE1/0/6:3 | XGE2/0/6:3 |
| XGE1/0/7:1 | XGE2/0/7:1 |
| XGE1/0/7:3 | XGE2/0/7:3 |
| XGE1/0/8:1 | XGE2/0/8:1 |
| XGE1/0/8:3 | XGE2/0/8:3 |
| XGE1/0/9:1 | XGE2/0/9:1 |
Table 1: 10Gb Interfaces
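On the server side, a bond matching the switch's dynamic (LACP) aggregation is typically configured in Red Hat Enterprise Linux 7 with bonding mode 802.3ad. Below is a minimal sketch of the interface configuration files; the device names (ens1f0, ens2f0) and IP values are placeholders, not values from this solution.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch; IP values are placeholders)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
IPADDR=192.168.10.11
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ens1f0 (repeat for ens2f0)
DEVICE=ens1f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

Mode 802.3ad on the server pairs with `link-aggregation mode dynamic` on the HPE FF 5930 so that both ends negotiate the aggregation via LACP.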
Create the Link Aggregation Groups
Link Aggregation interfaces are configured on the HPE FF 5930 switch.
Link Aggregation Configuration example:
interface Bridge-Aggregation11
 port link-type access
 link-aggregation mode dynamic
 lacp edge-port
interface Ten-GigabitEthernet1/0/5:1
 port link-mode bridge
 port link-type access
 port link-aggregation group 11
interface Ten-GigabitEthernet2/0/5:1
 port link-mode bridge
 port link-type access
 port link-aggregation group 11
Table 2, shown below, displays the full list of Bridge Aggregations and interfaces defined on the HPE FF 5930 for the Red Hat OpenShift Container Platform 10Gb networks.
| Network Aggregations | Interface 1 | Interface 2 |
|---|---|---|
| Bridge-Aggregation11 | Ten-GigabitEthernet1/0/5:1 | Ten-GigabitEthernet2/0/5:1 |
| Bridge-Aggregation21 | Ten-GigabitEthernet1/0/5:3 | Ten-GigabitEthernet2/0/5:3 |
| Bridge-Aggregation31 | Ten-GigabitEthernet1/0/6:1 | Ten-GigabitEthernet2/0/6:1 |
| Bridge-Aggregation41 | Ten-GigabitEthernet1/0/6:3 | Ten-GigabitEthernet2/0/6:3 |
| Bridge-Aggregation51 | Ten-GigabitEthernet1/0/7:1 | Ten-GigabitEthernet2/0/7:1 |
| Bridge-Aggregation61 | Ten-GigabitEthernet1/0/7:3 | Ten-GigabitEthernet2/0/7:3 |
| Bridge-Aggregation71 | Ten-GigabitEthernet1/0/8:1 | Ten-GigabitEthernet2/0/8:1 |
| Bridge-Aggregation81 | Ten-GigabitEthernet1/0/8:3 | Ten-GigabitEthernet2/0/8:3 |
| Bridge-Aggregation91 | Ten-GigabitEthernet1/0/9:1 | Ten-GigabitEthernet2/0/9:1 |
Table 2: Bridge Aggregations
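Because the nine aggregation groups in Table 2 follow a regular pattern (groups 11 through 91 over split ports 5 through 9 on both switch members), the configuration lends itself to templating. The following Python sketch is not part of this solution's playbooks; it simply illustrates generating the Comware-style stanzas from the port pairs.

```python
# Illustrative generator for the Table 2 bridge-aggregation config.
# Not an HPE tool - a convenience sketch of how the stanzas could be templated.

def lag_config(group, iface1, iface2):
    """Render a Comware-style config stanza for one LACP bridge aggregation."""
    lines = [
        f"interface Bridge-Aggregation{group}",
        " port link-type access",
        " link-aggregation mode dynamic",
        " lacp edge-port",
    ]
    for iface in (iface1, iface2):
        lines += [
            f"interface {iface}",
            " port link-mode bridge",
            " port link-type access",
            f" port link-aggregation group {group}",
        ]
    return "\n".join(lines)

# Split interfaces :1 and :3 on ports 5-9 of both switch members;
# Table 2 stops at 1/0/9:1, so keep only the first nine pairs.
pairs = [(port, sub) for port in range(5, 10) for sub in (1, 3)][:9]
configs = [
    lag_config(11 + 10 * i,
               f"Ten-GigabitEthernet1/0/{port}:{sub}",
               f"Ten-GigabitEthernet2/0/{port}:{sub}")
    for i, (port, sub) in enumerate(pairs)
]
print(configs[0])
```

Running the sketch prints the Bridge-Aggregation11 stanza, matching the configuration example above.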
2.5. Disk Configuration
Master and Infrastructure nodes
The Red Hat OpenShift Container Platform Master and Infrastructure nodes are configured with two 1.2 TB SAS hard drives. These drives are configured as a RAID 1 volume with a single logical drive on the HPE Smart Array controller. The array and logical drive are defined in the HPE OneView server profile applied to these servers and are created when the server profile is applied.
Red Hat Container-native storage Disk Configuration
The Red Hat Container-native storage in this solution is comprised of three HPE ProLiant DL380 Gen9 servers, each with twelve 1.2 TB drives and two 400 GB SSD drives. The twelve 1.2 TB drives are configured as a RAID 6 volume used for Gluster storage. The two 400 GB SSD drives are configured as a RAID 1 volume with a single logical drive used for the operating system. The arrays and logical drives are defined in the HPE OneView server profiles for the Container-native storage servers and are created when the server profiles are applied to the servers.
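In a OneView server profile, these arrays are expressed declaratively in the profile's local storage section. A rough sketch of how the Container-native storage profile might define them is shown below; the attribute names follow the OneView REST schema, but the logical drive names and exact values are illustrative, not taken from this solution.

```json
"localStorage": {
  "controllers": [{
    "deviceSlot": "Embedded",
    "mode": "RAID",
    "initialize": true,
    "logicalDrives": [
      { "name": "os",      "raidLevel": "RAID1", "numPhysicalDrives": 2,  "bootable": true  },
      { "name": "gluster", "raidLevel": "RAID6", "numPhysicalDrives": 12, "bootable": false }
    ]
  }]
}
```

Because the storage layout lives in the profile rather than in per-server controller settings, reapplying the profile to replacement hardware recreates the same arrays.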
2.6. Software Overview
This section provides an overview of the software used in this reference architecture.
Red Hat OpenShift Container Platform 3.5
Red Hat OpenShift Container Platform, based on Kubernetes, is used to provide a platform to build and orchestrate the deployment of container-based applications. The platform can run pre-existing container images or build custom ones directly from source.
Red Hat Satellite
Red Hat Satellite will be used to kickstart the HPE ProLiant DL servers and install Red Hat Enterprise Linux 7.3 on each of them. Satellite will also provide the software repositories for installing OpenShift Container Platform 3.5 and its required dependencies. Satellite will be configured to provide DNS and DHCP services for the OpenShift Container Platform environment.
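As an illustration, registering a host for kickstart can also be expressed with Satellite's hammer CLI. The host name, host group, MAC, and IP below are placeholders, not values from this solution.

```
hammer host create \
  --name "ocp-master-01" \
  --hostgroup "RHEL7-OCP" \
  --mac "aa:bb:cc:dd:ee:ff" \
  --ip "192.168.10.11" \
  --build true
```

With `--build true`, Satellite marks the host for provisioning so that its next PXE boot installs the operating system defined by the host group.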
Red Hat Gluster Container-native storage
Red Hat Gluster Container-native storage will provide persistent storage for container-based applications. Red Hat Gluster Container-native storage will be deployed on the HPE ProLiant DL380 servers, and its deployment and configuration will be automated using Ansible playbooks.
Red Hat Ansible Tower
Red Hat Ansible Tower will leverage HPE OneView Ansible playbooks to register the HPE ProLiant DL servers with HPE OneView and apply OneView Server Profiles for the various server roles used in an OpenShift Container Platform deployment; OpenShift Master, OpenShift Application Nodes, and Infrastructure nodes. Ansible Tower will then deploy Red Hat OpenShift Container Platform 3.5 on HPE ProLiant DL servers.
HPE OneView 3.1
HPE OneView 3.1 is used for management and administration of the HPE ProLiant DL servers. OneView is used to manage the ProLiant DL server firmware levels, monitor the health of the servers, and provide remote administration and access to the physical servers. In this reference architecture, the OneView 3.1 management appliance is running on a Red Hat Enterprise Linux 7.3 KVM server. This is the first release of HPE OneView that is supported to run on KVM.
HPE ProLiant Service Pack
HPE ProLiant server firmware and software updates are maintained in the HPE ProLiant Service Pack. The servers were updated with HPE ProLiant Service Pack 2017.04.0. The ProLiant Service Pack will be maintained as a firmware baseline in HPE OneView and applied to an HPE ProLiant DL server when a server profile is applied.
Table 3, shown below, lists the firmware versions installed on the ProLiant DL servers used in this reference architecture.
| Firmware | Version |
|---|---|
| iLO | 2.50 Sep 23 2016 |
| System ROM | P89 v2.40 (02/17/2017) |
| Redundant System ROM | P89 v2.30 (09/13/2016) |
| HP Ethernet 1Gb 4-port 331i Adapter - NIC | 17.4.41 |
| HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter | 1.1446.0 |
| HPE Ethernet 10Gb 2-port 562SFP+ Adapter | XL710 FW ver |
| HPE Smart Storage Battery 1 Firmware | 1.1 |
| Intelligent Platform Abstraction Data | 24.02 |
| Intelligent Provisioning | N/A |
| Power Management Controller Firmware | 1.0.9 |
| Power Management Controller FW Bootloader | 1.0 |
| SAS Programmable Logic Device | Version 0x02 |
| Server Platform Services (SPS) Firmware | 3.1.3.21.0 |
| Smart Array P440ar Controller | 4.52 |
| System Programmable Logic Device | Version 0x34 |
Table 3: Firmware
The HPE ProLiant Service Pack can be found at:
http://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/spp/index.aspx
