Appendix C. Environment Details

This appendix of the reference architecture describes the environment used to execute the use case in the Red Hat Systems Engineering lab.

The servers in this reference architecture are deployed in the following roles.

Table C.1. Server hardware by role

Role                                   Count   Model
Red Hat OpenStack Platform director    1       Virtual machine on Dell PowerEdge M630*
OpenStack Controller/Ceph MON          3       Dell PowerEdge M630
OpenStack Compute/Ceph OSD             4       Dell PowerEdge R730XD

* It is not possible to run the Red Hat OpenStack Platform director virtual machine on the same systems that host the OpenStack Controllers/Ceph MONs or OpenStack Computes/OSDs.

C.1. Red Hat OpenStack Platform director

The undercloud is a server used exclusively by the OpenStack operator to deploy, scale, manage, and life-cycle the overcloud, the cloud that provides services to users. Red Hat’s undercloud product is Red Hat OpenStack Platform director.

The undercloud system hosting Red Hat OpenStack Platform director is a virtual machine running Red Hat Enterprise Linux 7.3 with the following specifications:

  • 16 virtual CPUs
  • 16GB of RAM
  • 40GB of hard drive space
  • Two virtual 1 Gigabit Ethernet (GbE) connections

The hypervisor which hosts this virtual machine is a Dell M630 with the following specifications:

  • Two Intel E5-2630 v3 @ 2.40 GHz CPUs
  • 128GB of RAM
  • Two 558GB SAS hard disks configured in RAID1
  • Two 1GbE connections
  • Two 10GbE connections

The hypervisor runs Red Hat Enterprise Linux 7.3 and uses the KVM and Libvirt packages shipped with Red Hat Enterprise Linux to host virtual machines.
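As an illustration, the director virtual machine's resource allocation can be confirmed from the hypervisor through the libvirt Python bindings (the libvirt-python package shipped with Red Hat Enterprise Linux). The following sketch looks up the VM by name and prints its vCPU count and memory; the domain name "director" is an illustrative assumption, not a name used in this document.

    #!/usr/bin/env python
    # Sketch: confirm the director VM's resources from the KVM hypervisor
    # using the libvirt Python bindings. The domain name "director" is an
    # illustrative assumption, not a name given in this document.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
    try:
        dom = conn.lookupByName("director")
        # info() returns [state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs]
        state, max_mem_kib, _mem_kib, vcpus, _cpu_time = dom.info()
        print("Domain:  %s" % dom.name())
        print("vCPUs:   %d" % vcpus)
        print("Memory:  %d MiB" % (max_mem_kib // 1024))
        print("Running: %s" % (state == libvirt.VIR_DOMAIN_RUNNING))
    finally:
        conn.close()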

C.2. Overcloud Controller / Ceph Monitor

Controller nodes are responsible for providing endpoints for REST-based API queries to the majority of the OpenStack services, including compute, image, identity, block, network, and data processing. The controller nodes also manage authentication, send messages to all systems through a message queue, and store the state of the cloud in a database. In a production deployment, the controller nodes should be run as a highly available cluster.

Ceph Monitor nodes, which are co-located with the controller nodes in this deployment, maintain the overall health of the Ceph cluster by keeping cluster map state, including the Monitor map, OSD map, Placement Group map, and CRUSH map. Monitors receive state information from other components to maintain these maps and circulate them to the other Monitor nodes and to the OSD nodes.
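Because the controllers also act as Ceph Monitors, monitor quorum can be checked from any of them. The following Python sketch shells out to the ceph command-line client and reports which monitors are currently in quorum; it assumes the ceph CLI, /etc/ceph/ceph.conf, and an admin keyring are available on the node where it runs.

    #!/usr/bin/env python
    # Sketch: report which Ceph Monitors are currently in quorum. Assumes
    # the `ceph` CLI, /etc/ceph/ceph.conf, and an admin keyring are present.
    import json
    import subprocess

    # `ceph quorum_status` describes the monitor map and the current quorum.
    raw = subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
    status = json.loads(raw)

    known = [m["name"] for m in status["monmap"]["mons"]]
    print("Monitors in the monitor map:  %s" % ", ".join(known))
    print("Monitors currently in quorum: %s" % ", ".join(status["quorum_names"]))
    print("Quorum leader:                %s" % status["quorum_leader_name"])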

C.2.1. Overcloud Controller / Ceph Monitor Servers for this Reference Architecture

The servers which host the OpenStack Controller and Ceph Monitor services are three Dell M630s with the following specifications:

  • Two Intel E5-2630 v3 @ 2.40 GHz CPUs
  • 128GB of RAM
  • Two 558GB SAS hard disks configured in RAID1
  • Four 1GbE connections
  • Four 10GbE connections

C.3. Overcloud Compute / Ceph OSD

Converged Compute/OSD nodes are responsible for running virtual machine instances after they are launched and for holding all data generated by OpenStack. They must support hardware virtualization and provide enough CPU cycles for the instances they host. They must also have enough memory to support the requirements of the virtual machine instances they host while reserving enough memory for each Ceph OSD. Ceph OSD nodes must also provide enough usable hard drive space for the data required by the cloud.
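The memory reservation mentioned above is typically expressed to Nova as reserved_host_memory_mb. The Python sketch below shows one way to estimate it for a converged node; the 3 GB-per-OSD and 0.5 GB-per-guest overhead figures and the assumed guest count are illustrative, not values prescribed by this reference architecture.

    #!/usr/bin/env python
    # Sketch: estimate Nova's reserved_host_memory_mb for a converged
    # Compute/Ceph OSD node. The per-OSD and per-guest overhead figures are
    # illustrative assumptions, not values prescribed by this document.

    GB = 1024  # MB per GB

    def reserved_host_memory_mb(num_osds, avg_guests,
                                osd_gb=3, guest_overhead_gb=0.5):
        """Memory (MB) the hypervisor should withhold from guest scheduling."""
        osd_reservation = num_osds * osd_gb * GB
        guest_overhead = avg_guests * guest_overhead_gb * GB
        return int(osd_reservation + guest_overhead)

    # The R730XD nodes in this environment carry twelve OSD data disks each;
    # 30 is an assumed average guest count per node.
    reserved = reserved_host_memory_mb(num_osds=12, avg_guests=30)
    print("Suggested reserved_host_memory_mb: %d" % reserved)
    # With 256 GB of RAM per node, the remainder is schedulable for instances.
    print("Memory left for instances: %d MB" % (256 * GB - reserved))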

C.3.1. Overcloud Compute / Ceph OSD Servers for this Reference Architecture

The servers which host the OpenStack Compute and Ceph OSD services are four Dell R730XDs with the following specifications:

  • Two Intel E5-2683 v3 @ 2.00GHz CPUs
  • 256GB of RAM
  • Two 277GB SAS hard disks configured in RAID1
  • Twelve 1117GB SAS hard disks
  • Three 400GB SATA SSD disks
  • Two 1GbE connections (only one is used)
  • Two 10GbE connections

Aside from the RAID1 pair used for the operating system, none of the other disks use RAID, in keeping with Ceph recommended practice.
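As a rough check that the cluster provides enough usable hard drive space, the Python sketch below computes raw and usable Ceph capacity from the disk counts above. It assumes one OSD per 1117GB SAS data disk, the SSDs serving as journals rather than data devices, and three-way replication; the replica count is an assumption, not a value stated in this appendix.

    #!/usr/bin/env python
    # Sketch: rough usable-capacity estimate for the Ceph cluster formed by
    # the four R730XD Compute/OSD nodes. Assumes one OSD per 1117 GB data
    # disk, SSDs used only for journals, and 3x replication (an assumption,
    # not a value stated in this appendix).

    NODES = 4
    OSD_DISKS_PER_NODE = 12
    DISK_GB = 1117
    REPLICAS = 3

    raw_gb = NODES * OSD_DISKS_PER_NODE * DISK_GB
    usable_gb = raw_gb // REPLICAS  # capacity remaining after replication

    print("Raw Ceph capacity:    %d GB (~%.1f TB)" % (raw_gb, raw_gb / 1024.0))
    print("Usable at %dx replica: %d GB (~%.1f TB)" % (REPLICAS, usable_gb, usable_gb / 1024.0))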

C.4. Network Environment

C.4.1. Layer 1

The servers used in this reference architecture are physically connected as follows:

  • Red Hat OpenStack Platform director:

    • 1GbE to Provisioning Network
    • 1GbE to External Network
  • OpenStack Controller/Ceph Monitor:

    • 1GbE to Provisioning Network
    • 1GbE to External Network
    • 1GbE to Internal API Network
    • 10GbE to Cloud VLANs
    • 10GbE to Storage VLANs
  • OpenStack Compute/Ceph OSD:

    • 1GbE to Provisioning Network
    • 1GbE to Internal API Network
    • 10GbE to Cloud VLANs
    • 10GbE to Storage VLANs

A diagram illustrating the above can be seen in Section 5, Figure 1, Network Separation Diagram.

C.4.2. Layers 2 and 3

The provisioning network is implemented with the following VLAN and range.

  • VLAN 4048 (hci-pxe) 192.168.1.0/24

The internal API network is implemented with the following VLAN and range.

  • VLAN 4049 (hci-api) 192.168.2.0/24

The Cloud VLAN networks are trunked into the first 10GbE interface of the OpenStack Controllers/Ceph Monitors and OpenStack Computes/Ceph OSDs. A trunk is used so that tenant VLAN networks may be added in the future; by default, the deployment runs VXLAN networks on top of the following network.

  • VLAN 4050 (hci-tenant) 192.168.3.0/24

The Storage VLAN networks are trunked into the second 10GbE interface of the OpenStack Controllers/Ceph Monitors and OpenStack Computes/Ceph OSDs. The trunk contains the following two VLANs and their network ranges.

  • VLAN 4046 (hci-storage-pub) 172.16.1.0/24
  • VLAN 4047 (hci-storage-pri) 172.16.2.0/24

The external network is implemented upstream of the switches that implement the above.
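The VLAN and subnet plan above can be captured in a few lines of Python for sanity checking. The sketch below uses only the standard library's ipaddress module (Python 3) to list each network and confirm that none of the subnets overlap; the VLAN IDs, names, and CIDRs are taken from this section, while the script itself is purely illustrative.

    #!/usr/bin/env python3
    # Sketch: sanity-check the VLAN/subnet plan from this section using the
    # standard-library ipaddress module. VLAN IDs, names, and CIDRs come from
    # the text above; the script itself is illustrative.
    import ipaddress
    from itertools import combinations

    NETWORKS = {
        "hci-storage-pub":          (4046, "172.16.1.0/24"),
        "hci-storage-pri":          (4047, "172.16.2.0/24"),
        "hci-pxe (provisioning)":   (4048, "192.168.1.0/24"),
        "hci-api (internal API)":   (4049, "192.168.2.0/24"),
        "hci-tenant (cloud VLANs)": (4050, "192.168.3.0/24"),
    }

    subnets = {}
    for name, (vlan, cidr) in sorted(NETWORKS.items(), key=lambda kv: kv[1][0]):
        net = ipaddress.ip_network(cidr)
        subnets[name] = net
        print("VLAN %d  %-26s %s (%d usable hosts)"
              % (vlan, name, net, net.num_addresses - 2))

    # Ensure no two ranges overlap.
    for (a, na), (b, nb) in combinations(subnets.items(), 2):
        if na.overlaps(nb):
            raise SystemExit("Overlap detected: %s and %s" % (a, b))
    print("No overlapping subnets.")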