Chapter 1. Executive Summary

The purpose of this reference architecture is to provide a performance and scaling methodology built around common benchmark workloads. With this methodology, end users can identify bottlenecks that may arise when scaling an OpenStack environment, learn to create benchmarking scenarios to optimize an OpenStack private cloud environment, and analyze the results of those scenarios. This reference environment builds on the "Deploying Red Hat Enterprise Linux OpenStack Platform 7 with RHEL-OSP director 7.1" reference architecture. For best practices in deploying Red Hat Enterprise Linux OpenStack Platform 7 (RHEL-OSP 7), please visit: http://red.ht/1PDxLxp

This reference architecture is best suited for system, storage, and OpenStack administrators deploying RHEL-OSP 7 who intend to use open source tools to help scale their private cloud environment. The topics covered include:

  • Benchmarking using Rally1 scenarios (a minimal task sketch follows this list)
  • Analyzing the different Rally scenario benchmarking results
  • Using the open source project Browbeat2 to help identify potential performance issues
  • Detailing the best practices to optimize a Red Hat Enterprise Linux OpenStack Platform 7 environment
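
As a preview of the Rally workflow, the following is a minimal, hypothetical task sketch. NovaServers.boot_and_delete_server is one of Rally's standard scenarios, but the flavor and image names shown are placeholders for values in your environment:

    # Hypothetical Rally task file: boot ten servers, two at a time,
    # deleting each server after it becomes active.
    cat > boot-and-delete.json <<'EOF'
    {
      "NovaServers.boot_and_delete_server": [
        {
          "args": {
            "flavor": { "name": "m1.small" },
            "image": { "name": "cirros" }
          },
          "runner": { "type": "constant", "times": 10, "concurrency": 2 }
        }
      ]
    }
    EOF
    rally task start boot-and-delete.json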

1 https://wiki.openstack.org/wiki/Rally

2 https://github.com/jtaleric/browbeat

Chapter 2. Reference Architecture Environment

This chapter describes the components used to deploy and scale Red Hat Enterprise Linux OpenStack Platform 7 on Red Hat Enterprise Linux 7.

2.1. Reference Architecture Overview

Figure 2.1, “Reference Architecture Diagram” shows the same configuration used in Deploying Red Hat Enterprise Linux OpenStack Platform 7 with RHEL-OSP director 7.1 by Jacob Liberman3, with an additional server added to host the Rally VM.

3 http://red.ht/1PDxLxp

Figure 2.1. Reference Architecture Diagram

Note

Section 2.1.2, “Network Topology” describes the networking components in detail.

The following Section 2.1.1, “Server Roles” and Section 2.1.2, “Network Topology” detail the environment as described in Deploying Red Hat Enterprise Linux OpenStack Platform 7 with RHEL-OSP director 7.1. While these two sections can be found in the original reference architecture paper, they are included here for the reader's convenience.

2.1.1. Server Roles

As depicted in Figure 2.1, “Reference Architecture Diagram”, the use case requires 13 bare metal servers deployed with the following roles:

  • 1 undercloud server
  • 3 cloud controllers
  • 4 cloud compute nodes
  • 4 Ceph storage servers
  • 1 benchmarking server hosting the Rally VM

Servers are assigned to roles based on their hardware characteristics.
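
In RHEL-OSP 7 director deployments, role assignment is typically expressed by tagging each registered Ironic node with a profile capability that matches a deployment flavor. The following is an illustrative sketch of that tagging step; the node UUID is a placeholder:

    # Tag a registered node so director schedules it as a cloud controller;
    # <node-uuid> is a placeholder for the node's Ironic UUID.
    ironic node-update <node-uuid> add \
        properties/capabilities='profile:control,boot_option:local'
    # Confirm the capability was recorded on the node.
    ironic node-show <node-uuid> | grep -i capabilities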

Table 2.1. Server hardware by role

  Role                  Count   Model
  Undercloud            1       Dell PowerEdge R720xd
  Cloud controller      3       Dell PowerEdge M520
  Compute node          4       Dell PowerEdge M520
  Ceph storage server   4       Dell PowerEdge R510
  Benchmarking server   1       Dell PowerEdge M520

Appendix C, Hardware specifications lists hardware specifics for each server model.

2.1.2. Network Topology

Figure 2.1, “Reference Architecture Diagram” depicts the network topology of this reference architecture.

Each server has two 1 Gb interfaces (nic1 and nic2) and two 10 Gb interfaces (nic3 and nic4) used for network isolation, which segments OpenStack communication by traffic type.

The following network traffic types are isolated:

  • Provisioning
  • Internal API
  • Storage
  • Storage Management
  • Tenant
  • External

There are six isolated networks but only four physical interfaces. Two networks are isolated on each physical 10 Gb interface using a combination of tagged and native VLANs.
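
To illustrate how one physical interface carries two networks, the following is a hypothetical sketch in plain iproute2 commands, using a controller's nic3 as the example: Tenant traffic rides the native (untagged) VLAN 4044, while Storage Mgmt traffic is tagged with VLAN 4043. The host addresses are placeholders; in practice the director renders this configuration from its network templates rather than from manual commands:

    # Native VLAN: untagged frames on nic3 carry the Tenant network.
    ip addr add 172.16.4.10/24 dev nic3
    # Tagged VLAN 4043 on the same interface carries Storage Mgmt.
    ip link add link nic3 name nic3.4043 type vlan id 4043
    ip link set nic3.4043 up
    ip addr add 172.16.3.10/24 dev nic3.4043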

Table 2.2. Network Isolation

  Role                  Interface   VLAN ID   Network        VLAN Type   CIDR
  Undercloud            nic1        168       External       Native      10.19.137.0/21
                        nic2        4040      Provisioning   Native      192.0.2.0/24
  Control               nic1        168       External       Native      10.19.137.0/21
                        nic2        4040      Provisioning   Native      192.0.2.0/24
                        nic3        4043      Storage Mgmt   Tagged      172.16.3.0/24
                        nic3        4044      Tenant         Native      172.16.4.0/24
                        nic4        4041      Internal API   Tagged      172.16.1.0/24
                        nic4        4042      Storage        Native      172.16.2.0/24
  Compute               nic2        4040      Provisioning   Native      192.0.2.0/24
                        nic3        4044      Tenant         Native      172.16.4.0/24
                        nic4        4041      Internal API   Tagged      172.16.1.0/24
                        nic4        4042      Storage        Native      172.16.2.0/24
  Ceph storage          nic2        4040      Provisioning   Native      192.0.2.0/24
                        nic3        4043      Storage Mgmt   Tagged      172.16.3.0/24
                        nic4        4042      Storage        Native      172.16.2.0/24
  Benchmarking server   nic1        168       External       Native      10.19.137.0/21
                        nic2        4041      Internal API   Tagged      172.16.1.0/24
                        nic3        4044      Tenant         Native      172.16.4.0/24
  Rally VM              nic1        168       External       Native      10.19.137.0/21
                        nic2        4041      Internal API   Tagged      172.16.1.0/24
                        nic3        -         demo_net*      -           172.16.5.0/24

Note

All switch ports must be added to their respective VLANs prior to deploying the overcloud.

Note

demo_net carries the traffic that flows through the Tenant network when VMs obtain an IP address.
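
For reference, a network such as demo_net can be created with the neutron CLI shipped with RHEL-OSP 7. The commands below are an illustrative sketch; the subnet name is a hypothetical choice, while the CIDR matches Table 2.2:

    # Create the tenant network and its subnet (names are illustrative).
    neutron net-create demo_net
    neutron subnet-create demo_net 172.16.5.0/24 --name demo_subnet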