Chapter 11. Design Scenario

This chapter discusses a hypothetical region and zone design for a new CloudForms installation, based on the topics discussed in this guide.

11.1. Environment to be Managed

CloudForms is to be installed to manage six OpenShift Container Platform clusters within a large organization. These clusters represent the development, test, and production application environments in each of two geographic regions - Europe and North America.

11.1.1. Virtual Infrastructure

The organization’s virtual infrastructure in each geographic region is a Red Hat Virtualization (RHV) 4.2 installation. The OpenShift Container Platform nodes are virtual machines in these RHV environments.

11.1.2. Network Factors

Each geographical site has a single datacenter. There is LAN-speed (<1ms) latency between all points within these datacenters, and 25ms latency between datacenters.

11.1.3. Required CloudForms Functionality

The following capabilities of CloudForms are required:

  • Inventory/Insight of all OpenShift Container Platform components such as projects, pods, containers and nodes
  • SmartState Analysis and OpenSCAP scanning of container images
  • Capacity and Utilization statistics from Hawkular
  • Reporting, both regionally and globally

Management of the RHV infrastructure is not required.

11.2. Design Process

The design process usually starts with sizing the region: how many nodes, pods, and containers will be managed in total, projected for the next 1-2 years? For this design scenario, the projected numbers of objects to be managed over the next 2 years are shown in Table 11.1, “Provider Object Numbers - 2 Year Projection”.

Table 11.1. Provider Object Numbers - 2 Year Projection

OpenShift cluster       Nodes    Pods     Containers    Images
Europe - Dev               10      500           500      5000
Europe - Test              10     2000          2000      4000
Europe - Prod              50    10000         10000     20000
North America - Dev        10      600          1000      6000
North America - Test       10     1750          1750      3500
North America - Prod       40     7500          7500     15000

Based on the maximum suggested region sizes shown in Section 3.1.1.2, “Sizing Estimation”, it can be estimated that four subordinate regions will be required, each reporting into a single master region. The regions will be as follows:

  • Production (US) Region managing the US Production OpenShift Container Platform cluster
  • Dev/Test (US) Region managing the US Development and Test OpenShift Container Platform clusters
  • Production (EMEA) Region managing the Europe Production OpenShift Container Platform cluster
  • Dev/Test (EMEA) Region managing the Europe Development and Test OpenShift Container Platform clusters
  • Master Region
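
For reference, this split can be checked by aggregating the Table 11.1 projections per proposed region. The following is a minimal Python sketch for illustration only: MAX_REGION_OBJECTS is a placeholder for the maximum suggested region size from Section 3.1.1.2 (no value is defined in this chapter), and whether images count towards region sizing should be confirmed against that section.

```python
# Illustrative aggregation of the 2-year projections (Table 11.1) per
# proposed subordinate region. MAX_REGION_OBJECTS is a placeholder;
# substitute the maximum suggested region size from Section 3.1.1.2.

clusters = {                       # (nodes, pods, containers, images)
    "Europe - Dev":         (10,   500,   500,  5000),
    "Europe - Test":        (10,  2000,  2000,  4000),
    "Europe - Prod":        (50, 10000, 10000, 20000),
    "North America - Dev":  (10,   600,  1000,  6000),
    "North America - Test": (10,  1750,  1750,  3500),
    "North America - Prod": (40,  7500,  7500, 15000),
}

regions = {
    "Production (EMEA)": ["Europe - Prod"],
    "Dev/Test (EMEA)":   ["Europe - Dev", "Europe - Test"],
    "Production (US)":   ["North America - Prod"],
    "Dev/Test (US)":     ["North America - Dev", "North America - Test"],
}

MAX_REGION_OBJECTS = None          # placeholder - value from Section 3.1.1.2

for region, members in regions.items():
    total = sum(sum(clusters[c]) for c in members)
    note = ""
    if MAX_REGION_OBJECTS is not None and total > MAX_REGION_OBJECTS:
        note = "  <- exceeds suggested maximum, split further"
    print(f"{region:20} {total:6d} projected objects{note}")
```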

11.2.1. Network Latency

Latency from the worker appliances to the VMDB should be LAN speed, around 1ms or less. This dictates where the VMDB servers should be situated, and also the optimum location of the worker CFME appliances. For this design the intra-datacenter latency is under 1ms, so the VMDB servers and CFME appliances can be placed anywhere in the same datacenter as the OpenShift Container Platform cluster they manage.

11.2.2. VMDB Servers

The optimum VMDB server for this design will be a CFME 5.9.2 appliance configured as a standalone PostgreSQL server. Although database high availability (HA) has not been specified as an initial requirement, installing a standalone database VM appliance allows for HA to be configured in future if required.

The database servers will be installed in the Red Hat Virtualization virtual infrastructure in their respective data centers. A 500 GByte disk, to be used as the database volume, will be presented to each subordinate region's database server from a datastore backed by iSCSI storage. A 1 TByte iSCSI disk will be presented to the master region's database server for use as the database volume.

The database servers will each have 8 GBytes of memory and a PostgreSQL shared_buffers region of 2 GBytes. A 2 GByte hugepage region will be created for PostgreSQL to use.
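
As a quick check on these figures, the sketch below (Python, illustrative only) converts the 2 GByte shared_buffers target into the equivalent number of hugepages, assuming the default 2 MiB hugepage size on x86_64. PostgreSQL's total shared memory segment is slightly larger than shared_buffers, so in practice a small margin above this count is usually allowed.

```python
# Illustrative sizing check: number of 2 MiB hugepages needed to back a
# 2 GByte PostgreSQL shared_buffers region (default hugepage size on x86_64).

SHARED_BUFFERS_MIB = 2048    # 2 GBytes, as specified for the VMDB servers
HUGEPAGE_SIZE_MIB = 2        # default x86_64 hugepage size

nr_hugepages = SHARED_BUFFERS_MIB // HUGEPAGE_SIZE_MIB
print(f"vm.nr_hugepages = {nr_hugepages}")   # 1024 pages for a 2 GByte region
```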

The two largest regions by number of managed objects will be the Europe and North America Production regions, which will contain six and five CFME appliances respectively (including the WebUI zone appliances). The table in Appendix A, Database Appliance CPU Count, shows that the database servers for these regions will need 4 vCPUs to keep the idle CPU load under 20%. The database servers for the smaller subordinate regions will need 2 vCPUs. Although the overall processing load for a master region is generally lower than for a region managing active providers, the database server for the master region will be given 4 vCPUs.

11.2.3. Zones

A zone should be created per provider (OpenShift Container Platform cluster). There should be a minimum of 2 CFME appliances per zone for resilience, and zones should not span networks. The CFME appliances in each zone will be hosted by the RHV virtual infrastructure in the appropriate data center.

For this design scenario the zones listed in Table 11.2, “Regions and Zones” are proposed.

Table 11.2. Regions and Zones

Region                      Region ID    Zones
Production (EMEA) Region    1            WebUI Zone, Production OCP Zone
Dev/Test (EMEA) Region      2            WebUI Zone, Development OCP Zone, Test OCP Zone
Production (US) Region      3            WebUI Zone, Production OCP Zone
Dev/Test (US) Region        4            WebUI Zone, Development OCP Zone, Test OCP Zone
Master                      99           WebUI/Reporting Zone

11.2.3.1. WebUI Zones

A WebUI zone containing 2 CFME appliances will be created in each subordinate region, with each appliance running the following server roles:

  • Automation Engine (to process zone events)
  • Reporting (if logged-on users will be running their own reports)
  • User Interface
  • Web Services
  • Websocket

The CFME appliances in this zone will be hosted by the RHV virtual infrastructure in the appropriate data center, in a VLAN accessible from user workstations. User access to them will be via a hardware load balancer and a common fully-qualified domain name.

11.2.3.2. OpenShift Container Platform Zones

The OCP zones will contain the provider-specific ("worker") CFME appliances. Section 3.2.2, “Number of CFME Appliances or Pods in a Zone” suggests a scaling factor of 1 C&U Data Collector worker for every 1500 nodes, pods, and/or containers. Scaling to 4 C&U Data Collector workers per CFME appliance therefore allows for 6000 active objects per appliance, although there should still be a minimum of 2 appliances per zone. Table 11.3, “Active Nodes/Pods/Containers” gives the proposed allocation of CFME appliances per zone.

Table 11.3. Active Nodes/Pods/Containers

Zone                    Nodes/Pods/Containers    CFME appliances required
Europe - Dev                     1010            2
Europe - Test                    4010            2
Europe - Prod                   20050            4
North America - Dev              1610            2
North America - Test             3510            2
North America - Prod            15040            3
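
The appliance counts in Table 11.3 follow directly from the scaling guidance above: 1 C&U Data Collector worker per 1500 active objects, 4 Data Collector workers per appliance (6000 active objects per appliance), and a minimum of 2 appliances per zone. A minimal Python sketch of that calculation, using the node, pod, and container figures from Table 11.1:

```python
import math

# Active objects per zone = nodes + pods + containers, matching the scaling
# factor in Section 3.2.2 (images are not included in that count).
zones = {
    "Europe - Dev":          10 +   500 +   500,
    "Europe - Test":         10 +  2000 +  2000,
    "Europe - Prod":         50 + 10000 + 10000,
    "North America - Dev":   10 +   600 +  1000,
    "North America - Test":  10 +  1750 +  1750,
    "North America - Prod":  40 +  7500 +  7500,
}

OBJECTS_PER_COLLECTOR = 1500      # scaling factor from Section 3.2.2
COLLECTORS_PER_APPLIANCE = 4      # increased C&U Data Collector worker count
MIN_APPLIANCES_PER_ZONE = 2       # resilience requirement

per_appliance = OBJECTS_PER_COLLECTOR * COLLECTORS_PER_APPLIANCE   # 6000

for zone, active in zones.items():
    required = max(MIN_APPLIANCES_PER_ZONE, math.ceil(active / per_appliance))
    print(f"{zone:22s} {active:6d} active objects -> {required} appliance(s)")
```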

Each of the CFME appliances in these zones should run the following server roles:

  • Automation Engine
  • C&U Coordinator, C&U Data Collector, and C&U Data Processor (the 3 C&U roles)
  • Provider Inventory
  • Provider Operations
  • Event Monitor
  • SmartProxy
  • SmartState Analysis
  • Git Repositories Owner
  • User Interface
  • Web Services
  • Websocket

The CFME appliances in these zones will also be hosted by the RHV virtual infrastructure in the appropriate data center. The worker count for the C&U Data Collectors should be increased to 4 on each CFME appliance; all other worker counts should be left at their defaults (pending further in-operation worker tuning). The :hawkular_force_legacy parameter should be set to true in the Advanced settings on each CFME appliance.

The OpenShift providers will be moved into these zones. Further appliances may need to be added to these zones if the number of managed objects increases.

11.2.3.3. Master Region WebUI/Reporting Zone

A WebUI/Reporting zone containing 2 CFME appliances will be created in the master region, each appliance running the following server roles:

  • Automation Engine (to process zone events)
  • Reporting (if logged-on users will be running their own reports)
  • User Interface
  • Web Services
  • Websocket

The CFME appliances in this zone will be hosted by the RHV virtual infrastructure in the Europe data center, in a VLAN accessible from user workstations. User access to them will be via a hardware load balancer and a common fully-qualified domain name.

The proposed zone design is shown in Figure 11.1, “Regions and Zones”.

Figure 11.1. Regions and Zones
