Chapter 2. Planning your undercloud

2.1. Containerized undercloud

The undercloud is the node that controls the configuration, installation, and management of your final OpenStack Platform environment, which is called the overcloud. The undercloud itself uses OpenStack Platform components in the form of containers to create a toolset called OpenStack Platform director. This means the undercloud pulls a set of container images from a registry source, generates configuration for the containers, and runs each OpenStack Platform service as a container. As a result, the undercloud provides a containerized set of services you can use as a toolset for creating and managing your overcloud.

Because both the undercloud and the overcloud use containers, both use the same architecture to pull, configure, and run containers. This architecture is based on the OpenStack Orchestration service (heat) for provisioning nodes and uses Ansible for configuring services and containers. It is useful to have some familiarity with heat and Ansible to help you troubleshoot issues you might encounter.
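For example, after the undercloud installation completes, you can confirm that director services run as containers. This is a minimal sketch, assuming a Docker-based Red Hat OpenStack Platform 14 undercloud and the standard docker CLI:

    $ sudo docker ps --format "table {{.Names}}\t{{.Status}}"

Container names in the output, such as heat_api or ironic_conductor, correspond to the individual OpenStack Platform services that director runs.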

2.2. Preparing your undercloud networking

The undercloud requires access to two main networks:

  • The Provisioning or Control Plane network, which is the network that director uses to provision your nodes and to access them over SSH when executing Ansible configuration. The undercloud contains DHCP services for introspecting and provisioning other nodes on this network, which means that no other DHCP services should exist on this network. The director configures the interface for this network.
  • The External network that enables access to OpenStack Platform repositories, container image sources, and other servers such as DNS servers or NTP servers. Use this network for standard access to the undercloud from your workstation. You must manually configure an interface on the undercloud to access the external network.

The undercloud requires a minimum of 2 x 1 Gbps Network Interface Cards: one for the Provisioning or Control Plane network and one for the External network. However, it is recommended to use a 10 Gbps interface for Provisioning network traffic, especially if you are provisioning a large number of nodes in your overcloud environment.

Note the following:

  • Do not use the same Provisioning or Control Plane NIC as the one that you use to access the director machine from your workstation. The director installation creates a bridge by using the Provisioning NIC, which drops any remote connections. Use the External NIC for remote connections to the director system.
  • The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range; an example configuration follows this list:

    • Include at least one temporary IP address for each node connected to the Provisioning network during introspection.
    • Include at least one permanent IP address for each node connected to the Provisioning network during deployment.
    • Include an extra IP address for the virtual IP of the overcloud high availability cluster on the Provisioning network.
    • Include additional IP addresses within this range for scaling the environment.
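The following undercloud.conf excerpt is a minimal sketch of how these guidelines translate into configuration. All addresses are illustrative, and the section and option names assume the Red Hat OpenStack Platform 14 undercloud.conf format; adjust the ranges to your own environment:

    [DEFAULT]
    local_ip = 192.168.24.1/24
    subnets = ctlplane-subnet
    local_subnet = ctlplane-subnet

    [ctlplane-subnet]
    cidr = 192.168.24.0/24
    # Deployment pool: 10 overcloud nodes + 1 HA virtual IP + headroom for scaling
    dhcp_start = 192.168.24.5
    dhcp_end = 192.168.24.30
    # Temporary introspection addresses; must not overlap the deployment pool
    inspection_iprange = 192.168.24.100,192.168.24.120
    gateway = 192.168.24.1

In this sketch, a 10-node overcloud requires at least 10 permanent addresses plus 1 for the high availability virtual IP; the deployment pool of 26 addresses (.5 through .30) leaves 15 spare addresses for scaling.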

2.3. Determining environment scale

Before you install the undercloud, determine the scale of your environment. Include the following factors when planning your environment:

  • How many nodes does your overcloud require? The undercloud manages each node within an overcloud. Provisioning overcloud nodes consumes resources on the undercloud. You must provide your undercloud with enough resources to adequately provision and control all overcloud nodes.
  • How many simultaneous operations do you want the undercloud to perform? Most OpenStack services on the undercloud use a set of workers. Each worker performs an operation specific to that service. Multiple workers provide simultaneous operations. The default number of workers on the undercloud is determined by halving the undercloud’s total CPU thread count [1]. For example, if your undercloud has a CPU with 16 threads, then the director services spawn 8 workers by default. The director also uses a set of minimum and maximum caps by default, as shown in the following table; a short shell sketch of this calculation follows the table:
Service                          Minimum   Maximum
OpenStack Orchestration (heat)   4         24
All other services               2         12
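To see what these defaults work out to on a particular machine, the following shell sketch applies the halving rule and the caps from the table. It assumes that nproc reports the total thread count described in the footnote:

    $ threads=$(nproc)
    $ default=$(( threads / 2 ))
    $ heat=$(( default < 4 ? 4 : (default > 24 ? 24 : default) ))
    $ other=$(( default < 2 ? 2 : (default > 12 ? 12 : default) ))
    $ echo "heat workers: ${heat}, workers for other services: ${other}"

On a 16-thread undercloud, this prints 8 workers for heat and 8 workers for the other services, matching the example above.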

The undercloud has the following minimum CPU and memory requirements:

  • An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This provides 4 workers for each undercloud service.
  • A minimum of 24 GB of RAM.

    • The ceph-ansible playbook consumes 1 GB resident set size (RSS) for every 10 hosts that the undercloud deploys. If the overcloud will use an existing Ceph cluster or deploy a new one, provision undercloud RAM accordingly. For example, deploying 30 Ceph-managed hosts adds approximately 3 GB of RSS on top of the 24 GB baseline.

To use a larger number of workers, increase the vCPU count and memory of your undercloud according to the following recommendations; a shell sketch of this arithmetic follows the list:

  • Minimum: Use 1.5 GB of memory per thread. For example, a machine with 48 threads should have 72 GB of RAM. This provides the minimum coverage for 24 Heat workers and 12 workers for other services.
  • Recommended: Use 3 GB of memory per thread. For example, a machine with 48 threads should have 144 GB of RAM. This provides the recommended coverage for 24 Heat workers and 12 workers for other services.
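As a quick sanity check, you can compute the same per-thread guidance in the shell. This only restates the arithmetic above; it is not a director command:

    $ threads=$(nproc)
    $ echo "minimum RAM:     $(( threads * 3 / 2 )) GB"    # 1.5 GB per thread
    $ echo "recommended RAM: $(( threads * 3 )) GB"        # 3 GB per thread

For a 48-thread machine, this yields 72 GB and 144 GB, matching the figures above.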

2.4. Undercloud disk sizing

The recommended minimum undercloud disk size is 100 GB of available disk space on the root disk (a quick check for available space follows this list):

  • 20 GB for container images
  • 10 GB to accommodate QCOW2 image conversion and caching during the node provisioning process
  • 70 GB+ for general usage, logging, metrics, and growth
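Before you install the undercloud, you can verify the available space on the root disk with standard tools. The df command is universally available; the --output option assumes GNU coreutils:

    $ df -h /
    $ df -h --output=avail /    # available space only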

2.5. Undercloud repositories

Enable the following repositories for the installation and configuration of the undercloud. A sample subscription-manager command for enabling the core repositories follows Table 2.1.

Table 2.1. Core repositories

Name: Red Hat Enterprise Linux 7 Server (RPMs)
Repository: rhel-7-server-rpms
Description: Base operating system repository for x86_64 systems.

Name: Red Hat Enterprise Linux 7 Server - Extras (RPMs)
Repository: rhel-7-server-extras-rpms
Description: Contains Red Hat OpenStack Platform dependencies.

Name: Red Hat Enterprise Linux 7 Server - RH Common (RPMs)
Repository: rhel-7-server-rh-common-rpms
Description: Contains tools for deploying and configuring Red Hat OpenStack Platform.

Name: Red Hat Satellite Tools for RHEL 7 Server RPMs x86_64
Repository: rhel-7-server-satellite-tools-6.3-rpms
Description: Tools for managing hosts with Red Hat Satellite 6.

Name: Red Hat Enterprise Linux High Availability (for RHEL 7 Server) (RPMs)
Repository: rhel-ha-for-rhel-7-server-rpms
Description: High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.

Name: Red Hat OpenStack Platform 14 for RHEL 7 (RPMs)
Repository: rhel-7-server-openstack-14-rpms
Description: Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director.
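As a reference, you can enable the core repositories with subscription-manager. The repository IDs below are taken from Table 2.1; disabling all repositories first is a common but optional step, and you should verify the IDs against your own subscription:

    $ sudo subscription-manager repos --disable=*
    $ sudo subscription-manager repos \
        --enable=rhel-7-server-rpms \
        --enable=rhel-7-server-extras-rpms \
        --enable=rhel-7-server-rh-common-rpms \
        --enable=rhel-ha-for-rhel-7-server-rpms \
        --enable=rhel-7-server-openstack-14-rpms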

Table 2.2. Ceph repositories

Name: Red Hat Ceph Storage Tools 3 for Red Hat Enterprise Linux 7 Server (RPMs)
Repository: rhel-7-server-rhceph-3-tools-rpms
Description: Provides tools for nodes to communicate with the Ceph Storage cluster. The undercloud requires the ceph-ansible package from this repository if you plan to use Ceph Storage in your overcloud.

IBM POWER repositories

These repositories are used for Red Hat OpenStack Platform on IBM POWER architecture. Use these repositories in place of their equivalents in the Core repositories.

Name: Red Hat Enterprise Linux for IBM Power, little endian
Repository: rhel-7-for-power-le-rpms
Description: Base operating system repository for ppc64le systems.

Name: Red Hat OpenStack Platform 14 for RHEL 7 (RPMs)
Repository: rhel-7-server-openstack-14-for-power-le-rpms
Description: Core Red Hat OpenStack Platform repository for ppc64le systems.



[1] In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value.