Chapter 1. Introduction to director

The Red Hat OpenStack Platform (RHOSP) director is a toolset for installing and managing a complete OpenStack environment. Director is based primarily on the OpenStack project TripleO. With director you can install a fully operational, lean, and robust RHOSP environment that can provision and control bare metal systems to use as OpenStack nodes.

Director uses two main concepts: an undercloud and an overcloud. First you install the undercloud, and then you use the undercloud as a tool to install and configure the overcloud.
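At the command level, these two steps map onto two director commands, as the following minimal sketch shows. A real installation also requires an undercloud.conf file and overcloud configuration, which later chapters describe:

    $ openstack undercloud install
    $ source ~/stackrc
    $ openstack overcloud deploy --templates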

Figure: Basic layout of the undercloud and the overcloud

1.1. Undercloud

The undercloud is the main management node that contains the Red Hat OpenStack Platform director toolset. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud). The components that form the undercloud have multiple functions:

Environment planning
The undercloud includes planning functions that you can use to create and assign certain node roles. The undercloud includes a default set of node roles: Compute, Controller, and various Storage roles. You can also design custom roles. Additionally, you can select which OpenStack Platform services to include on each node role, which provides a method to model new node types or isolate certain components on their own host.
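For example, you can list the available roles and generate a roles data file for a chosen set of them. This is a sketch of the role planning commands; the output file name is arbitrary:

    $ openstack overcloud roles list
    $ openstack overcloud roles generate -o ~/roles_data.yaml Controller Compute CephStorage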
Bare metal system control
The undercloud uses the out-of-band management interface, usually Intelligent Platform Management Interface (IPMI), of each node for power management control and a PXE-based service to discover hardware attributes and install OpenStack on each node. You can use this feature to provision bare metal systems as OpenStack nodes. For a full list of power management drivers, see Appendix A, Power management drivers.
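For example, node registration and hardware discovery are typically driven with commands like the following. The nodes.json file name is a placeholder for your own inventory file that records each node's IPMI address and credentials:

    $ openstack overcloud node import ~/nodes.json
    $ openstack overcloud node introspect --all-manageable --provide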
Orchestration
The undercloud contains a set of YAML templates that represent a set of plans for your environment. The undercloud imports these plans and follows their instructions to create the resulting OpenStack environment. The plans also include hooks that you can use to incorporate your own customizations at certain points in the environment creation process.
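For example, to customize the plan you pass your own environment files to the deployment command. The following sketch sets the TripleO NtpServer parameter in a file with an illustrative name:

    $ cat ~/templates/custom-environment.yaml
    parameter_defaults:
      NtpServer: pool.ntp.org

    $ openstack overcloud deploy --templates -e ~/templates/custom-environment.yaml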
Undercloud components

The undercloud uses OpenStack components as its base tool set. Each component operates within a separate container on the undercloud, as the container listing example after this list shows:

  • OpenStack Identity (keystone) - Provides authentication and authorization for the director components.
  • OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manage bare metal nodes.
  • OpenStack Networking (neutron) and Open vSwitch - Control networking for bare metal nodes.
  • OpenStack Image Service (glance) - Stores images that director writes to bare metal machines.
  • OpenStack Orchestration (heat) and Puppet - Provide orchestration and configuration of nodes after director writes the overcloud image to disk.
  • OpenStack Telemetry (ceilometer) - Performs monitoring and data collection. Telemetry also includes the following components:

    • OpenStack Telemetry Metrics (gnocchi) - Provides a time series database for metrics.
    • OpenStack Telemetry Alarming (aodh) - Provides alarming for monitoring.
    • OpenStack Telemetry Event Storage (panko) - Provides event storage for monitoring.
  • OpenStack Workflow Service (mistral) - Provides a set of workflows for certain director-specific actions, such as importing and deploying plans.
  • OpenStack Messaging Service (zaqar) - Provides a messaging service for the OpenStack Workflow Service.
  • OpenStack Object Storage (swift) - Provides object storage for various OpenStack Platform components, including:

    • Image storage for OpenStack Image Service
    • Introspection data for OpenStack Bare Metal
    • Deployment plans for OpenStack Workflow Service
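As a quick check on an installed undercloud, you can list the running service containers. The exact set of containers varies by version and configuration; this shows only the form of the command:

    $ sudo podman ps --format "{{.Names}}"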

1.2. Understanding the overcloud

The overcloud is the resulting Red Hat OpenStack Platform (RHOSP) environment that the undercloud creates. The overcloud consists of multiple nodes with different roles that you define based on the OpenStack Platform environment that you want to create. The undercloud includes a default set of overcloud node roles:

Controller

Controller nodes provide administration, networking, and high availability for the OpenStack environment. A recommended OpenStack environment contains three Controller nodes together in a high availability cluster.

A default Controller node role supports the following components. Not all of these services are enabled by default; some of them require custom or pre-packaged environment files to enable:

  • OpenStack Dashboard (horizon)
  • OpenStack Identity (keystone)
  • OpenStack Compute (nova) API
  • OpenStack Networking (neutron)
  • OpenStack Image Service (glance)
  • OpenStack Block Storage (cinder)
  • OpenStack Object Storage (swift)
  • OpenStack Orchestration (heat)
  • OpenStack Telemetry Metrics (gnocchi)
  • OpenStack Telemetry Alarming (aodh)
  • OpenStack Telemetry Event Storage (panko)
  • OpenStack Shared File Systems (manila)
  • OpenStack Bare Metal (ironic)
  • MariaDB
  • Open vSwitch
  • Pacemaker and Galera for high availability services
Compute

Compute nodes provide computing resources for the OpenStack environment. You can add more Compute nodes to scale out your environment over time; see the scaling sketch after the following list. A default Compute node contains the following components:

  • OpenStack Compute (nova)
  • KVM/QEMU
  • OpenStack Telemetry (ceilometer) agent
  • Open vSwitch
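For example, to scale out Compute you typically raise the Compute node count in an environment file and rerun the deployment command. This is a sketch; the file name is illustrative:

    $ cat ~/templates/node-info.yaml
    parameter_defaults:
      ComputeCount: 5

    $ openstack overcloud deploy --templates -e ~/templates/node-info.yaml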
Storage

Storage nodes provide storage for the OpenStack environment. The following list contains information about the various types of Storage nodes in RHOSP:

  • Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object Storage Daemon (OSD). Additionally, director installs Ceph Monitor onto the Controller nodes when you deploy Ceph Storage nodes as part of your environment. See the deployment sketch after this list.
  • Block storage (cinder) - Used as external block storage for highly available Controller nodes. This node contains the following components:

    • OpenStack Block Storage (cinder) volume
    • OpenStack Telemetry agents
    • Open vSwitch
  • Object storage (swift) - These nodes provide an external storage layer for OpenStack Swift. The Controller nodes access object storage nodes through the Swift proxy. Object storage nodes contain the following components:

    • OpenStack Object Storage (swift) storage
    • OpenStack Telemetry agents
    • Open vSwitch
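For example, a deployment that includes Ceph Storage nodes typically adds the ceph-ansible environment file that ships with director, plus a Ceph Storage node count. This is a sketch, not a complete Ceph configuration:

    $ cat ~/templates/node-info.yaml
    parameter_defaults:
      CephStorageCount: 3

    $ openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
        -e ~/templates/node-info.yaml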

1.3. Understanding high availability in Red Hat OpenStack Platform

The Red Hat OpenStack Platform (RHOSP) director uses a Controller node cluster to provide highly available services to your OpenStack Platform environment. For each service, director installs the same components on all Controller nodes and manages the Controller nodes together as a single service. This type of cluster configuration provides a fallback in the event of operational failures on a single Controller node, which gives OpenStack users a certain degree of continuous operation.

The OpenStack Platform director uses the following key pieces of software to manage components on the Controller nodes:

  • Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the availability of OpenStack components across all nodes in the cluster.
  • HAProxy - Provides load balancing and proxy services to the cluster.
  • Galera - Replicates the RHOSP database across the cluster.
  • Memcached - Provides database caching.
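On a deployed Controller node, you can inspect the cluster and the resources that Pacemaker manages directly. The output depends entirely on your configuration:

    $ sudo pcs status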
Note

In version 13 and later, you can use director to deploy High Availability for Compute Instances (Instance HA). With Instance HA, you can automate evacuating instances from a Compute node when that node fails.

1.4. Understanding containerization in Red Hat OpenStack Platform

Each OpenStack Platform service on the undercloud and overcloud runs inside an individual Linux container on its respective node. This containerization provides a method to isolate services, maintain the environment, and upgrade Red Hat OpenStack Platform (RHOSP).

Red Hat OpenStack Platform 16.0 supports installation on the Red Hat Enterprise Linux 8.1 operating system. Red Hat Enterprise Linux 8.1 no longer includes Docker and instead provides a new set of tools to replace the Docker ecosystem. This means that OpenStack Platform 16.0 replaces Docker with these new tools for OpenStack Platform deployment and upgrades.

Podman

Pod Manager (Podman) is a container management tool. It implements almost all Docker CLI commands, except for commands related to Docker Swarm. Podman manages pods, containers, and container images. One of the major differences between Podman and Docker is that Podman can manage resources without a daemon running in the background.
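Because Podman is daemonless and largely CLI-compatible with Docker, familiar commands work directly on a deployed node. For example:

    $ sudo podman images
    $ sudo podman logs <container_name>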

For more information about Podman, see the Podman website.

Buildah

Buildah specializes in building Open Containers Initiative (OCI) images, which you use in conjunction with Podman. Buildah commands replicate the contents of a Dockerfile. Buildah also provides a lower-level coreutils interface that you can use to build container images without a Dockerfile, and it works with other scripting languages to build container images without requiring a daemon.
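For example, Buildah can script an image build without a Dockerfile and without a daemon. A minimal sketch, using the Red Hat Universal Base Image and an arbitrary image name:

    $ container=$(buildah from registry.access.redhat.com/ubi8)
    $ buildah run $container -- dnf install -y python3
    $ buildah commit $container my-custom-image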

For more information about Buildah, see the Buildah website.

Skopeo

Skopeo provides operators with a method to inspect remote container images, which helps director collect data when it pulls images. Additional features include copying container images from one registry to another and deleting images from registries.
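For example, Skopeo can inspect an image in a remote registry without pulling it, or copy an image between registries. The image reference and destination registry below are illustrative:

    $ skopeo inspect docker://registry.redhat.io/rhosp-rhel8/openstack-keystone:16.0
    $ skopeo copy docker://registry.redhat.io/rhosp-rhel8/openstack-keystone:16.0 \
        docker://satellite.example.com/rhosp-rhel8/openstack-keystone:16.0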

Red Hat supports the following methods for managing container images for your overcloud:

  • Pulling container images from the Red Hat Container Catalog to the image-serve registry on the undercloud, and then pulling the images from the image-serve registry. When you pull images to the undercloud first, you avoid having multiple overcloud nodes simultaneously pull container images over an external connection. See the preparation sketch after this list.
  • Pulling container images from your Satellite 6 server. You can pull these images directly from the Satellite because the network traffic is internal.
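For example, in RHOSP 16 you generate the container image preparation parameters with the following command and include the resulting environment file in your deployment. The --local-push-destination option stores the images in the registry on the undercloud. This outline omits registry credentials and any parameter edits you might need:

    $ openstack tripleo container image prepare default \
        --local-push-destination \
        --output-env-file ~/containers-prepare-parameter.yaml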

This guide contains information about configuring your container image registry details and performing basic container operations.

1.5. Working with Ceph Storage in Red Hat OpenStack Platform

It is common for large organizations that use Red Hat OpenStack Platform (RHOSP) to serve thousands of clients or more. Each OpenStack client is likely to have its own unique needs when consuming block storage resources. Deploying glance (images), cinder (volumes), and nova (Compute) on a single node can become impossible to manage in large deployments with thousands of clients. Scaling OpenStack externally resolves this challenge.

However, there is also a practical requirement to virtualize the storage layer with a solution like Red Hat Ceph Storage so that you can scale the RHOSP storage layer from tens of terabytes to petabytes, or even exabytes of storage. Red Hat Ceph Storage provides this storage virtualization layer with high availability and high performance while running on commodity hardware. While virtualization might seem like it comes with a performance penalty, Ceph stripes block device images as objects across the cluster, which means that large Ceph Block Device images have better performance than a standalone disk. Ceph Block Devices also support caching, copy-on-write cloning, and copy-on-read cloning for enhanced performance.

For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage.

Note

For multi-architecture clouds, Red Hat supports only a pre-installed or external Ceph implementation. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster and Appendix B, Red Hat OpenStack Platform for POWER.