Chapter 1. Introduction to director

The Red Hat OpenStack Platform (RHOSP) director is a toolset for installing and managing a complete RHOSP environment. Director is based primarily on the OpenStack project TripleO. With director you can install a fully operational, lean, and robust RHOSP environment that can provision and control bare metal systems to use as OpenStack nodes.

Director uses two main concepts: an undercloud and an overcloud. First you install the undercloud, and then use the undercloud as a tool to install and configure the overcloud.

Figure: Basic layout of the undercloud and overcloud

1.1. Understanding the undercloud

The undercloud is the main management node that contains the Red Hat OpenStack Platform director toolset. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud). The components that form the undercloud have multiple functions:

Environment planning
The undercloud includes planning functions that you can use to create and assign certain node roles. The undercloud includes a default set of node roles that you can assign to specific nodes: Compute, Controller, and various Storage roles. You can also design custom roles. Additionally, you can select which Red Hat OpenStack Platform services to include on each node role, which provides a method to model new node types or isolate certain components on their own host.
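For example, a custom role is defined as an entry in a roles data file. The following sketch shows the general shape of such an entry; the role name and the exact service list are illustrative and depend on your deployment:

```yaml
# Hypothetical custom role entry for a roles data file.
# The role name and ServicesDefault list are illustrative examples.
- name: CustomCompute
  description: |
    Compute role that runs only the core compute services.
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::ComputeNeutronOvsAgent
```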
Bare metal system control
The undercloud uses the out-of-band management interface, usually Intelligent Platform Management Interface (IPMI), of each node for power management control and a PXE-based service to discover hardware attributes and install OpenStack on each node. You can use this feature to provision bare metal systems as OpenStack nodes. For a full list of power management drivers, see Chapter 30, Power management drivers.
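For example, a node registration file typically pairs each bare metal node with its IPMI credentials. The name, addresses, and credentials in this sketch are placeholders:

```json
{
  "nodes": [
    {
      "name": "controller-0",
      "pm_type": "ipmi",
      "pm_addr": "192.168.24.205",
      "pm_user": "admin",
      "pm_password": "PLACEHOLDER_PASSWORD",
      "ports": [{"address": "aa:bb:cc:dd:ee:ff"}]
    }
  ]
}
```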
Orchestration
The undercloud contains a set of YAML templates that represent a set of plans for your environment. The undercloud imports these plans and follows their instructions to create the resulting OpenStack environment. The plans also include hooks that you can use to incorporate your own customizations at certain points in the environment creation process.
Undercloud components

The undercloud uses OpenStack components as its base tool set. Each component operates within a separate container on the undercloud:

  • OpenStack Identity (keystone) - Provides authentication and authorization for the director components.
  • OpenStack Bare Metal (ironic) - Manages bare metal nodes.
  • OpenStack Networking (neutron) and Open vSwitch - Control networking for bare metal nodes.
  • OpenStack Orchestration (Ephemeral Heat) - Provides the orchestration of nodes after director writes the overcloud image to disk.
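As a quick check on a deployed undercloud, you can list these service containers with Podman. Container names vary by version, so treat the filter value as an illustrative example:

```shell
# List running containers on the undercloud and their status.
sudo podman ps --format "{{.Names}}: {{.Status}}"

# Filter for a single service, for example Identity (keystone).
sudo podman ps --filter name=keystone
```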

1.2. Understanding the overcloud

The overcloud is the resulting Red Hat OpenStack Platform (RHOSP) environment that the undercloud creates. The overcloud consists of multiple nodes with different roles that you define based on the OpenStack Platform environment that you want to create. The undercloud includes a default set of overcloud node roles:


Controller nodes provide administration, networking, and high availability for the OpenStack environment. A recommended OpenStack environment contains three Controller nodes together in a high availability cluster.

A default Controller node role supports the following components. Not all of these services are enabled by default. Some of these components require custom or pre-packaged environment files to enable them:

  • OpenStack Dashboard (horizon)
  • OpenStack Identity (keystone)
  • OpenStack Compute (nova) API
  • OpenStack Networking (neutron)
  • OpenStack Image Service (glance)
  • OpenStack Block Storage (cinder)
  • OpenStack Object Storage (swift)
  • OpenStack Orchestration (heat)
  • OpenStack Shared File Systems (manila)
  • OpenStack Bare Metal (ironic)
  • OpenStack Load Balancing-as-a-Service (octavia)
  • OpenStack Key Manager (barbican)
  • MariaDB
  • Open vSwitch
  • Pacemaker and Galera for high availability services
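Optional services in this list are usually enabled by passing their environment files to the deploy command. The following sketch assumes the standard openstack-tripleo-heat-templates layout; the octavia.yaml path is an example of that convention:

```shell
# Illustrative deploy command that enables an optional Controller service
# (Load Balancing-as-a-Service) with its pre-packaged environment file.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml
```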

Compute nodes provide computing resources for the OpenStack environment. You can add more Compute nodes to scale out your environment over time. A default Compute node contains the following components:

  • OpenStack Compute (nova)
  • Open vSwitch

Storage nodes provide storage for the OpenStack environment. The following list contains information about the various types of Storage node in RHOSP:

  • Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object Storage Daemon (OSD). Additionally, director installs Ceph Monitor onto the Controller nodes when you deploy Ceph Storage nodes as part of your environment.
  • Block storage (cinder) - Used as external block storage for highly available Controller nodes. This node contains the following components:

    • OpenStack Block Storage (cinder) volume
    • OpenStack Telemetry agents
    • Open vSwitch
  • Object storage (swift) - These nodes provide an external storage layer for OpenStack Swift. The Controller nodes access object storage nodes through the Swift proxy. Object storage nodes contain the following components:

    • OpenStack Object Storage (swift) storage
    • OpenStack Telemetry agents
    • Open vSwitch

1.3. Understanding high availability in Red Hat OpenStack Platform

The Red Hat OpenStack Platform (RHOSP) director uses a Controller node cluster to provide highly available services to your OpenStack Platform environment. For each service, director installs the same components on all Controller nodes and manages the Controller nodes together as a single service. This type of cluster configuration provides a fallback in the event of operational failures on a single Controller node. This provides OpenStack users with a certain degree of continuous operation.

The OpenStack Platform director uses some key pieces of software to manage components on the Controller node:

  • Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the availability of OpenStack components across all nodes in the cluster.
  • HAProxy - Provides load balancing and proxy services to the cluster.
  • Galera - Replicates the RHOSP database across the cluster.
  • Memcached - Provides database caching.
  • Instance HA - From RHOSP 13 onward, you can use director to deploy High Availability for Compute Instances (Instance HA). With Instance HA, you can automate evacuating instances from a Compute node when that node fails.
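Instance HA is typically enabled by including its environment file at deploy time. This sketch assumes the standard template location in openstack-tripleo-heat-templates:

```shell
# Illustrative deploy command that enables Instance HA for Compute nodes.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml
```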

1.4. Understanding containerization in Red Hat OpenStack Platform

Each OpenStack Platform service on the undercloud and overcloud runs inside an individual Linux container on its host node. This containerization provides a method to isolate services, maintain the environment, and upgrade Red Hat OpenStack Platform (RHOSP).

Red Hat OpenStack Platform 17.0 supports installation on the Red Hat Enterprise Linux 9.0 operating system. Red Hat OpenStack Platform previously used Docker to manage containerization. OpenStack Platform 17.0 uses the following tools for OpenStack Platform deployment and upgrades:


Podman
Pod Manager (Podman) is a container management tool. It implements almost all Docker CLI commands, not including commands related to Docker Swarm. Podman manages pods, containers, and container images. Podman can manage resources without a daemon running in the background.

For more information about Podman, see the Podman website.


Buildah
Buildah specializes in building Open Containers Initiative (OCI) images, which you use in conjunction with Podman. Buildah commands replicate the contents of a Dockerfile. Buildah also provides a lower-level coreutils interface to build container images, so that you do not require a Dockerfile to build containers. Buildah also uses other scripting languages to build container images without requiring a daemon.

For more information about Buildah, see the Buildah website.

Skopeo
Skopeo provides operators with a method to inspect remote container images, which helps director collect data when it pulls images. Additional features include copying container images from one registry to another and deleting images from registries.

Red Hat supports the following methods for managing container images for your overcloud:

  • Pulling container images from the Red Hat Container Catalog to the image-serve registry on the undercloud and then pulling the images from the image-serve registry. When you pull images to the undercloud first, you avoid multiple overcloud nodes simultaneously pulling container images over an external connection.
  • Pulling container images from your Satellite 6 server. You can pull these images directly from the Satellite because the network traffic is internal.
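The first method usually starts by generating a container image preparation parameter file on the undercloud. The following sketch shows that step; the output file name is a common convention rather than a requirement:

```shell
# Generate a default container image preparation parameter file.
# --local-push-destination stores images in the registry on the undercloud.
openstack tripleo container image prepare default \
  --local-push-destination \
  --output-env-file containers-prepare-parameter.yaml
```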

This guide contains information about configuring your container image registry details and performing basic container operations.

1.5. Working with Ceph Storage in Red Hat OpenStack Platform

It is common for large organizations that use Red Hat OpenStack Platform (RHOSP) to serve thousands of clients or more. Each OpenStack client is likely to have its own unique needs when consuming block storage resources. Deploying glance (images), cinder (volumes), and nova (Compute) on a single node can become impossible to manage in large deployments with thousands of clients. Scaling OpenStack externally resolves this challenge.

However, there is also a practical requirement to virtualize the storage layer with a solution like Red Hat Ceph Storage so that you can scale the RHOSP storage layer from tens of terabytes to petabytes, or even exabytes of storage. Red Hat Ceph Storage provides this storage virtualization layer with high availability and high performance while running on commodity hardware. While virtualization might seem like it comes with a performance penalty, Ceph stripes block device images as objects across the cluster, meaning that large Ceph Block Device images have better performance than a standalone disk. Ceph Block devices also support caching, copy-on-write cloning, and copy-on-read cloning for enhanced performance.

For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage.

1.6. Default file locations

In RHOSP 17, you can find all the configuration files in a single directory. The name of the directory is a combination of the openstack command used and the name of the stack. The directories have default locations but you can change the default locations by using the --working-dir option. You can use this option with any tripleoclient command that reads or creates files used with the deployment.
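For example, you can redirect the generated files to a custom directory at deploy time. The directory path in this sketch is illustrative:

```shell
# Illustrative: place all generated deployment files in a custom directory
# instead of the default location for the named stack.
openstack overcloud deploy --templates \
  --stack overcloud \
  --working-dir /home/stack/my-overcloud-files
```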

The following list shows each command and the default location of its files:

  • undercloud install, which is based on tripleo deploy - ~/tripleo-deploy/undercloud
  • tripleo deploy, where <stack> is standalone by default - ~/tripleo-deploy/<stack>
  • overcloud deploy, where <stack> is overcloud by default - ~/overcloud-deploy/<stack>

1.6.1. Description of the contents of the undercloud directory

You can find the following files and directories in the ~/tripleo-deploy/undercloud directory. They are a subset of what is available in the ~/overcloud-deploy directory:


1.6.2. Description of the contents of the overcloud directory

You can find the following files and directories in the ~/overcloud-deploy/overcloud directory, where overcloud is the name of the stack:


The following list describes the contents of those files and directories:

  • Directories used by ansible-runner for CLI Ansible-based workflows.
  • The config-download directory, previously called ~/config-download or /var/lib/mistral/<stack>.
  • The saved stack environment, generated with the openstack stack environment show <stack> command.
  • The ephemeral Heat working directory, which contains the ephemeral Heat configuration and database backups.
  • Saved stack outputs, generated with the openstack stack output show <stack> <output> command.
  • The saved stack status.
  • Stack export information, generated with the openstack overcloud export <stack> command.
  • A tarball of the working directory.
  • The stack passwords.
  • The stack rc credential file that you require to use the overcloud APIs.
  • The Tempest configuration.
  • The Ansible inventory for the overcloud.
  • A copy of the rendered jinja2 templates. You can find the source templates in /usr/share/openstack-tripleo-heat-templates, or you can specify templates with the --templates option on the CLI.
  • Bare metal deployment input for provisioning overcloud nodes.
  • Network deployment input for provisioning overcloud networks.
  • Roles data, which you specify with the -r option on the CLI.
  • VIP deployment input for provisioning overcloud network VIPs.
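For example, you can use the generated Ansible inventory with ad hoc commands against overcloud nodes. The inventory file name and group name in this sketch are assumptions based on common TripleO conventions:

```shell
# Ping all Controller nodes using the inventory from the working directory.
ansible -i ~/overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml \
  Controller -m ping
```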