Chapter 1. Introduction to the Red Hat OpenStack Platform director Operator
OpenShift Container Platform (OCP) uses a modular system of Operators to extend the functions of your OCP cluster. The Red Hat OpenStack Platform (RHOSP) director Operator adds the ability to install and run a RHOSP cloud within OCP. This Operator provides a set of custom resource definitions (CRDs) for deploying and managing the infrastructure and configuration of RHOSP nodes. The basic architecture of an Operator-deployed RHOSP cloud includes the following features:
- Virtualized control plane
- The director Operator creates a set of virtual machines in OpenShift Virtualization to act as Controller nodes.
- Bare metal machine provisioning
- The director Operator uses OCP bare metal machine management to provision Compute nodes in an operator-deployed RHOSP cloud.
- Networking
- The director Operator configures the underlying networks for RHOSP services.
- Heat and Ansible-based configuration
- The director Operator stores custom heat configuration in OCP and uses the config-download functionality in director to convert the configuration into Ansible playbooks. If you change the stored heat configuration, the director Operator automatically regenerates the Ansible playbooks.
- CLI client
- The director Operator creates a pod for users to run RHOSP CLI commands and interact with their RHOSP cloud.
Support for the Red Hat OpenStack Platform director Operator is granted only if your architecture is approved by Red Hat Services or by a Technical Account Manager. Contact Red Hat before deploying this feature.
1.1. Prerequisites for the director Operator
Before you install the Red Hat OpenStack Platform (RHOSP) director Operator, you must complete the following prerequisite tasks.
- Install an OpenShift Container Platform (OCP) 4.10 or later cluster that contains an enabled baremetal cluster Operator and a provisioning network.

  Note: OCP clusters that you install with installer-provisioned infrastructure (IPI) or assisted installation (AI) use the baremetal platform type and have the baremetal cluster Operator enabled. OCP clusters that you install with user-provisioned infrastructure (UPI) use the none platform type and might have the baremetal cluster Operator disabled.

  If the cluster is of type AI or IPI, it uses metal3, a Kubernetes API for the management of bare metal hosts. It maintains an inventory of available hosts as instances of the BareMetalHost custom resource definition (CRD). The bare metal Operator knows how to:

  - Inspect the host's hardware details and report them to the corresponding BareMetalHost resource. This includes information about CPUs, RAM, disks, and NICs.
  - Provision hosts with a specific image.
  - Clean a host's disk contents before or after provisioning.

  To check whether the baremetal cluster Operator is enabled, navigate to Administration > Cluster Settings > ClusterOperators > baremetal, scroll to the Conditions section, and view the Disabled status. To check the platform type of the OCP cluster, navigate to Administration > Global Configuration > Infrastructure, switch to the YAML view, and view the status.platformStatus value.
- Install the following Operators from OperatorHub on your OCP cluster:
- OpenShift Virtualization Operator
- SR-IOV Network Operator
- For OCP 4.11+ clusters: Kubernetes NMState Operator
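You can also perform the console checks described above from the command line. The following commands are a sketch; the jsonpath expressions reference standard OCP API fields, and the first command returns an error on clusters where the baremetal cluster Operator is absent:

```shell
# Check whether the baremetal cluster Operator is disabled
# (prints "True" when disabled, "False" when enabled)
oc get clusteroperator baremetal \
  -o jsonpath='{.status.conditions[?(@.type=="Disabled")].status}'

# Check the platform type of the cluster (for example, BareMetal or None)
oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}'
```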
- For OCP 4.11+ clusters: Create an NMState instance to finish installing all the NMState CRDs:

  ```
  cat <<EOF | oc apply -f -
  apiVersion: nmstate.io/v1
  kind: NMState
  metadata:
    name: nmstate
    namespace: openshift-nmstate
  EOF
  ```
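To confirm that the NMState CRDs are available after the instance is created, you can query for them directly. This check is a convenience, not a required step; the CRD names below come from the nmstate.io API group:

```shell
# Both CRDs should be listed once the NMState instance has reconciled
oc get crd nmstates.nmstate.io nodenetworkconfigurationpolicies.nmstate.io
```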
- Configure a remote Git repository for the director Operator to store the generated configuration for your overcloud.
- Create persistent volumes to fulfill the following persistent volume claims that the director Operator creates:
  - 4G for openstackclient-cloud-admin
  - 1G for openstackclient-hosts
  - 50G for the base image that the director Operator clones for each Controller virtual machine
  - A minimum of 50G for each Controller virtual machine. For more information, see Controller node requirements.
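How you provide these persistent volumes depends on your storage back end. As an illustration only, a minimal hostPath-backed PersistentVolume for the openstackclient-cloud-admin claim might look like the following; the storageClassName, path, and access mode are assumptions that you must adapt to your environment:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openstackclient-cloud-admin
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  # assumption: replace with the storage class used in your cluster
  storageClassName: local-storage
  # assumption: hostPath is for illustration; use your real back end
  hostPath:
    path: /mnt/openstackclient-cloud-admin
EOF
```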
1.2. Installing the director Operator
To install the director Operator, you must create a namespace for the Operator and create the following three resources within the namespace:

- A CatalogSource, which identifies the index image to use for the director Operator catalog.
- A Subscription, which tracks changes in the director Operator catalog.
- An OperatorGroup, which defines the Operator group for the director Operator and restricts the director Operator to a target namespace.
Prerequisites
- Ensure your OpenShift Container Platform cluster is operational.
- Install the following prerequisite Operators from OperatorHub:
  - OpenShift Virtualization 4.10
  - SR-IOV Network Operator 4.10
- Ensure that you have installed the oc command line tool on your workstation.
Procedure
- Create the openstack namespace:

  ```
  $ oc new-project openstack
  ```
- Obtain the latest osp-director-operator-bundle image from https://catalog.redhat.com/software/containers/search.
- Download the Operator Package Manager (opm) tool from https://console.redhat.com/openshift/downloads.
- Use the opm tool to create an index image:

  ```
  $ BUNDLE_IMG="registry.redhat.io/rhosp-rhel8/osp-director-operator-bundle@sha256:c19099ac3340d364307a43e0ae2be949a588fefe8fcb17663049342e7587f055"
  $ INDEX_IMG="quay.io/<account>/osp-director-operator-index:x.y.z-a"
  $ opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman
  ```
- Push the index image to your registry:

  ```
  $ podman push ${INDEX_IMG}
  ```
- Create a file named osp-director-operator.yaml and include the following YAML content that configures the three resources to install the director Operator:

  ```
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: osp-director-operator-index
    namespace: openstack
  spec:
    sourceType: grpc
    image: quay.io/<account>/osp-director-operator-index:x.y.z-a 1
  ---
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: "osp-director-operator-group"
    namespace: openstack
  spec:
    targetNamespaces:
    - openstack
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: osp-director-operator-subscription
    namespace: openstack
  spec:
    config:
      env:
      - name: WATCH_NAMESPACE
        value: openstack,openshift-machine-api,openshift-sriov-network-operator
    source: osp-director-operator-index
    sourceNamespace: openstack
    name: osp-director-operator
  ```

  1 For information about how to apply the Quay authentication so that the Operator deployment can pull the image, see Accessing images for Operators from private registries.
- Create the three new resources within the openstack namespace:

  ```
  $ oc apply -f osp-director-operator.yaml
  ```
Verification
Confirm that you have successfully installed the director Operator:
```
$ oc get operators
NAME                              AGE
osp-director-operator.openstack   5m
```
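If the Operator does not appear, you can inspect the intermediate Operator Lifecycle Manager objects in the openstack namespace. This is a troubleshooting sketch using standard OLM resource types:

```shell
# The catalog pod should be Running, the Subscription should reference
# the osp-director-operator-index source, and a ClusterServiceVersion
# should eventually reach the Succeeded phase
oc get pods -n openstack
oc get catalogsource,subscription,csv -n openstack
```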
1.3. Custom resource definitions for the director Operator
The director Operator includes a set of custom resource definitions (CRDs) that you can use to manage overcloud resources. There are two types of CRDs: hardware provisioning and software configuration.
Hardware Provisioning CRDs
- openstacknetattachment (internal): Manages the NodeNetworkConfigurationPolicy and NodeSriovConfigurationPolicy resources used to attach networks to virtual machines.
- openstacknetconfig: High-level CRD that specifies openstacknetattachments and openstacknets to describe the full network configuration. The set of reserved IP/MAC addresses per node is reflected in the status.
- openstackbaremetalset: Creates sets of bare metal hosts for a specific TripleO role (Compute, Storage, and so on).
- openstackcontrolplane: Creates the OpenStack control plane and manages the associated openstackvmsets.
- openstacknet (internal): Creates networks that are used to assign IPs to the vmset and baremetalset resources.
- openstackipset (internal): Contains a set of IPs for a given network and role. Used internally to manage IP addresses.
- openstackprovisionservers: Serves custom images for bare metal provisioning with Metal3.
- openstackvmset: Creates sets of VMs using OpenShift Virtualization for a specific TripleO role (Controller, Database, NetworkController, and so on).

Software Configuration CRDs
- openstackconfiggenerator: Automatically generates Ansible playbooks for deployment when you scale up or make changes to custom ConfigMaps for deployment.
- openstackconfigversion: Represents a set of executable Ansible playbooks.
- openstackdeploy: Executes a set of Ansible playbooks (openstackconfigversion).
- openstackclient: Creates a pod used to run TripleO deployment commands.
Viewing the director Operator CRDs
- View a list of these CRDs with the oc get crd command:

  ```
  $ oc get crd | grep "^openstack"
  ```
- View the definition for a specific CRD with the oc describe crd command:

  ```
  $ oc describe crd openstackbaremetalset
  Name:         openstackbaremetalsets.osp-director.openstack.org
  Namespace:
  Labels:       operators.coreos.com/osp-director-operator.openstack=
  Annotations:  cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
                controller-gen.kubebuilder.io/version: v0.3.0
  API Version:  apiextensions.k8s.io/v1
  Kind:         CustomResourceDefinition
  ...
  ```
CRD naming conventions
Each CRD contains multiple names in the spec.names section. Use these names depending on the context of your actions:

- Use kind when you create and interact with resource manifests:

  ```
  apiVersion: osp-director.openstack.org/v1beta1
  kind: OpenStackBaremetalSet
  ...
  ```

  The kind name in the resource manifest correlates to the kind name in the respective CRD.
- Use plural when you interact with multiple resources:

  ```
  $ oc get openstackbaremetalsets
  ```
- Use singular when you interact with a single resource:

  ```
  $ oc describe openstackbaremetalset/compute
  ```
- Use shortName for any CLI interactions:

  ```
  $ oc get osbmset
  ```
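To see the plural, singular, short, and kind names for every director Operator CRD at once, you can query the API group. This is a convenience check, not a required step:

```shell
# Lists each resource in the group with its NAME (plural),
# SHORTNAMES, and KIND columns
oc api-resources --api-group=osp-director.openstack.org
```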
1.4. Features not supported by director Operator
- Fibre Channel back end
- Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat OpenShift Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block Storage drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, do not work. You must create a dedicated role for cinder-volume and use the role to create physical nodes instead of including it on the virtualized controllers. For more information, see Composable Services and Custom Roles.
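One way to create such a dedicated role is the standard TripleO composable roles workflow. The sketch below is an assumption for illustration: the role names must match the roles shipped in your version of tripleo-heat-templates, and you may also need to remove the cinder-volume service from the Controller role, as described in the composable services documentation:

```shell
# Generate a roles file that includes a dedicated block storage role
# so that cinder-volume runs on physical nodes rather than the
# virtualized Controllers (role names are assumptions; verify them
# against your tripleo-heat-templates)
openstack overcloud roles generate \
  -o roles_data.yaml Controller Compute BlockStorage
```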
1.5. Workflow for overcloud deployment with the director Operator
After you have installed the Red Hat OpenStack Platform director Operator, you can use the resources specific to the director Operator to provision your overcloud infrastructure, generate your overcloud configuration, and create an overcloud.
The following workflow outlines the general process for creating an overcloud:
- Create the overcloud networks using the openstacknetconfig CRD, including the control plane and any isolated networks.
- Create ConfigMaps to store any custom heat templates and environment files for your overcloud.
- Create a control plane, which includes three virtual machines for Controller nodes and a pod to perform client operations.
- Create bare metal Compute nodes.
- Create an openstackconfiggenerator to render Ansible playbooks for overcloud configuration.
- Apply the Ansible playbook configuration to your overcloud nodes using openstackdeploy.
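Assuming hypothetical manifest filenames, the workflow above reduces to a sequence of oc commands like the following sketch; every filename and the ConfigMap name are placeholders for resources you define:

```shell
# 1. Networks (openstacknetconfig CR)
oc apply -n openstack -f osnetconfig.yaml            # hypothetical filename
# 2. Custom heat templates and environment files
oc create -n openstack configmap heat-env-config \
  --from-file=custom_environment_files/              # hypothetical names
# 3. Control plane (openstackcontrolplane CR)
oc apply -n openstack -f openstack-controlplane.yaml
# 4. Bare metal Compute nodes (openstackbaremetalset CR)
oc apply -n openstack -f compute.yaml
# 5. Render Ansible playbooks (openstackconfiggenerator CR)
oc apply -n openstack -f osconfiggenerator.yaml
# 6. Apply the configuration (openstackdeploy CR)
oc apply -n openstack -f osdeploy.yaml
```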