Chapter 5. Creating overcloud nodes with the director Operator

A Red Hat OpenStack Platform overcloud consists of multiple nodes, such as Controller nodes to provide control plane services and Compute nodes to provide computing resources. For a functional overcloud with high availability, you must have 3 Controller nodes and at least one Compute node. You can create Controller nodes with the OpenStackControlPlane resource and Compute nodes with the OpenStackBaremetalSet resource.

By default, there is no automatic detection of, or response to, issues on the OpenShift worker nodes that host the virtual machines. For more information about automatically detecting and remediating issues on OpenShift worker nodes, see Deploying machine health checks.
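
For example, a MachineHealthCheck resource from the OpenShift machine API can automatically remediate unhealthy worker machines. The following is a minimal sketch; the name, selector labels, and timeouts are illustrative assumptions, so confirm the supported fields and values in Deploying machine health checks:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineHealthCheck
    metadata:
      name: osp-worker-health-check
      namespace: openshift-machine-api
    spec:
      # Select the worker machines that host the director Operator virtual machines.
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-machine-role: worker
          machine.openshift.io/cluster-api-machine-type: worker
      # Remediate a machine whose node stays NotReady or Unknown for 5 minutes.
      unhealthyConditions:
        - type: Ready
          status: "False"
          timeout: 300s
        - type: Ready
          status: "Unknown"
          timeout: 300s
      # Stop remediation if too many machines are unhealthy at the same time.
      maxUnhealthy: 40%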

5.1. Creating a control plane with OpenStackControlPlane

The overcloud control plane contains the main Red Hat OpenStack Platform services that manage overcloud functionality. The control plane usually consists of 3 Controller nodes and can scale to additional control plane-based composable roles. Keep the number of Controller nodes odd, typically 3, to maintain Pacemaker quorum.

The OpenStackControlPlane custom resource creates control plane-based nodes as virtual machines within OpenShift Virtualization.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.
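
If you have not created these networks yet, the following is a minimal, abbreviated sketch of an OpenStackNetConfig resource. The bridge name, interface name, and IP ranges are placeholder assumptions for illustration; view the full schema with oc describe crd openstacknetconfig:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNetConfig
    metadata:
      name: openstacknetconfig
      namespace: openstack
    spec:
      # Attach the networks to a Linux bridge on the worker nodes (assumed interface enp7s0).
      attachConfigurations:
        br-osp:
          nodeNetworkConfigurationPolicy:
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            desiredState:
              interfaces:
                - name: br-osp
                  type: linux-bridge
                  state: up
                  mtu: 1500
                  bridge:
                    options:
                      stp:
                        enabled: false
                    port:
                      - name: enp7s0
      # Control plane network only; add further isolated networks in the same way.
      networks:
        - name: Control
          nameLower: ctlplane
          subnets:
            - name: ctlplane
              attachConfiguration: br-osp
              ipv4:
                allocationStart: 172.22.0.100
                allocationEnd: 172.22.0.250
                cidr: 172.22.0.0/24
                gateway: 172.22.0.1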

Procedure

  1. Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. For example, the specification for a control plane that consists of 3 Controller nodes is as follows:

    apiVersion: osp-director.openstack.org/v1beta2
    kind: OpenStackControlPlane
    metadata:
      name: overcloud
      namespace: openstack
    spec:
      openStackClientNetworks:
        - ctlplane
        - internal_api
        - external
      openStackClientStorageClass: host-nfs-storageclass
      passwordSecret: userpassword
      virtualMachineRoles:
        Controller:
          roleName: Controller
          roleCount: 3
          networks:
            - ctlplane
            - internal_api
            - external
            - tenant
            - storage
            - storage_mgmt
          cores: 12
          memory: 64
          rootDisk:
            diskSize: 500
            baseImageVolumeName: openstack-base-img
            # storageClass must support RWX to be able to live migrate VMs
            storageClass: host-nfs-storageclass
            storageAccessMode:  ReadWriteMany
            # When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
            # specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks.
            # With virtual machine disks, RBD block mode volumes are more efficient and provide better
            # performance than Ceph FS or RBD filesystem-mode PVCs.
            # To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and
            # VolumeMode: Block.
            storageVolumeMode: Filesystem
          # Optional: configure additional disks to attach to the VMs.
          # You must configure these disks manually inside the VMs before use.
          additionalDisks:
            - name: datadisk
              diskSize: 500
              storageClass: host-nfs-storageclass
              storageAccessMode:  ReadWriteMany
              storageVolumeMode: Filesystem
      openStackRelease: "16.2"

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the overcloud control plane, which is overcloud.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec

    Set the configuration for the control plane. For descriptions of the values you can use in this section, view the specification schema of the openstackcontrolplane custom resource definition (CRD):

    $ oc describe crd openstackcontrolplane

    Save the file when you have finished configuring the control plane specification.

  2. Create the control plane:

    $ oc create -f openstack-controller.yaml -n openstack

    Wait until OpenShift Container Platform creates the resources related to the OpenStackControlPlane resource.
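
    For example, you can watch the control plane resource and the pods in the openstack namespace while they are created. This is a minimal sketch that uses the resource name from the example specification:

    $ oc get openstackcontrolplane overcloud -n openstack -w
    $ oc get pods -n openstack -w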

    As part of the OpenStackControlPlane resource, the director Operator also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands.

Verification

  1. View the resource for the control plane:

    $ oc get openstackcontrolplane/overcloud -n openstack
  2. View the OpenStackVMSet resources to verify the creation of the control plane virtual machine set:

    $ oc get openstackvmsets -n openstack
  3. View the virtual machine resources to verify the creation of the control plane virtual machines in OpenShift Virtualization:

    $ oc get virtualmachines -n openstack
  4. Test access to the openstackclient remote shell:

    $ oc rsh -n openstack openstackclient
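  5. Optional: Run a single command in the openstackclient pod without opening an interactive shell. This is a sketch that assumes standard utilities, such as id, are available in the image:

    $ oc rsh -n openstack openstackclient id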

5.2. Creating a provisioning server with OpenStackProvisionServer (Optional)

Provisioning servers provide a specific Red Hat Enterprise Linux (RHEL) QCOW2 image for provisioning Compute nodes for Red Hat OpenStack Platform (RHOSP). The director Operator automatically creates an OpenStackProvisionServer for any OpenStackBaremetalSet you create. However, you can create an OpenStackProvisionServer manually and provide its name to any future OpenStackBaremetalSet resources.

The OpenStackProvisionServer creates an Apache server on the OpenShift Container Platform provisioning network for a specific RHEL QCOW2 image.

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.

Procedure

  1. Create a file named openstack-provision.yaml on your workstation. Include the resource specification for the Provisioning server. For example, the specification for a Provisioning server that uses a specific RHEL 8.4 QCOW2 image is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackProvisionServer
    metadata:
      name: openstack-provision-server
      namespace: openstack
    spec:
      baseImageUrl: http://<source_host>/rhel-guest-image-8.4-992.x86_64.qcow2
      port: 8080

    Set the following values in the resource specification:

    metadata.name
    Set a name to identify the OpenStackProvisionServer.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec.baseImageUrl
    Set the initial source of the RHEL QCOW2 image for the Provisioning server. The image is downloaded from this remote source when the server is created.
    spec.port
    Set to 8080 by default. You can change it for a specific port configuration.

    For further descriptions of the values you can use in this section, view the specification schema of the openstackprovisionserver custom resource definition (CRD):

    $ oc describe crd openstackprovisionserver

    Save the file when you have finished configuring the Provisioning server specification.

  2. Create the Provisioning server:

    $ oc create -f openstack-provision.yaml -n openstack

Verification

  1. View the resource for the Provisioning server:

    $ oc get openstackprovisionserver/openstack-provision-server -n openstack
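  2. Optional: Describe the resource to check its status and any reported conditions. The exact status fields depend on the installed version of the CRD:

    $ oc describe openstackprovisionserver/openstack-provision-server -n openstack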

5.3. Creating Compute nodes with OpenStackBaremetalSet

Compute nodes provide computing resources to your Red Hat OpenStack Platform environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment.

The OpenStackBaremetalSet custom resource creates Compute nodes from bare metal machines that OpenShift Container Platform manages.
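
For example, one way to scale after deployment is to patch the spec.count field of the resource that this procedure creates; you can also edit and reapply your YAML file. The resource name compute matches the example specification in this procedure:

    $ oc patch openstackbaremetalset compute -n openstack --type merge -p '{"spec":{"count":2}}'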

Prerequisites

  • Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
  • Ensure that you have installed the oc command line tool on your workstation.
  • Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.
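
The example specification in this procedure references an SSH key secret named osp-controlplane-ssh-keys through the deploymentSSHSecret field. If you have not created that secret yet, the following command is a minimal sketch; the key names id_rsa, id_rsa.pub, and authorized_keys are assumptions that you should verify against your director Operator documentation:

    $ oc create secret generic osp-controlplane-ssh-keys -n openstack \
        --from-file=id_rsa=<path_to_private_key> \
        --from-file=id_rsa.pub=<path_to_public_key> \
        --from-file=authorized_keys=<path_to_authorized_keys>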

Procedure

  1. Create a file named openstack-compute.yaml on your workstation. Include the resource specification for the Compute nodes. For example, the specification for 1 Compute node is as follows:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackBaremetalSet
    metadata:
      name: compute
      namespace: openstack
    spec:
      count: 1
      baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
      deploymentSSHSecret: osp-controlplane-ssh-keys
      # If you manually created an OpenStackProvisionServer, you can specify its name here;
      # otherwise, the director Operator creates one for you (using `baseImageUrl` as the image that it serves)
      # to use with this OpenStackBaremetalSet.
      # provisionServerName: openstack-provision-server
      ctlplaneInterface: enp2s0
      networks:
        - ctlplane
        - internal_api
        - tenant
        - storage
      roleName: Compute
      passwordSecret: userpassword

    Set the following values in the resource specification:

    metadata.name
    Set to the name of the Compute node bare metal set, which is compute.
    metadata.namespace
    Set to the director Operator namespace, which is openstack.
    spec

    Set the configuration for the Compute nodes. For descriptions of the values you can use in this section, view the specification schema of the openstackbaremetalset custom resource definition (CRD):

    $ oc describe crd openstackbaremetalset

    Save the file when you have finished configuring the Compute node specification.

  2. Create the Compute nodes:

    $ oc create -f openstack-compute.yaml -n openstack

Verification

  1. View the resource for the Compute nodes:

    $ oc get openstackbaremetalset/compute -n openstack
  2. View the bare metal machines that OpenShift Container Platform manages to verify the creation of the Compute nodes:

    $ oc get baremetalhosts -n openshift-machine-api
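  3. Optional: Describe the OpenStackBaremetalSet resource to inspect its status and any reported conditions. The exact status fields depend on the installed version of the CRD:

    $ oc describe openstackbaremetalset/compute -n openstack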