Chapter 15. Creating virtualized control planes

This chapter explains how to virtualize the Red Hat OpenStack Platform control plane by running it on Red Hat Virtualization.

15.1. Virtualized control planes

A virtualized control plane is a control plane located on virtual machines (VMs) rather than on bare metal. A virtualized control plane reduces the number of bare metal machines required for the control plane.

You can virtualize the control plane of your Red Hat OpenStack Platform overcloud using Red Hat Virtualization by deploying virtualized controllers as the control plane nodes. OpenStack Platform director supports provisioning an overcloud with Controller nodes deployed in a Red Hat Virtualization cluster.

Note

Virtualized Controller nodes are supported only on Red Hat Virtualization.

To deploy a virtualized control plane, deploy the overcloud with the Controller nodes running as VMs on Red Hat Virtualization and the Compute and storage nodes on bare metal, as illustrated in the following architecture diagram.

Note

A virtualized undercloud is also supported on Red Hat Virtualization, and Red Hat recommends installing the undercloud there as well.

Figure: Virtualized control plane architecture

The OpenStack Bare Metal Provisioning (ironic) service includes a driver for Red Hat Virtualization VMs, staging-ovirt, that you can use to manage virtual nodes within a Red Hat Virtualization environment. Use this driver to deploy overcloud controllers as virtual machines within a Red Hat Virtualization environment.
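Once the driver is enabled on the undercloud (see Section 15.2), you can inspect its enabled interfaces and the hosts it runs on with the bare metal client. A minimal sketch of that check, assuming the standard stack user environment:

    (undercloud) [stack@undercloud ~]$ openstack baremetal driver show staging-ovirt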

Benefits of virtualizing your Red Hat OpenStack Platform overcloud control plane

  • You can allocate resources to the virtualized controllers dynamically, using hot add and hot remove to scale CPU and memory as required, preventing downtime and facilitating increased capacity as the platform grows.
  • You can deploy additional infrastructure virtual machines on the same Red Hat Virtualization cluster, thereby minimizing the server footprint in the data center and maximizing efficiency of the physical nodes.
  • You can use composable roles to define more complex Red Hat OpenStack Platform control planes, allowing you to allocate resources to specific components of the control plane.
  • You can use virtual machine live migration to perform system maintenance without service interruption.
  • You can integrate third party or custom tools supported by Red Hat Virtualization.

Limitations of virtualizing your Red Hat OpenStack Platform overcloud control plane

  • Virtualized Ceph Storage nodes and Compute nodes are not supported.
  • Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat Virtualization does not support N_Port ID Virtualization (NPIV); therefore, Block Storage (cinder) drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, do not work. Red Hat recommends creating a dedicated role for cinder-volume rather than including it on the virtualized controllers, as sketched after this list. See Composable Services and Custom Roles for details on how to do this.
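As one way to implement that recommendation, you can generate a roles file that includes the default BlockStorage role, which runs the Block Storage volume service on dedicated bare-metal nodes instead of the controllers. This is a minimal sketch, assuming the default role names shipped with director and an example output path of /home/stack/roles_data.yaml:

    (undercloud) [stack@undercloud ~]$ openstack overcloud roles generate \
        -o /home/stack/roles_data.yaml \
        Controller Compute BlockStorage

After generating the file, remove the OS::TripleO::Services::CinderVolume entry from the Controller role so that cinder-volume does not run on the virtualized controllers, and pass the file to the deploy command with -r, as shown at the end of Section 15.2.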

15.2. Provisioning virtualized controllers using the Red Hat Virtualization driver

Prerequisites

  • An operational Red Hat Virtualization environment, with the Red Hat Virtualization Manager reachable from the undercloud.
  • The VMs that will host the Controller nodes created in Red Hat Virtualization; the registration step in the procedure references them by name.

Recommendations

  • To avoid performance bottlenecks, use composable roles and keep the data plane services on the bare-metal Controller nodes.
  • Set the internal BIOS clock of each node to UTC. This prevents issues with future-dated file timestamps when hwclock synchronizes the BIOS clock before applying the timezone offset. A quick verification is sketched after this list.
  • To deploy overcloud Compute nodes on POWER (ppc64le) hardware, see Appendix G. Red Hat OpenStack Platform for POWER.
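To verify the UTC recommendation on a deployed node, check that the hardware clock is not interpreted as local time. A minimal sketch using the standard systemd tooling:

    $ timedatectl | grep 'RTC in local TZ'
          RTC in local TZ: no
    $ sudo timedatectl set-local-rtc 0   # treat the hardware clock as UTC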

Procedure

  1. Enable the staging-ovirt driver in the director undercloud by adding the driver to enabled_hardware_types in the undercloud.conf configuration file:

    enabled_hardware_types = ipmi,redfish,ilo,idrac,staging-ovirt
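
     If the undercloud is already installed, re-run the installation so that the configuration change takes effect; the command is idempotent:

     (undercloud) [stack@undercloud ~]$ openstack undercloud install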
  2. Verify that the undercloud contains the staging-ovirt driver:

    (undercloud) [stack@undercloud ~]$ openstack baremetal driver list

    You should see the following result:

     +---------------------+-----------------------+
     | Supported driver(s) | Active host(s)        |
     +---------------------+-----------------------+
     | idrac               | localhost.localdomain |
     | ilo                 | localhost.localdomain |
     | ipmi                | localhost.localdomain |
     | pxe_drac            | localhost.localdomain |
     | pxe_ilo             | localhost.localdomain |
     | pxe_ipmitool        | localhost.localdomain |
     | redfish             | localhost.localdomain |
     | staging-ovirt       | localhost.localdomain |
     +---------------------+-----------------------+
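
     If you need a scriptable version of this check, the client supports standard output formatting; printing plain values and filtering for the driver name is enough:

     (undercloud) [stack@undercloud ~]$ openstack baremetal driver list -f value | grep staging-ovirt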
  3. Register the VMs hosted on Red Hat Virtualization with director by specifying them in the overcloud node definition template, for instance, nodes.json. See Registering Nodes for the Overcloud for details. Use the following key:value pairs to define aspects of the virtual machines that you want to deploy with your overcloud:

    Key          Value
    -----------  ---------------------------------------------------------------
    pm_type      The OpenStack Bare Metal Provisioning (ironic) service driver
                 for oVirt/RHV VMs: staging-ovirt.
    pm_user      The Red Hat Virtualization Manager username.
    pm_password  The Red Hat Virtualization Manager password.
    pm_addr      The hostname or IP address of the Red Hat Virtualization
                 Manager server.
    pm_vm_name   The name of the virtual machine in Red Hat Virtualization
                 Manager where the controller is created.

    For example:

    {
        "nodes": [
            {
                "name": "osp13-controller-0",
                "pm_type": "staging-ovirt",
                "mac": [
                    "00:1a:4a:16:01:56"
                ],
                "cpu": "2",
                "memory": "4096",
                "disk": "40",
                "arch": "x86_64",
                "pm_user": "admin@internal",
                "pm_password": "password",
                "pm_addr": "rhvm.example.com",
                "pm_vm_name": "osp13-controller-0",
                "capabilities": "profile:control,boot_option:local"
            }
        ]
    }

     Configure one controller on each Red Hat Virtualization Host.
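
     As a minimal sketch of the registration flow itself (covered in full in Registering Nodes for the Overcloud), import the definition file and make the nodes available for deployment; whether you also introspect the VMs depends on your environment:

     (undercloud) [stack@undercloud ~]$ openstack overcloud node import ~/nodes.json
     (undercloud) [stack@undercloud ~]$ openstack overcloud node introspect --all-manageable --provide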

  4. Configure an affinity group in Red Hat Virtualization with "soft negative affinity" to ensure high availability is implemented for your controller VMs. See Affinity Groups for details.
  5. Map each VLAN to a separate logical vNIC in the controller VMs using the Red Hat Virtualization Manager interface.
  6. Disable the MAC spoofing filter on the networks attached to the controller VMs by setting the no_filter network filter on the vNICs of the director and controller VMs, and then restarting the VMs. See Virtual Network Interface Cards for further details.
  7. Deploy the overcloud to include the new virtualized controller nodes in your environment:

    (undercloud) [stack@undercloud ~]$ openstack overcloud deploy --templates
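
     If you created the dedicated cinder-volume role sketched in Section 15.1, also pass the custom roles file; the path below is the example path from that sketch, and any environment files your deployment normally uses still apply:

     (undercloud) [stack@undercloud ~]$ openstack overcloud deploy --templates \
         -r /home/stack/roles_data.yaml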