Chapter 26. Creating virtualized control planes

A virtualized control plane is a control plane located on virtual machines (VMs) rather than on bare metal. Use a virtualized control plane to reduce the number of bare metal machines that you require for the control plane.

This chapter explains how to virtualize your Red Hat OpenStack Platform (RHOSP) control plane for the overcloud using RHOSP and Red Hat Virtualization.

26.1. Virtualized control plane architecture

Use director to provision an overcloud using Controller nodes that are deployed in a Red Hat Virtualization cluster. You can then deploy these virtualized controllers as the virtualized control plane nodes.

Note

Virtualized Controller nodes are supported only on Red Hat Virtualization.

The following architecture diagram illustrates how to deploy a virtualized control plane. Distribute the overcloud with the Controller nodes running on VMs in Red Hat Virtualization, and run the Compute and Storage nodes on bare metal.

Note

Run the OpenStack virtualized undercloud on Red Hat Virtualization.

Virtualized control plane architecture

The OpenStack Bare Metal Provisioning service (ironic) includes a driver for Red Hat Virtualization VMs, staging-ovirt. You can use this driver to manage virtual nodes within a Red Hat Virtualization environment. You can also use it to deploy overcloud controllers as virtual machines within a Red Hat Virtualization environment.
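
For example, after you register a Red Hat Virtualization VM as a bare metal node (the node name below is illustrative), the Bare Metal Provisioning service manages its power state through the staging-ovirt driver in the same way as for a physical node:

    (undercloud) [stack@undercloud ~]$ openstack baremetal node show osp13-controller-0 --fields driver power_state
    (undercloud) [stack@undercloud ~]$ openstack baremetal node power on osp13-controller-0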

Benefits and limitations of virtualizing your RHOSP overcloud control plane

Although there are a number of benefits to virtualizing your RHOSP overcloud control plane, this is not an option in every configuration.

Benefits

Virtualizing the overcloud control plane has a number of benefits that prevent downtime and improve performance.

  • You can allocate resources to the virtualized controllers dynamically, using hot add and hot remove to scale CPU and memory as required. This prevents downtime and facilitates increased capacity as the platform grows.
  • You can deploy additional infrastructure VMs on the same Red Hat Virtualization cluster. This minimizes the server footprint in the data center and maximizes the efficiency of the physical nodes.
  • You can use composable roles to define more complex RHOSP control planes and allocate resources to specific components of the control plane.
  • You can maintain systems without service interruption with the VM live migration feature.
  • You can integrate third-party or custom tools that Red Hat Virtualization supports.

Limitations

Virtualized control planes limit the types of configurations that you can use.

  • Virtualized Ceph Storage nodes and Compute nodes are not supported.
  • Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block Storage (cinder) drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, do not work. You must create a dedicated role for cinder-volume and use the role to create physical nodes instead of including it on the virtualized controllers, as shown in the sketch after this list. For more information, see Composable Services and Custom Roles.
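
The following is a minimal sketch of one way to create such a dedicated role: generate a roles file that includes the predefined BlockStorage role, which runs the cinder-volume service on dedicated bare metal nodes. The output path and role selection are examples; adjust them for your deployment, and remove the OS::TripleO::Services::CinderVolume entry from the Controller role in the generated file if you want cinder-volume to run only on the dedicated nodes.

    (undercloud) [stack@undercloud ~]$ openstack overcloud roles generate \
      -o /home/stack/roles_data.yaml \
      Controller Compute BlockStorage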

26.2. Provisioning virtualized controllers using the Red Hat Virtualization driver

Complete the following steps to provision a virtualized RHOSP control plane for the overcloud using RHOSP and Red Hat Virtualization.

Prerequisites

  • You must have a 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
  • You must have the following software already installed and configured: Red Hat Virtualization, and Red Hat OpenStack Platform (RHOSP) with director.
  • You must have the virtualized Controller nodes prepared in advance. These requirements are the same as for bare metal Controller nodes. For more information, see Controller Node Requirements.
  • You must have the bare metal nodes that you use as overcloud Compute nodes and Storage nodes prepared in advance. For hardware specifications, see Compute Node Requirements and Ceph Storage Node Requirements. To deploy overcloud Compute nodes on POWER (ppc64le) hardware, see Red Hat OpenStack Platform for POWER.
  • You must have the logical networks created, and the host networks in your cluster ready to use network isolation with multiple networks. For more information, see Logical Networks.
  • You must have the internal BIOS clock of each node set to UTC to prevent issues with future-dated file timestamps when hwclock synchronizes the BIOS clock before applying the timezone offset. See the example after this list.
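
For example, on a node where the system clock is already correct, the following command, run as root on each node, writes the system time to the hardware clock and marks the hardware clock as UTC. This is one possible approach; use the method that fits your environment:

    # hwclock --systohc --utc
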
Tip

To avoid performance bottlenecks, use composable roles and keep the data plane services on the bare metal Controller nodes.

Procedure

  1. To enable the staging-ovirt driver in director, add the driver to the enabled_hardware_types parameter in the undercloud.conf configuration file:

    enabled_hardware_types = ipmi,redfish,ilo,idrac,staging-ovirt
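
    If you add the driver to an existing undercloud, rerun the undercloud installation command so that director applies the configuration change:

    (undercloud) [stack@undercloud ~]$ openstack undercloud install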
  2. Verify that the undercloud contains the staging-ovirt driver:

    (undercloud) [stack@undercloud ~]$ openstack baremetal driver list

    If you have configured the undercloud correctly, this command returns the following result:

     +---------------------+-----------------------+
     | Supported driver(s) | Active host(s)        |
     +---------------------+-----------------------+
     | idrac               | localhost.localdomain |
     | ilo                 | localhost.localdomain |
     | ipmi                | localhost.localdomain |
     | pxe_drac            | localhost.localdomain |
     | pxe_ilo             | localhost.localdomain |
     | pxe_ipmitool        | localhost.localdomain |
     | redfish             | localhost.localdomain |
     | staging-ovirt       | localhost.localdomain |
     +---------------------+-----------------------+
  3. Update the overcloud node definition template, for example, nodes.json, to register the VMs hosted on Red Hat Virtualization with director. For more information, see Registering Nodes for the Overcloud. Use the following key:value pairs to define aspects of the VMs that you want to deploy with your overcloud:

    Table 26.1. Configuring the VMs for the overcloud

    Key            Set to this value

    pm_type        OpenStack Bare Metal Provisioning (ironic) service driver for oVirt/RHV VMs: staging-ovirt.
    pm_user        Red Hat Virtualization Manager username.
    pm_password    Red Hat Virtualization Manager password.
    pm_addr        Hostname or IP address of the Red Hat Virtualization Manager server.
    pm_vm_name     Name of the virtual machine in Red Hat Virtualization Manager where the controller is created.

    For example:

    {
          "nodes": [
              {
                  "name":"osp13-controller-0",
                  "pm_type":"staging-ovirt",
                  "mac":[
                      "00:1a:4a:16:01:56"
                  ],
                  "cpu":"2",
                  "memory":"4096",
                  "disk":"40",
                  "arch":"x86_64",
                  "pm_user":"admin@internal",
                  "pm_password":"password",
                  "pm_addr":"rhvm.example.com",
                  "pm_vm_name":"{osp_curr_ver}-controller-0",
                  "capabilities": "profile:control,boot_option:local"
              },
              ...
          ]
    }

    Configure one Controller VM on each Red Hat Virtualization Host.
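
    After you update the node definition template, register and introspect the nodes as usual. For example, assuming the template is saved in your home directory:

    (undercloud) [stack@undercloud ~]$ openstack overcloud node import ~/nodes.json
    (undercloud) [stack@undercloud ~]$ openstack overcloud node introspect --all-manageable --provide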

  4. Configure an affinity group in Red Hat Virtualization with "soft negative affinity" to ensure that high availability is implemented for your controller VMs. For more information, see Affinity Groups.
  5. Open the Red Hat Virtualization Manager interface, and use it to map each VLAN to a separate logical vNIC in the controller VMs. For more information, see Logical Networks.
  6. To disable the MAC spoofing filter on the networks attached to the controller VMs, set no_filter in the vNIC of the director and controller VMs, and then restart the VMs. For more information, see Virtual Network Interface Cards.
  7. Deploy the overcloud to include the new virtualized controller nodes in your environment:

    (undercloud) [stack@undercloud ~]$ openstack overcloud deploy --templates
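
    If your deployment uses a custom roles file or network isolation, include the corresponding files in the deploy command. The following is a minimal sketch; the roles file and network environment file paths are examples, so replace them with the files for your environment:

    (undercloud) [stack@undercloud ~]$ openstack overcloud deploy --templates \
      -r /home/stack/roles_data.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e /home/stack/network-environment.yaml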