Chapter 7. Configuring a basic overcloud

An overcloud with a basic configuration contains no custom features. To configure a basic Red Hat OpenStack Platform (RHOSP) environment, you must perform the following tasks:

  • Register the bare-metal nodes for your overcloud.
  • Provide director with an inventory of the hardware of the bare-metal nodes.
  • Tag each bare metal node with a resource class that matches the node to its designated role.
Tip

You can add advanced configuration options to this basic overcloud and customize it to your specifications. For more information, see Advanced Overcloud Customization.

7.1. Registering nodes for the overcloud

Director requires a node definition template that specifies the hardware and power management details of your nodes. You can create this template in JSON format (nodes.json) or YAML format (nodes.yaml).

Procedure

  1. Create a template named nodes.json or nodes.yaml that lists your nodes. Use the following JSON and YAML template examples to understand how to structure your node definition template:

    Example JSON template

    {
      "nodes": [{
        "name": "node01",
        "ports": [{
          "address": "aa:aa:aa:aa:aa:aa",
          "physical_network": "ctlplane",
          "local_link_connection": {
            "switch_id": "52:54:00:00:00:00",
            "port_id": "p0"
          }
        }],
        "cpu": "4",
        "memory": "6144",
        "disk": "40",
        "arch": "x86_64",
        "pm_type": "ipmi",
        "pm_user": "admin",
        "pm_password": "p@55w0rd!",
        "pm_addr": "192.168.24.205"
      },
      {
        "name": "node02",
        "ports": [{
          "address": "bb:bb:bb:bb:bb:bb",
          "physical_network": "ctlplane",
          "local_link_connection": {
            "switch_id": "52:54:00:00:00:00",
            "port_id": "p0"
          }
        }],
        "cpu": "4",
        "memory": "6144",
        "disk": "40",
        "arch": "x86_64",
        "pm_type": "ipmi",
        "pm_user": "admin",
        "pm_password": "p@55w0rd!",
        "pm_addr": "192.168.24.206"
      }]
    }

    Example YAML template

    nodes:
      - name: "node01"
        ports:
          - address: "aa:aa:aa:aa:aa:aa"
            physical_network: ctlplane
            local_link_connection:
              switch_id: 52:54:00:00:00:00
              port_id: p0
        cpu: 4
        memory: 6144
        disk: 40
        arch: "x86_64"
        pm_type: "ipmi"
        pm_user: "admin"
        pm_password: "p@55w0rd!"
        pm_addr: "192.168.24.205"
      - name: "node02"
        ports:
          - address: "bb:bb:bb:bb:bb:bb"
            physical_network: ctlplane
            local_link_connection:
              switch_id: 52:54:00:00:00:00
              port_id: p0
        cpu: 4
        memory: 6144
        disk: 40
        arch: "x86_64"
        pm_type: "ipmi"
        pm_user: "admin"
        pm_password: "p@55w0rd!"
        pm_addr: "192.168.24.206"

    This template contains the following attributes:

    name
    The logical name for the node.
    ports

    The network ports on the node. You can define the following port attributes:

    • address: The MAC address for the network interface on the node. Use only the MAC address for the Provisioning NIC of each system.
    • physical_network: The physical network that is connected to the Provisioning NIC.
    • local_link_connection: If you use IPv6 provisioning and LLDP does not correctly populate the local link connection during introspection, you must include fake data with the switch_id and port_id fields in the local_link_connection parameter. For more information on how to include fake data, see Using director introspection to collect bare metal node hardware information.
    cpu
    (Optional) The number of CPUs on the node.
    memory
    (Optional) The amount of memory in MB.
    disk
    (Optional) The size of the hard disk in GB.
    arch

    (Optional) The system architecture.

    Important

    When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes using x86_64 and ppc64le architectures.

    pm_type

    The power management driver that you want to use. This example uses the IPMI driver (ipmi).

    Note

    IPMI is the preferred supported power management driver. For more information about supported power management types and their options, see Power management drivers. If an alternative power management driver does not work as expected, use IPMI for your power management.

    pm_user, pm_password
    The IPMI username and password.
    pm_addr
    The IP address of the IPMI device.
  2. After you create the template, run the following commands to verify the formatting and syntax:

    $ source ~/stackrc
    (undercloud)$ openstack overcloud node import --validate-only ~/nodes.json
    Important

    You must also include the --http-boot /var/lib/ironic/tftpboot/ option for multi-architecture nodes.

  3. Save the file to the home directory of the stack user (/home/stack/nodes.json).
  4. Import the template to director to register each node from the template into director:

    (undercloud)$ openstack overcloud node import ~/nodes.json
    Note

    If you use UEFI boot mode, you must also set the boot mode on each node. If you introspect your nodes without setting UEFI boot mode, the nodes boot in legacy mode. For more information, see Section 7.4, “Setting the boot mode to UEFI mode”.

  5. Wait for the node registration and configuration to complete. When complete, confirm that director has successfully registered the nodes:

    (undercloud)$ openstack baremetal node list
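
    You can also review the details that director registered for a specific node. Replace <node> with the name or UUID of the node:

    (undercloud)$ openstack baremetal node show <node>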

7.2. Creating an inventory of the bare-metal node hardware

Director needs the hardware inventory of the nodes in your Red Hat OpenStack Platform (RHOSP) deployment for profile tagging, benchmarking, and manual root disk assignment.

You can provide the hardware inventory to director by using one of the following methods:

  • Automatic: You can use director’s introspection process, which collects the hardware information from each node. This process boots an introspection agent on each node. The introspection agent collects hardware data from the node and sends the data back to director. Director stores the hardware data in the Object Storage service (swift) running on the undercloud node.
  • Manual: You can manually configure a basic hardware inventory for each bare metal machine. This inventory is stored in the Bare Metal Provisioning service (ironic) and is used to manage and deploy the bare-metal machines.
Note

You must use director’s automatic introspection process if you use derive_params.yaml for your overcloud, because derive_params.yaml requires introspection data to be present. For more information on derive_params.yaml, see Workflows and derived parameters.

The director automatic introspection process provides the following advantages over the manual method for setting the Bare Metal Provisioning service ports:

  • Introspection records all of the connected ports in the hardware information, including the port to use for PXE boot if it is not already configured in nodes.yaml.
  • Introspection sets the local_link_connection attribute for each port if the attribute is discoverable using LLDP. When you use the manual method, you must configure local_link_connection for each port when you register the nodes.
  • Introspection sets the physical_network attribute for the Bare Metal Provisioning service ports when deploying a spine-and-leaf or DCN architecture.
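
For example, you can review the ports that the Bare Metal Provisioning service stores for a registered node at any time. Replace <node_uuid> with the UUID of the node:

(undercloud)$ openstack baremetal port list --node <node_uuid>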

7.2.1. Using director introspection to collect bare metal node hardware information

After you register a physical machine as a bare metal node, you can automatically add its hardware details and create ports for each of its Ethernet MAC addresses by using director introspection.

Tip

As an alternative to automatic introspection, you can manually provide director with the hardware information for your bare metal nodes. For more information, see Manually configuring bare metal node hardware information.

Prerequisites

  • You have registered the bare-metal nodes for your overcloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Run the pre-introspection validation group to check the introspection requirements:

    (undercloud)$ openstack tripleo validator run --group pre-introspection
  4. Review the results of the validation report.
  5. Optional: Review detailed output from a specific validation:

    (undercloud)$ openstack tripleo validator show run --full <validation>
    • Replace <validation> with the UUID of the specific validation from the report that you want to review.

      Important

      A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.

  6. Inspect the hardware attributes of each node. You can inspect the hardware attributes of all nodes, or specific nodes:

    • Inspect the hardware attributes of all nodes:

      (undercloud)$ openstack overcloud node introspect --all-manageable --provide
      • Use the --all-manageable option to introspect only the nodes that are in a manageable state. In this example, all nodes are in a manageable state.
      • Use the --provide option to reset all nodes to an available state after introspection.
    • Inspect the hardware attributes of specific nodes:

      (undercloud)$ openstack overcloud node introspect --provide <node1> [node2] [noden]
      • Use the --provide option to reset all the specified nodes to an available state after introspection.
      • Replace <node1>, [node2], and all nodes up to [noden] with the UUID of each node that you want to introspect.
  7. Monitor the introspection progress logs in a separate terminal window:

    (undercloud)$ sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log
    Important

    Ensure that the introspection process runs to completion. Introspection usually takes 15 minutes for bare metal nodes. However, incorrectly sized introspection networks can cause it to take much longer, which can result in the introspection failing.

  8. Optional: If you have configured your undercloud for bare metal provisioning over IPv6, you must also check that LLDP has set the local_link_connection for Bare Metal Provisioning service (ironic) ports:

    (undercloud)$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"
    • If the Local Link Connection field is empty for the port on your bare metal node, you must populate the local_link_connection value manually with fake data. The following example sets the fake switch ID to 52:54:00:00:00:00, and the fake port ID to p0:

      (undercloud)$ openstack baremetal port set <port_uuid> \
      --local-link-connection switch_id=52:54:00:00:00:00 \
      --local-link-connection port_id=p0
    • Verify that the Local Link Connection field contains the fake data:

      (undercloud)$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"

After the introspection completes, all nodes change to an available state.
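
For example, to confirm that the nodes are in the available state, filter the node list on the provisioning state:

(undercloud)$ openstack baremetal node list --provision-state available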

7.2.2. Manually configuring bare-metal node hardware information

After you register a physical machine as a bare metal node, you can manually add its hardware details and create bare-metal ports for each of its Ethernet MAC addresses. You must create at least one bare-metal port before deploying the overcloud.

Tip

As an alternative to manual introspection, you can use the automatic director introspection process to collect the hardware information for your bare metal nodes. For more information, see Using director introspection to collect bare metal node hardware information.

Prerequisites

  • You have registered the bare-metal nodes for your overcloud.
  • You have configured local_link_connection for each port on the registered nodes in nodes.json. For more information, see Registering nodes for the overcloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Set the boot option to local for each registered node by adding boot_option:local to the capabilities of the node:

    (undercloud)$ openstack baremetal node set \
     --property capabilities="boot_option:local" <node>
    • Replace <node> with the ID of the bare metal node.
  4. Specify the deploy kernel and deploy ramdisk for the node driver:

    (undercloud)$ openstack baremetal node set <node> \
      --driver-info deploy_kernel=<kernel_file> \
      --driver-info deploy_ramdisk=<initramfs_file>
    • Replace <node> with the ID of the bare metal node.
    • Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
    • Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
  5. Update the node properties to match the hardware specifications on the node:

    (undercloud)$ openstack baremetal node set <node> \
      --property cpus=<cpu> \
      --property memory_mb=<ram> \
      --property local_gb=<disk> \
      --property cpu_arch=<arch>
    • Replace <node> with the ID of the bare metal node.
    • Replace <cpu> with the number of CPUs.
    • Replace <ram> with the RAM in MB.
    • Replace <disk> with the disk size in GB.
    • Replace <arch> with the architecture type.
  6. Optional: Specify the IPMI cipher suite for each node:

    (undercloud)$ openstack baremetal node set <node> \
     --driver-info ipmi_cipher_suite=<version>
    • Replace <node> with the ID of the bare metal node.
    • Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:

      • 3 - The node uses the AES-128 with SHA1 cipher suite.
      • 17 - The node uses the AES-128 with SHA256 cipher suite.
  7. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:

    (undercloud)$ openstack baremetal node set <node> \
      --property root_device='{"<property>": "<value>"}'
    • Replace <node> with the ID of the bare metal node.
    • Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": "128"}'.

      RHOSP supports the following properties:

      • model (String): Device identifier.
      • vendor (String): Device vendor.
      • serial (String): Disk serial number.
      • hctl (String): Host:Channel:Target:Lun for SCSI.
      • size (Integer): Size of the device in GB.
      • wwn (String): Unique storage identifier.
      • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
      • wwn_vendor_extension (String): Unique vendor storage identifier.
      • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
      • name (String): The name of the device, for example: /dev/sdb1. Use this property only for devices with persistent names.

        Note

        If you specify more than one property, the device must match all of those properties.
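
        For example, the following sketch combines two hints so that only a non-rotational (SSD) disk of the specified size matches. The values are illustrative:

        (undercloud)$ openstack baremetal node set <node> \
          --property root_device='{"rotational": false, "size": 500}'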

  8. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:

    (undercloud)$ openstack baremetal port create --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the unique ID of the bare metal node.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  9. Validate the configuration of the node:

    (undercloud)$ openstack baremetal node validate <node>
    +------------+--------+---------------------------------------------+
    | Interface  | Result | Reason                                      |
    +------------+--------+---------------------------------------------+
    | boot       | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | console    | None   | not supported                               |
    | deploy     | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | inspect    | None   | not supported                               |
    | management | True   |                                             |
    | network    | True   |                                             |
    | power      | True   |                                             |
    | raid       | True   |                                             |
    | storage    | True   |                                             |
    +------------+--------+---------------------------------------------+

    The validation output Result indicates the following:

    • False: The interface has failed validation. If the reason given is that the instance_info parameters ['ramdisk', 'kernel', 'image_source'] are missing, this might be because the Compute service populates those parameters at the beginning of the deployment process, so they have not been set at this point. If you are using a whole-disk image, you might need to set only image_source to pass the validation.
    • True: The interface has passed validation.
    • None: The interface is not supported for your driver.

7.3. Tagging nodes into profiles

After you register and inspect the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, which assigns the flavors to deployment roles. The following example shows the relationships across roles, flavors, profiles, and nodes for Controller nodes:


Role

The Controller role defines how director configures Controller nodes.

Flavor

The control flavor defines the hardware profile for nodes to use as controllers. You assign this flavor to the Controller role so that director can decide which nodes to use.

Profile

The control profile is a tag you apply to the control flavor. This defines the nodes that belong to the flavor.

Node

You also apply the control profile tag to individual nodes, which groups them into the control flavor and, as a result, director configures them with the Controller role.

Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during undercloud installation and are usable without modification in most environments.

Procedure

  1. To tag a node into a specific profile, add a profile option to the properties/capabilities parameter of the node. For example, use the following commands to tag a node with a particular profile:

    (undercloud) $ NODE=<NODE NAME OR ID>
    (undercloud) $ PROFILE=<PROFILE NAME>
    (undercloud) $ openstack baremetal node set --property capabilities="profile:$PROFILE,boot_option:local" $NODE
    • Set the $NODE variable to the name or UUID of the node.
    • Set the $PROFILE variable to the specific profile, such as control or compute.
    • The profile option in properties/capabilities includes the $PROFILE variable to tag the node with the corresponding profile, such as profile:control or profile:compute.
    • Set the boot_option:local option to define how each node boots.

    You can also retain existing capabilities values using an additional openstack baremetal node show command and jq filtering:

    (undercloud) $ openstack baremetal node set --property capabilities="profile:$PROFILE,boot_option:local,$(openstack baremetal node show $NODE -f json -c properties | jq -r .properties.capabilities | sed "s/boot_mode:[^,]*,//g")" $NODE
  2. After you complete node tagging, check the assigned profiles or possible profiles:

    (undercloud) $ openstack overcloud profiles list

7.4. Setting the boot mode to UEFI mode

The default boot mode is Legacy BIOS mode. You can configure the nodes in your RHOSP deployment to use UEFI boot mode instead of Legacy BIOS boot mode.

Warning

Some hardware does not support Legacy BIOS boot mode. If you attempt to use Legacy BIOS boot mode on hardware that does not support it, your deployment might fail. To ensure that your hardware deploys successfully, use UEFI boot mode.

Note

If you enable UEFI boot mode, you must build your own whole-disk image that includes a partitioning layout and bootloader, along with the user image. For more information about creating whole-disk images, see Creating whole-disk images.

Procedure

  1. Set the following parameter in your undercloud.conf file:

    ipxe_enabled = True
  2. Save the undercloud.conf file and run the undercloud installation:

    $ openstack undercloud install

    Wait until the installation script completes.

  3. Check the existing capabilities of each registered node:

    $ openstack baremetal node show <node> -f json -c properties | jq -r .properties.capabilities
    • Replace <node> with the ID of the bare metal node.
  4. Set the boot mode to uefi for each registered node by adding boot_mode:uefi to the existing capabilities of the node:

    $ openstack baremetal node set --property capabilities="boot_mode:uefi,<capability_1>,...,<capability_n>" <node>
    • Replace <node> with the ID of the bare metal node.
    • Replace <capability_1>, and all capabilities up to <capability_n>, with each capability that you retrieved in step 3.

      For example, use the following command to set the boot mode to uefi with local boot:

    $ openstack baremetal node set --property capabilities="boot_mode:uefi,boot_option:local" <node>
  5. Set the boot mode to uefi for each bare metal flavor:

    $ openstack flavor set --property capabilities:boot_mode='uefi' <flavor>

7.5. Enabling virtual media boot

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.

Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.

To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.

Prerequisites

  • Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file.
  • A bare metal node registered and enrolled.
  • The ironic-python-agent (IPA) images and instance images in the Image Service (glance).
  • For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance).
  • A bare metal flavor.
  • A network for cleaning and provisioning.

Procedure

  1. Set the Bare Metal service (ironic) boot interface to redfish-virtual-media:

    $ openstack baremetal node set --boot-interface redfish-virtual-media $NODE_NAME
    • Replace $NODE_NAME with the name of the node.
  2. For UEFI nodes, set the boot mode to uefi:

    NODE=<NODE NAME OR ID> ; openstack baremetal node set --property capabilities="boot_mode:uefi,$(openstack baremetal node show $NODE -f json -c properties | jq -r .properties.capabilities | sed "s/boot_mode:[^,]*,//g")" $NODE
    • Replace $NODE with the name of the node.

      Note

      For BIOS nodes, do not complete this step.

  3. For UEFI nodes, define the EFI System Partition (ESP) image:

    $ openstack baremetal node set --driver-info bootloader=$ESP $NODE_NAME
    • Replace $ESP with the glance image UUID or URL for the ESP image, and replace $NODE_NAME with the name of the node.

      Note

      For BIOS nodes, do not complete this step.

  4. Create a port on the bare metal node and associate the port with the MAC address of the NIC on the bare metal node:

    $ openstack baremetal port create --pxe-enabled True --node $UUID $MAC_ADDRESS
    • Replace $UUID with the UUID of the bare metal node, and replace $MAC_ADDRESS with the MAC address of the NIC on the bare metal node.

7.6. Defining the root disk for multi-disk clusters

Most Ceph Storage nodes use multiple disks. When nodes use multiple disks, director must identify the root disk. By default, director writes the overcloud image to the root disk during the provisioning process.

Use this procedure to identify the root device by serial number. For more information about other properties you can use to identify the root disk, see Section 7.7, “Properties that identify the root disk”.

Procedure

  1. Verify the disk information from the hardware introspection of each node. The following command displays the disk information for a node:

    (undercloud)$ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq ".inventory.disks"

    For example, the data for one node might show three disks:

    [
      {
        "size": 299439751168,
        "rotational": true,
        "vendor": "DELL",
        "name": "/dev/sda",
        "wwn_vendor_extension": "0x1ea4dcc412a9632b",
        "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
        "model": "PERC H330 Mini",
        "wwn": "0x61866da04f380700",
        "serial": "61866da04f3807001ea4dcc412a9632b"
      },
      {
        "size": 299439751168,
        "rotational": true,
        "vendor": "DELL",
        "name": "/dev/sdb",
        "wwn_vendor_extension": "0x1ea4e13c12e36ad6",
        "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
        "model": "PERC H330 Mini",
        "wwn": "0x61866da04f380d00",
        "serial": "61866da04f380d001ea4e13c12e36ad6"
      },
      {
        "size": 299439751168,
        "rotational": true,
        "vendor": "DELL",
        "name": "/dev/sdc",
        "wwn_vendor_extension": "0x1ea4e31e121cfb45",
        "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
        "model": "PERC H330 Mini",
        "wwn": "0x61866da04f37fc00",
        "serial": "61866da04f37fc001ea4e31e121cfb45"
      }
    ]
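
    To narrow this output to the fields that are most useful as root device hints, you can extend the jq filter, for example:

    (undercloud)$ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq '.inventory.disks[] | {name, serial}'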
  2. On the undercloud, set the root disk for a node. Include the most appropriate hardware attribute value to define the root disk.

    (undercloud)$ openstack baremetal node set --property root_device='{"serial":"<serial_number>"}' <node-uuid>

    For example, to set the root device to disk 2, which has the serial number 61866da04f380d001ea4e13c12e36ad6, enter the following command:

    (undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
    Note

    Configure the BIOS of each node to boot from the root disk that you choose. Configure the boot order to boot from the network first, then from the root disk.

Director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, director provisions and writes the overcloud image to the root disk.

7.7. Properties that identify the root disk

There are several properties that you can define to help director identify the root disk:

  • model (String): Device identifier.
  • vendor (String): Device vendor.
  • serial (String): Disk serial number.
  • hctl (String): Host:Channel:Target:Lun for SCSI.
  • size (Integer): Size of the device in GB.
  • wwn (String): Unique storage identifier.
  • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
  • wwn_vendor_extension (String): Unique vendor storage identifier.
  • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
  • name (String): The name of the device, for example: /dev/sdb1.
Important

Use the name property only for devices with persistent names. Do not use name to set the root disk for any other devices because this value can change when the node boots.

7.8. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement

By default, director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image consumes a valid Red Hat subscription. However, you can also use the overcloud-minimal image, for example, to provision a bare OS where you do not want to run any other OpenStack services or consume your subscription entitlements.

A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this and similar use cases, you can use the overcloud-minimal image option to avoid reaching the limit of your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes.

Note

A Red Hat OpenStack Platform (RHOSP) subscription contains Open vSwitch (OVS), but core services, such as OVS, are not available when you use the overcloud-minimal image. OVS is not required to deploy Ceph Storage nodes. Use linux_bond instead of ovs_bond to define bonds. For more information about linux_bond, see Linux bonding options.

Procedure

  1. To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition:

    parameter_defaults:
      <roleName>Image: overcloud-minimal
  2. Replace <roleName> with the name of the role and append Image to the name of the role. The following example shows an overcloud-minimal image for Ceph storage nodes:

    parameter_defaults:
      CephStorageImage: overcloud-minimal
  3. In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False.

    rhsm_enforce: False
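
    The following sketch shows one way the parameter can sit in a CephStorage role definition. The surrounding keys are abbreviated and shown only for context:

    - name: CephStorage
      description: |
        Ceph Storage node role
      rhsm_enforce: False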
  4. Pass the environment file to the openstack overcloud deploy command.
Note

The overcloud-minimal image supports only standard Linux bridges and not OVS because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement.

7.9. Creating architecture specific roles

When building a multi-architecture cloud, you must add any architecture specific roles to the roles_data.yaml file. The following example includes the ComputePPC64LE role along with the default roles:

openstack overcloud roles generate \
    --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o ~/templates/roles_data.yaml \
    Controller Compute ComputePPC64LE BlockStorage ObjectStorage CephStorage

For more information about roles, see the Creating a Custom Role File section.

7.10. Environment files

The undercloud includes a set of heat templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core heat template collection. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources that you define in subsequent environment files take precedence. Use the following list as an example of the environment file order:

  • The number of nodes and the flavors for each role. It is vital to include this information for overcloud creation.
  • The location of the container images for containerized OpenStack services.
  • Any network isolation files, starting with the initialization file (environments/network-isolation.yaml) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations. For more information, see the Advanced Overcloud Customization guide.

  • Any external load balancing environment files if you are using an external load balancer. For more information, see External Load Balancing for the Overcloud.
  • Any storage environment files such as Ceph Storage, NFS, or iSCSI.
  • Any environment files for Red Hat CDN or Satellite registration.
  • Any other custom environment files.
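
An environment file typically contains a parameter_defaults section that overrides parameters in the core heat templates, and optionally a resource_registry section that maps heat resources to custom templates. The following is a minimal sketch; the NIC template path is a hypothetical example and the VLAN range is illustrative:

resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/compute-nic.yaml

parameter_defaults:
  NeutronNetworkVLANRanges: datacentre:1:1000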
Note

Open Virtual Networking (OVN) is the default networking mechanism driver in Red Hat OpenStack Platform 16.2. If you want to use OVN with distributed virtual routing (DVR), you must include the environments/services/neutron-ovn-dvr-ha.yaml file in the openstack overcloud deploy command. If you want to use OVN without DVR, you must include the environments/services/neutron-ovn-ha.yaml file in the openstack overcloud deploy command.

Red Hat recommends that you organize your custom environment files in a separate directory, such as the templates directory.

For more information about customizing advanced features for your overcloud, see the Advanced Overcloud Customization guide.

Important

A basic overcloud uses local LVM storage for block storage, which is not a supported configuration. It is recommended to use an external storage solution, such as Red Hat Ceph Storage, for block storage.

Note

The environment file extension must be .yaml or .template, or it will not be treated as a custom template resource.

The next few sections contain information about creating some environment files necessary for your overcloud.

7.11. Creating an environment file that defines node counts and flavors

By default, director deploys an overcloud with 1 Controller node and 1 Compute node using the baremetal flavor. However, this is only suitable for a proof-of-concept deployment. You can override the default configuration by specifying different node counts and flavors. For a small-scale production environment, deploy at least 3 Controller nodes and 3 Compute nodes, and assign specific flavors to ensure that the nodes have the appropriate resource specifications. Complete the following steps to create an environment file named node-info.yaml that stores the node counts and flavor assignments.

Procedure

  1. Create a node-info.yaml file in the /home/stack/templates/ directory:

    (undercloud) $ touch /home/stack/templates/node-info.yaml
  2. Edit the file to include the node counts and flavors that you need. This example contains 3 Controller nodes and 3 Compute nodes:

    parameter_defaults:
      OvercloudControllerFlavor: control
      OvercloudComputeFlavor: compute
      ControllerCount: 3
      ComputeCount: 3

7.12. Creating an environment file for undercloud CA trust

If your undercloud uses TLS with a Certificate Authority (CA) that is not publicly trusted, the undercloud uses the CA for the SSL encryption of its endpoints. To ensure that the undercloud endpoints are accessible to the rest of your deployment, configure your overcloud nodes to trust the undercloud CA.

Note

For this approach to work, your overcloud nodes must have a network route to the public endpoint on the undercloud. It is likely that you must apply this configuration for deployments that rely on spine-leaf networking.

There are two types of custom certificates you can use in the undercloud:

  • User-provided certificates - This definition applies when you have provided your own certificate. This can be from your own CA, or it can be self-signed. This is passed using the undercloud_service_certificate option. In this case, you must either trust the self-signed certificate, or the CA (depending on your deployment).
  • Auto-generated certificates - This definition applies when you use certmonger to generate the certificate using its own local CA. Enable auto-generated certificates with the generate_service_certificate option in the undercloud.conf file. In this case, director generates a CA certificate at /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and configures the undercloud’s HAProxy instance to use a server certificate. To present the CA certificate to OpenStack Platform, add it to the inject-trust-anchor-hiera.yaml file.

This example uses a self-signed certificate located in /home/stack/ca.crt.pem. If you use auto-generated certificates, use /etc/pki/ca-trust/source/anchors/cm-local-ca.pem instead.

Procedure

  1. Open the certificate file and copy only the certificate portion. Do not include the key:

    $ vi /home/stack/ca.crt.pem

    The certificate portion you need looks similar to this shortened example:

    -----BEGIN CERTIFICATE-----
    MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
    BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
    UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
    -----END CERTIFICATE-----
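
    Optional: To confirm that the file contains a valid PEM certificate before you copy it, you can inspect the file with openssl:

    $ openssl x509 -in /home/stack/ca.crt.pem -noout -subject -dates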
  2. Create a new YAML file called /home/stack/inject-trust-anchor-hiera.yaml with the following contents, and include the certificate you copied from the PEM file:

    parameter_defaults:
      CAMap:
        undercloud-ca:
          content: |
            -----BEGIN CERTIFICATE-----
            MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
            BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
            UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
            -----END CERTIFICATE-----
Note

The certificate string must follow the PEM format.

Note

The CAMap parameter might contain other certificates relevant to SSL/TLS configuration.

Director copies the CA certificate to each overcloud node during the overcloud deployment. As a result, each node trusts the encryption presented by the undercloud’s SSL endpoints. For more information about environment files, see Section 7.16, “Including environment files in an overcloud deployment”.

7.13. Disabling TSX on new deployments

From Red Hat Enterprise Linux 8.3 onwards, the kernel disables support for the Intel Transactional Synchronization Extensions (TSX) feature by default.

You must explicitly disable TSX for new overclouds unless you strictly require it for your workloads or third party vendors.

Set the KernelArgs heat parameter in an environment file.

parameter_defaults:
  ComputeParameters:
    KernelArgs: "tsx=off"

Include the environment file when you run your openstack overcloud deploy command.
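
For example, if you save the parameter in an environment file named /home/stack/templates/tsx-disable.yaml, which is a hypothetical name, pass the file to the deployment command with the -e option:

$ openstack overcloud deploy --templates \
    ...
    -e /home/stack/templates/tsx-disable.yaml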

7.14. Deployment command

The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command to create the overcloud. Before you run this command, familiarize yourself with key options and how to include custom environment files.

Warning

Do not run openstack overcloud deploy as a background process. The overcloud creation might hang mid-deployment if you run it as a background process.

7.15. Deployment command options

The following table lists the additional parameters for the openstack overcloud deploy command.

Important

Some options are available in this release as a Technology Preview and therefore are not fully supported by Red Hat. They should only be used for testing and should not be used in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Table 7.1. Deployment command options


--templates [TEMPLATES]

The directory that contains the heat templates that you want to deploy. If blank, the deployment command uses the default template location at /usr/share/openstack-tripleo-heat-templates/

--stack STACK

The name of the stack that you want to create or update

-t [TIMEOUT], --timeout [TIMEOUT]

The deployment timeout duration in minutes

--libvirt-type [LIBVIRT_TYPE]

The virtualization type that you want to use for hypervisors

--ntp-server [NTP_SERVER]

The Network Time Protocol (NTP) server that you want to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.ntp.org,1.centos.pool.ntp.org. For a high availability cluster deployment, it is essential that your Controller nodes are consistently referring to the same time source. Note that a typical environment might already have a designated NTP time source with established practices.

--no-proxy [NO_PROXY]

Defines custom values for the environment variable no_proxy, which excludes certain host names from proxy communication.

--overcloud-ssh-user OVERCLOUD_SSH_USER

Defines the SSH user to access the overcloud nodes. Normally SSH access occurs through the heat-admin user.

--overcloud-ssh-key OVERCLOUD_SSH_KEY

Defines the key path for SSH access to overcloud nodes.

--overcloud-ssh-network OVERCLOUD_SSH_NETWORK

Defines the network name that you want to use for SSH access to overcloud nodes.

-e [EXTRA HEAT TEMPLATE], --environment-file [ENVIRONMENT FILE]

Extra environment files that you want to pass to the overcloud deployment. You can specify this option more than once. Note that the order of environment files that you pass to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files.

--environment-directory

A directory that contains environment files that you want to include in deployment. The deployment command processes these environment files in numerical order, then alphabetical order.

-r ROLES_FILE

Defines the roles file and overrides the default roles_data.yaml in the --templates directory. The file location can be an absolute path or the path relative to --templates.

-n NETWORKS_FILE

Defines the networks file and overrides the default network_data.yaml in the --templates directory. The file location can be an absolute path or the path relative to --templates.

-p PLAN_ENVIRONMENT_FILE

Defines the plan Environment file and overrides the default plan-environment.yaml in the --templates directory. The file location can be an absolute path or the path relative to --templates.

--no-cleanup

Use this option if you do not want to delete temporary files after deployment, and log their location.

--update-plan-only

Use this option if you want to update the plan without performing the actual deployment.

--validation-errors-nonfatal

The overcloud creation process performs a set of pre-deployment checks and, by default, exits if any errors occur. Use this option to treat non-fatal errors from the pre-deployment checks as non-blocking and continue the deployment. Use caution, because unresolved errors can cause your deployment to fail.

--validation-warnings-fatal

The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks.

--dry-run

Use this option if you want to perform a validation check on the overcloud without creating the overcloud.

--run-validations

Use this option to run external validations from the openstack-tripleo-validations package.

--skip-postconfig

Use this option to skip the overcloud post-deployment configuration.

--force-postconfig

Use this option to force the overcloud post-deployment configuration.

--skip-deploy-identifier

Use this option if you do not want the deployment command to generate a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps only trigger if there is an actual change to the configuration. Use this option with caution and only if you are confident that you do not need to run the software configuration, such as scaling out certain roles.

--answers-file ANSWERS_FILE

The path to a YAML file with arguments and parameters.

--disable-password-generation

Use this option if you want to disable password generation for the overcloud services.

--deployed-server

Use this option if you want to deploy pre-provisioned overcloud nodes. Used in conjunction with --disable-validations.

--no-config-download, --stack-only

Use this option if you want to disable the config-download workflow and create only the stack and associated OpenStack resources. This command applies no software configuration to the overcloud.

--config-download-only

Use this option if you want to disable the overcloud stack creation and only run the config-download workflow to apply the software configuration.

--output-dir OUTPUT_DIR

The directory that you want to use for saved config-download output. The directory must be writeable by the mistral user. When not specified, director uses the default, which is /var/lib/mistral/overcloud.

--override-ansible-cfg OVERRIDE_ANSIBLE_CFG

The path to an Ansible configuration file. The configuration in the file overrides any configuration that config-download generates by default.

--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT

The timeout duration in minutes that you want to use for config-download steps. If unset, director sets the default to the amount of time remaining from the --timeout parameter after the stack deployment operation.

--limit NODE1,NODE2

Use this option with a comma-separated list of nodes to limit the config-download playbook execution to a specific node or set of nodes. For example, the --limit option can be useful for scale-up operations, when you want to run config-download only on new nodes. This argument might cause live migration of instances between hosts to fail. For more information, see Running config-download with the ansible-playbook-command.sh script.

--tags TAG1,TAG2

(Technology Preview) Use this option with a comma-separated list of tags from the config-download playbook to run the deployment with a specific set of config-download tasks.

--skip-tags TAG1,TAG2

(Technology Preview) Use this option with a comma-separated list of tags that you want to skip from the config-download playbook.

Run the following command to view a full list of options:

(undercloud) $ openstack help overcloud deploy

Some command line parameters are outdated or deprecated in favor of using heat template parameters, which you include in the parameter_defaults section in an environment file. The following table maps deprecated parameters to their heat template equivalents.

Table 7.2. Mapping deprecated CLI parameters to heat template parameters

Each entry lists the deprecated parameter, its description, and the equivalent heat template parameter.

--control-scale

The number of Controller nodes to scale out

ControllerCount

--compute-scale

The number of Compute nodes to scale out

ComputeCount

--ceph-storage-scale

The number of Ceph Storage nodes to scale out

CephStorageCount

--block-storage-scale

The number of Block Storage (cinder) nodes to scale out

BlockStorageCount

--swift-storage-scale

The number of Object Storage (swift) nodes to scale out

ObjectStorageCount

--control-flavor

The flavor that you want to use for Controller nodes

OvercloudControllerFlavor

--compute-flavor

The flavor that you want to use for Compute nodes

OvercloudComputeFlavor

--ceph-storage-flavor

The flavor that you want to use for Ceph Storage nodes

OvercloudCephStorageFlavor

--block-storage-flavor

The flavor that you want to use for Block Storage (cinder) nodes

OvercloudBlockStorageFlavor

--swift-storage-flavor

The flavor that you want to use for Object Storage (swift) nodes

OvercloudSwiftStorageFlavor

--validation-errors-fatal

The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option because any errors can cause your deployment to fail.

No parameter mapping

--disable-validations

Disable the pre-deployment validations entirely. These validations were built-in pre-deployment validations, which have been replaced with external validations from the openstack-tripleo-validations package.

No parameter mapping

--config-download

Run the deployment using the config-download mechanism. This is now the default, and this CLI option might be removed in the future.

No parameter mapping

--rhel-reg

Use this option to register overcloud nodes to the Customer Portal or Satellite 6.

RhsmVars

--reg-method

Use this option to define the registration method that you want to use for the overcloud nodes. satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for Customer Portal.

RhsmVars

--reg-org [REG_ORG]

The organization that you want to use for registration.

RhsmVars

--reg-force

Use this option to register the system even if it is already registered.

RhsmVars

--reg-sat-url [REG_SAT_URL]

The base URL of the Satellite server to register overcloud nodes. Use the Satellite HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If the server is a Red Hat Satellite 6 server, the overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If the server is a Red Hat Satellite 5 server, the overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.

RhsmVars

--reg-activation-key [REG_ACTIVATION_KEY]

Use this option to define the activation key that you want to use for registration.

RhsmVars

These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.
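
For example, instead of passing --control-scale 3 and --compute-scale 3 on the command line, you can set the equivalent heat template parameters in an environment file:

parameter_defaults:
  ControllerCount: 3
  ComputeCount: 3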

7.16. Including environment files in an overcloud deployment

Use the -e option to include an environment file to customize your overcloud. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources that you define in subsequent environment files take precedence.

Any environment files that you add to the overcloud using the -e option become part of the stack definition of the overcloud.

The following command is an example of how to start the overcloud creation using environment files defined earlier in this scenario:

(undercloud) $ openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/inject-trust-anchor-hiera.yaml \
  -r /home/stack/templates/roles_data.yaml

This command contains the following additional options:

--templates
Creates the overcloud using the heat template collection in /usr/share/openstack-tripleo-heat-templates as a foundation.
-e /home/stack/templates/node-info.yaml
Adds an environment file to define how many nodes and which flavors to use for each role.
-e /home/stack/containers-prepare-parameter.yaml
Adds the container image preparation environment file. You generated this file during the undercloud installation and can use the same file for your overcloud creation.
-e /home/stack/inject-trust-anchor-hiera.yaml
Adds an environment file to install a custom certificate in the undercloud.
-r /home/stack/templates/roles_data.yaml
(Optional) The generated roles data if you use custom roles or want to enable a multi architecture cloud. For more information, see Section 7.9, “Creating architecture specific roles”.

Director requires these environment files for re-deployment and post-deployment functions. Failure to include these files can result in damage to your overcloud.

To modify the overcloud configuration at a later stage, perform the following actions:

  1. Modify parameters in the custom environment files and heat templates.
  2. Run the openstack overcloud deploy command again with the same environment files.

Do not edit the overcloud configuration directly because director overrides any manual configuration when you update the overcloud stack.

7.17. Running the pre-deployment validation

Run the pre-deployment validation group to check the deployment requirements.

Procedure

  1. Source the stackrc file.

    $ source ~/stackrc
  2. This validation requires a copy of your overcloud plan. Upload your overcloud plan with all necessary environment files. To upload your plan only, run the openstack overcloud deploy command with the --update-plan-only option:

    $ openstack overcloud deploy --templates \
        -e environment-file1.yaml \
        -e environment-file2.yaml \
        ...
        --update-plan-only
  3. Run the openstack tripleo validator run command with the --group pre-deployment option:

    $ openstack tripleo validator run --group pre-deployment
  4. If the overcloud uses a plan name that is different to the default overcloud name, set the plan name with the --plan option:

    $ openstack tripleo validator run --group pre-deployment \
        --plan myovercloud
  5. Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report:

    $ openstack tripleo validator show run --full <UUID>
Important

A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.

7.18. Overcloud deployment output

When the overcloud creation completes, director provides a recap of the Ansible plays that were executed to configure the overcloud:

PLAY RECAP *************************************************************
overcloud-compute-0     : ok=160  changed=67   unreachable=0    failed=0
overcloud-controller-0  : ok=210  changed=93   unreachable=0    failed=0
undercloud              : ok=10   changed=7    unreachable=0    failed=0

Tuesday 15 October 2018  18:30:57 +1000 (0:00:00.107) 1:06:37.514 ******
========================================================================

Director also provides details to access your overcloud.

Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.24.113:5000
Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed

7.19. Accessing the overcloud

Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this file, overcloudrc, in the home directory of the stack user. Run the following command to use this file:

(undercloud) $ source ~/overcloudrc

This command loads the environment variables that are necessary to interact with your overcloud from the undercloud CLI. The command prompt changes to indicate this:

(overcloud) $
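
For example, you can now run commands against overcloud services, such as listing the overcloud networks:

(overcloud) $ openstack network list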

To return to interacting with the undercloud, run the following command:

(overcloud) $ source ~/stackrc
(undercloud) $

7.20. Running the post-deployment validation

Run the post-deployment validation group to check the post-deployment state.

Procedure

  1. Source the stackrc file.

    $ source ~/stackrc
  2. Run the openstack tripleo validator run command with the --group post-deployment option:

    $ openstack tripleo validator run --group post-deployment
  3. If the overcloud uses a plan name that is different to the default overcloud name, set the plan name with the --plan option:

    $ openstack tripleo validator run --group post-deployment \
        --plan myovercloud
  4. Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report:

    $ openstack tripleo validator show run --full <UUID>
Important

A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.