Chapter 5. Configuring Basic Overcloud Requirements

This chapter provides the basic configuration steps for an enterprise-level OpenStack Platform environment. An Overcloud with a basic configuration contains no custom features. However, you can add advanced configuration options to this basic Overcloud and customize it to your specifications using the instructions in Chapter 6, Configuring Advanced Customizations for the Overcloud.

For the examples in this chapter, all nodes are bare metal systems that use IPMI for power management. For additional supported power management types and their options, see Appendix B, Power Management Drivers.

Workflow

  1. Create a node definition template and register blank nodes in the director.
  2. Inspect hardware of all nodes.
  3. Tag nodes into roles.
  4. Define additional node properties.

Requirements

  • The director node created in Chapter 4, Installing the Undercloud
  • A set of bare metal machines for your nodes. The number of nodes required depends on the type of Overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for information on Overcloud roles). These machines must also comply with the requirements set for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These nodes do not require an operating system. The director copies a Red Hat Enterprise Linux 7 image to each node.
  • One network connection for the Provisioning network, which is configured as a native VLAN. All nodes must connect to this network and comply with the requirements set in Section 2.3, “Networking Requirements”. The examples in this chapter use 192.0.2.0/24 as the Provisioning subnet with the following IP address assignments:

    Table 5.1. Provisioning Network IP Assignments

    Node Name    IP Address     MAC Address         IPMI IP Address
    Director     192.0.2.1      aa:aa:aa:aa:aa:aa   None required
    Controller   DHCP defined   bb:bb:bb:bb:bb:bb   192.0.2.205
    Compute      DHCP defined   cc:cc:cc:cc:cc:cc   192.0.2.206

  • All other network types use the Provisioning network for OpenStack services. However, you can create additional networks for other network traffic types. For more information, see Section 6.2, “Isolating Networks”.

5.1. Registering Nodes for the Overcloud

The director requires a node definition template, which you create manually. This file (instackenv.json) uses JSON format and contains the hardware and power management details for your nodes. For example, a template for registering two nodes might look like this:

{
    "nodes":[
        {
            "mac":[
                "bb:bb:bb:bb:bb:bb"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "cc:cc:cc:cc:cc:cc"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        }
    ]
}

This template uses the following attributes:

pm_type
The power management driver to use. This example uses the IPMI driver (pxe_ipmitool), which is the preferred driver for power management.
pm_user; pm_password
The IPMI username and password.
pm_addr
The IP address of the IPMI device.
mac
(Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
cpu
(Optional) The number of CPUs on the node.
memory
(Optional) The amount of memory in MB.
disk
(Optional) The size of the hard disk in GB.
arch
(Optional) The system architecture.
Note

IPMI is the preferred power management driver. For additional supported power management types and their options, see Appendix B, Power Management Drivers. If an alternative power management driver does not work as expected, fall back to IPMI.
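
Before importing the template into the director, two optional sanity checks can catch common mistakes; neither is part of the documented procedure. The first validates the JSON syntax with Python's built-in json.tool module, and the second tests the IPMI credentials with ipmitool (install it if needed); the path and values shown are the example values used in this chapter:

$ python -m json.tool ~/instackenv.json
$ sudo yum install ipmitool
$ ipmitool -I lanplus -H 192.0.2.205 -U admin -P 'p@55w0rd!' power status

If the last command reports the chassis power state, the director should be able to control the node with the same credentials.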

After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json), then import it into the director using the following command:

$ openstack baremetal import --json ~/instackenv.json

This imports the template and registers each node from the template into the director.

Assign the kernel and ramdisk images to all nodes:

$ openstack baremetal configure boot

The nodes are now registered and configured in the director. View a list of these nodes in the CLI:

$ ironic node-list
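
To review the registration details of an individual node, such as the power management settings and the properties taken from the template, you can also run the following command, substituting a UUID from the node list:

$ ironic node-show [NODE UUID]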

5.2. Inspecting the Hardware of Nodes

The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.

Note

You can also create policy files to automatically tag nodes into profiles immediately after introspection. For more information on creating policy files and including them in the introspection process, see Appendix C, Automatic Profile Tagging. Alternatively, you can manually tag nodes into profiles as per the instructions in Section 5.3, “Tagging Nodes into Profiles”.

Run the following command to inspect the hardware attributes of each node:

$ openstack baremetal introspection bulk start

Monitor the progress of the introspection using the following command in a separate terminal window:

$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f

Important

Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.

Alternatively, perform a single introspection on each node individually. Set the node to management mode, perform the introspection, then move the node out of management mode:

$ ironic node-set-provision-state [NODE UUID] manage
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-provision-state [NODE UUID] provide
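
As an alternative to watching the journal, you can query the introspection status of a specific node. This is a minimal sketch that assumes the ironic-inspector client installed with the Undercloud provides the status subcommand:

$ openstack baremetal introspection status [NODE UUID]

The output indicates whether introspection has finished and whether it ended in an error.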

5.3. Tagging Nodes into Profiles

After registering and inspecting the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, and the flavors are in turn assigned to deployment roles. Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during Undercloud installation and are usable without modification in most environments.

Note

For a large number of nodes, use automatic profile tagging. See Appendix C, Automatic Profile Tagging for more details.

To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag your nodes to use Controller and Compute profiles respectively, use the following commands:

$ ironic node-update 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'

The addition of the profile:compute and profile:control options tags the two nodes into their respective profiles.

These commands also set the boot_option:local parameter, which defines the boot mode for each node.
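
Profile tags take effect because each default flavor created during the Undercloud installation carries a matching capabilities:profile property, which is compared against the node capabilities set above. As a sketch, you can inspect the compute flavor and, if needed, set the property yourself; the property values shown are assumptions that apply to an unmodified default flavor:

$ openstack flavor show compute
$ openstack flavor set --property "capabilities:profile"="compute" --property "capabilities:boot_option"="local" compute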

Important

The director currently does not support UEFI boot mode.

After completing node tagging, check the assigned profiles or possible profiles:

$ openstack overcloud profiles list

5.4. Defining the Root Disk for Nodes

Some nodes might use multiple disks. This means the director needs to identify the disk to use for the root disk during provisioning. There are several properties you can use to help the director identify the root disk:

  • model (String): Device identifier.
  • vendor (String): Device vendor.
  • serial (String): Disk serial number.
  • wwn (String): Unique storage identifier.
  • hctl (String): Host:Channel:Target:Lun for SCSI.
  • size (Integer): Size of the device in GB.

In this example, you use the serial number of the disk to identify the root device on which to deploy the Overcloud image.

First, collect a copy of each node’s hardware information that the director obtained during introspection. This information is stored in the OpenStack Object Storage (swift) service. Download this information to a new directory:

$ mkdir swift-data
$ cd swift-data
$ export IRONIC_DISCOVERD_PASSWORD=`sudo grep admin_password /etc/ironic-inspector/inspector.conf | awk '! /^#/ {print $NF}' | awk -F'=' '{print $2}'`
$ for node in $(ironic node-list | awk '!/UUID/ {print $2}'); do swift -U service:ironic -K $IRONIC_DISCOVERD_PASSWORD download ironic-inspector inspector_data-$node; done

This downloads the inspector_data object for each node from introspection. All objects use the node UUID as part of the object name:

$ ls -1
inspector_data-15fc0edc-eb8d-4c7f-8dc0-a2a25d5e09e3
inspector_data-46b90a4d-769b-4b26-bb93-50eaefcdb3f4
inspector_data-662376ed-faa8-409c-b8ef-212f9754c9c7
inspector_data-6fc70fe4-92ea-457b-9713-eed499eda206
inspector_data-9238a73a-ec8b-4976-9409-3fcff9a8dca3
inspector_data-9cbfe693-8d55-47c2-a9d5-10e059a14e07
inspector_data-ad31b32d-e607-4495-815c-2b55ee04cdb1
inspector_data-d376f613-bc3e-4c4b-ad21-847c4ec850f8

Check the disk information for each node. The following command displays each node ID and the disk information:

$ for node in $(ironic node-list | awk '!/UUID/ {print $2}'); do echo "NODE: $node" ; cat inspector_data-$node | jq '.inventory.disks' ; echo "-----" ; done

For example, the data for one node might show three disks:

NODE: 46b90a4d-769b-4b26-bb93-50eaefcdb3f4
[
  {
    "size": 1000215724032,
    "vendor": "ATA",
    "name": "/dev/sda",
    "model": "WDC WD1002F9YZ",
    "wwn": "0x0000000000000001",
    "serial": "WD-000000000001"
  },
  {
    "size": 1000215724032,
    "vendor": "ATA",
    "name": "/dev/sdb",
    "model": "WDC WD1002F9YZ",
    "wwn": "0x0000000000000002",
    "serial": "WD-000000000002"
  },
  {
    "size": 1000215724032,
    "vendor": "ATA",
    "name": "/dev/sdc",
    "model": "WDC WD1002F9YZ",
    "wwn": "0x0000000000000003",
    "serial": "WD-000000000003"
  }
]

For this example, set the root device to the second disk (/dev/sdb), which has WD-000000000002 as the serial number. This requires a change to the root_device parameter for the node definition:

$ ironic node-update 46b90a4d-769b-4b26-bb93-50eaefcdb3f4 add properties/root_device='{"serial": "WD-000000000002"}'

This helps the director identify the specific disk to use as the root disk. When you initiate the Overcloud creation, the director provisions this node and writes the Overcloud image to this disk.
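
The same root_device hint mechanism works with any of the properties listed at the start of this section. For example, as a sketch using the illustrative values from the sample output above, you could select the same disk by its wwn instead of its serial number:

$ ironic node-update 46b90a4d-769b-4b26-bb93-50eaefcdb3f4 add properties/root_device='{"wwn": "0x0000000000000002"}'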

Note

Make sure to configure the BIOS of each node to include booting from the chosen root disk. The recommended boot order is network boot, then root disk boot.

Important

Do not use name to set the root disk, because this value can change when the node boots.

5.5. Completing Basic Configuration

This concludes the required steps for basic configuration of your Overcloud. You can now either customize the Overcloud using the advanced configuration options in Chapter 6, Configuring Advanced Customizations for the Overcloud, or proceed to create the Overcloud with this basic configuration.

Important

A basic Overcloud uses local LVM storage for block storage, which is not a supported configuration. It is recommended to use an external storage solution for block storage. For example, see Section 6.7, “Configuring NFS Storage” for configuring an NFS share for block storage.