Chapter 2. Preparing overcloud nodes

All nodes in this scenario are bare metal systems using IPMI for power management. These nodes do not require an operating system because the director copies a Red Hat Enterprise Linux 8 image to each node. Additionally, the Ceph Storage services on these nodes are containerized. The director communicates with each node through the Provisioning network during the introspection and provisioning processes. All nodes connect to this network through the native VLAN.

2.1. Cleaning Ceph Storage node disks

The Ceph Storage OSDs and journal partitions require GPT disk labels. This means that the additional disks on Ceph Storage nodes require conversion to GPT before you install the Ceph OSD services. You must delete all metadata from the disks so that the director can set GPT labels on them.

You can configure the director to delete all disk metadata by default by adding the following setting to your /home/stack/undercloud.conf file:

clean_nodes=true

With this option, the Bare Metal Provisioning service runs an additional step to boot the nodes and clean the disks each time a node is set to available. This process adds an extra power cycle after the first introspection and before each deployment. The Bare Metal Provisioning service uses the wipefs --force --all command to perform the clean.

After you set this option, run the openstack undercloud install command to apply the configuration change.
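For reference, the following is a minimal sketch of how the setting fits into the /home/stack/undercloud.conf file, assuming that your file uses the standard [DEFAULT] section; keep your other existing settings in place:

[DEFAULT]
# Enable automated disk cleaning for overcloud nodes.
clean_nodes=true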

Warning

The wipefs --force --all command deletes all data and metadata on the disk, but does not perform a secure erase. A secure erase takes much longer.
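If you need to clean a disk manually, for example on a deployment where clean_nodes is not enabled, you can run the same command on the node from an environment that has access to the disk, such as a rescue or live image. The device path /dev/sdb is an example value; substitute the disk that you want to clean:

# wipefs --force --all /dev/sdb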

2.2. Registering nodes

Import a node inventory file (instackenv.json) in JSON format to the director so that the director can communicate with the nodes. This inventory file contains hardware and power management details that the director can use to register nodes:

{
    "nodes":[
        {
            "mac":[
                "b1:b1:b1:b1:b1:b1"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "b2:b2:b2:b2:b2:b2"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        },
        {
            "mac":[
                "b3:b3:b3:b3:b3:b3"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.207"
        },
        {
            "mac":[
                "c1:c1:c1:c1:c1:c1"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.208"
        },
        {
            "mac":[
                "c2:c2:c2:c2:c2:c2"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.209"
        },
        {
            "mac":[
                "c3:c3:c3:c3:c3:c3"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.210"
        },
        {
            "mac":[
                "d1:d1:d1:d1:d1:d1"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.211"
        },
        {
            "mac":[
                "d2:d2:d2:d2:d2:d2"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.212"
        },
        {
            "mac":[
                "d3:d3:d3:d3:d3:d3"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.213"
        }
    ]
}

Procedure

  1. After you create the inventory file, save the file to the home directory of the stack user (/home/stack/instackenv.json).
  2. Initialize the stack user, then import the instackenv.json inventory file into the director:

    $ source ~/stackrc
    $ openstack overcloud node import ~/instackenv.json

    The openstack overcloud node import command imports the inventory file and registers each node with the director.

  3. Assign the kernel and ramdisk images to each node:
    $ openstack overcloud node configure <node>

The nodes are now registered and configured in the director.
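To verify the registration, you can list the nodes and check their power and provision states:

$ openstack baremetal node list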

2.3. Manually tagging nodes into profiles

After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles.

To inspect and tag new nodes, complete the following steps:

  1. Trigger hardware introspection to retrieve the hardware attributes of each node:

    $ openstack overcloud node introspect --all-manageable --provide
    • The --all-manageable option introspects only the nodes that are in a manageable state. In this example, all nodes are in a manageable state.
    • The --provide option resets all nodes to an available state after introspection.

      Important

      Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.

  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  3. Add a profile option to the properties/capabilities parameter of each node to manually tag the node into a specific profile. To confirm the results, see the verification example after this procedure.

    Note

    As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.

    For example, a typical deployment contains three profiles: control, compute, and ceph-storage. Enter the following commands to tag three nodes for each profile:

    $ openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities="profile:control,boot_option:local"
    $ openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities="profile:control,boot_option:local"
    $ openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities="profile:control,boot_option:local"
    $ openstack baremetal node set 484587b2-b3b3-40d5-925b-a26a2fa3036f --property capabilities="profile:compute,boot_option:local"
    $ openstack baremetal node set d010460b-38f2-4800-9cc4-d69f0d067efe --property capabilities="profile:compute,boot_option:local"
    $ openstack baremetal node set d930e613-3e14-44b9-8240-4f3559801ea6 --property capabilities="profile:compute,boot_option:local"
    $ openstack baremetal node set da0cc61b-4882-45e0-9f43-fab65cf4e52b --property capabilities="profile:ceph-storage,boot_option:local"
    $ openstack baremetal node set b9f70722-e124-4650-a9b1-aade8121b5ed --property capabilities="profile:ceph-storage,boot_option:local"
    $ openstack baremetal node set 68bf8f29-7731-4148-ba16-efb31ab8d34f --property capabilities="profile:ceph-storage,boot_option:local"
    Tip

    You can also configure a new custom profile to tag a node for the Ceph MON and Ceph MDS services. For more information, see Chapter 3, Deploying Ceph services on dedicated nodes.
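After you tag the nodes, you can check the resulting assignments. One way to verify them, if the tripleo client plug-in is available on your undercloud, is the openstack overcloud profiles list command, which shows the current and possible profiles for each node:

$ openstack overcloud profiles list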

2.4. Defining the root disk for multi-disk clusters

When a node has multiple disks, the director must identify the root disk during provisioning. For example, most Ceph Storage nodes use multiple disks. By default, the director writes the overcloud image to the root disk during the provisioning process.

There are several properties that you can define to help the director identify the root disk:

  • model (String): Device identifier.
  • vendor (String): Device vendor.
  • serial (String): Disk serial number.
  • hctl (String): Host:Channel:Target:Lun for SCSI.
  • size (Integer): Size of the device in GB.
  • wwn (String): Unique storage identifier.
  • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
  • wwn_vendor_extension (String): Unique vendor storage identifier.
  • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
  • name (String): The name of the device, for example: /dev/sdb1.
Important

Use the name property only for devices with persistent names. Do not use name to set the root disk for any other devices because this value can change when the node boots.

Complete the following steps to specify the root device using its serial number.

Procedure

  1. Check the disk information from the hardware introspection of each node. Run the following command to display the disk information of a node:

    (undercloud) $ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq ".inventory.disks"

    For example, the data for one node might show three disks:

    [
      {
        "size": 299439751168,
        "rotational": true,
        "vendor": "DELL",
        "name": "/dev/sda",
        "wwn_vendor_extension": "0x1ea4dcc412a9632b",
        "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
        "model": "PERC H330 Mini",
        "wwn": "0x61866da04f380700",
        "serial": "61866da04f3807001ea4dcc412a9632b"
      },
      {
        "size": 299439751168,
        "rotational": true,
        "vendor": "DELL",
        "name": "/dev/sdb",
        "wwn_vendor_extension": "0x1ea4e13c12e36ad6",
        "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
        "model": "PERC H330 Mini",
        "wwn": "0x61866da04f380d00",
        "serial": "61866da04f380d001ea4e13c12e36ad6"
      },
      {
        "size": 299439751168,
        "rotational": true,
        "vendor": "DELL",
        "name": "/dev/sdc",
        "wwn_vendor_extension": "0x1ea4e31e121cfb45",
        "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
        "model": "PERC H330 Mini",
        "wwn": "0x61866da04f37fc00",
        "serial": "61866da04f37fc001ea4e31e121cfb45"
      }
    ]
  2. Run the openstack baremetal node set --property root_device= command to set the root disk for a node. Include the most appropriate hardware attribute value to define the root disk.

    (undercloud) $ openstack baremetal node set --property root_device='{"serial": "<serial_number>"}' <node-uuid>

    For example, to set the root device to disk 2, which has the serial number 61866da04f380d001ea4e13c12e36ad6, run the following command:

    (undercloud) $ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

Note

Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. Configure the boot order to boot from the network first, then to boot from the root disk.

Director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, director provisions and writes the overcloud image to the root disk.
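To confirm the setting, you can display the properties of the node, which include the root_device hint. The following command uses the example node UUID from the previous steps:

(undercloud) $ openstack baremetal node show 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --fields properties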

2.5. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement

By default, director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image requires a valid Red Hat subscription. However, you can use the overcloud-minimal image instead, for example, to provision a bare OS on nodes where you do not want to run any other OpenStack services or consume your subscription entitlements.

A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this and similar use cases, you can use the overcloud-minimal image option to avoid reaching the limit of your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes.

Note

A Red Hat OpenStack Platform subscription includes Open vSwitch (OVS), but core services such as OVS are not available when you use the overcloud-minimal image. OVS is not required to deploy Ceph Storage nodes. Instead of using ovs_bond to define bonds, use linux_bond. For more information about linux_bond, see Linux bonding options.
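For reference, the following is a minimal sketch of a linux_bond definition in a NIC configuration template. The bond name, bonding options, and member interfaces (nic2 and nic3) are example values that you must adapt to your own network configuration:

- type: linux_bond
  name: bond1
  bonding_options: "mode=active-backup"
  members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3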

Procedure

  1. To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition:

    parameter_defaults:
      <roleName>Image: overcloud-minimal
  2. Replace <roleName> with the name of the role and append Image to it. The following example sets an overcloud-minimal image for Ceph Storage nodes:

    parameter_defaults:
      CephStorageImage: overcloud-minimal
  3. Pass the environment file to the openstack overcloud deploy command.
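    For example, assuming that you saved the environment file as /home/stack/templates/overcloud-minimal-image.yaml, include it with the -e option alongside your other environment files:

    $ openstack overcloud deploy --templates \
      -e /home/stack/templates/overcloud-minimal-image.yaml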
Note

The overcloud-minimal image supports only standard Linux bridges and not OVS because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement.