Chapter 4. Prerequisites

Before the overcloud can be deployed, the undercloud must be deployed, and the hardware that will host the overcloud must be introspected by OpenStack’s bare metal provisioning service, Ironic.

4.1. Deploy the Undercloud

To deploy Red Hat OpenStack Platform director, also known as the undercloud, complete Chapter 4, Installing the undercloud, of the Red Hat document Director Installation and Usage. Be sure to complete the following sections of the referenced document before registering and introspecting hardware.

  • 4.1. Creating a Director Installation User
  • 4.2. Creating Directories for Templates and Images
  • 4.3. Setting the Hostname for the System
  • 4.4. Registering your System
  • 4.5. Installing the Director Packages
  • 4.6. Configuring the Director
  • 4.7. Obtaining Images for Overcloud Nodes
  • 4.8. Setting a Nameserver on the Undercloud’s Neutron Subnet

This reference architecture used the following undercloud.conf when completing section 4.6 of the above.

local_ip =
undercloud_public_vip =
undercloud_admin_vip =
local_interface = eth0
masquerade_network =
dhcp_start =
dhcp_end =
network_cidr =
network_gateway =

inspection_iprange =,
inspection_interface = br-ctlplane
inspection_runbench = true
inspection_extras = false
inspection_enable_uefi = false
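
Before running the undercloud installation, it can be useful to confirm which inspection options are set. The following is an illustrative sketch that extracts one setting from an undercloud.conf-style snippet; the sample contents mirror the options above with example values, not the deployed configuration.

```shell
# Sample undercloud.conf fragment (illustrative values only)
conf='inspection_interface = br-ctlplane
inspection_runbench = true
inspection_extras = false'

# Extract the value of inspection_runbench
runbench=$(echo "$conf" | awk -F' = ' '$1 == "inspection_runbench" {print $2}')
echo "inspection_runbench is $runbench"
```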

4.2. Register and Introspect Hardware

The registration and introspection of hardware requires a host definition file to provide the information that the OpenStack Ironic service needs to manage the hosts. The following host definition file, instackenv.json, provides an example of the servers being deployed in this reference architecture:

  {
    "nodes": [
       {
         "name": "m630_slot14",
         "pm_type": "pxe_ipmitool",
         "pm_user": "root",
         "pm_password": "PASSWORD",
         "pm_addr": "",
         "mac": [],
         "arch": "x86_64",
         "capabilities": "node:controller-0,boot_option:local"
       }
    ]
  }
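
Because a malformed host definition file causes the import step to fail, it is worth validating the JSON first. The following sketch uses the Python standard library's json.tool module against a minimal sample file; in practice the check would run against ~/instackenv.json.

```shell
# Write a minimal sample host definition file (illustrative content only)
cat > /tmp/instackenv-sample.json <<'EOF'
{"nodes": [{"name": "m630_slot14", "pm_type": "pxe_ipmitool"}]}
EOF

# json.tool exits nonzero on malformed JSON
python3 -m json.tool /tmp/instackenv-sample.json > /dev/null && result=valid || result=invalid
echo "$result"
```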

As shown in the example above, the capabilities entry contains both the server’s role and the server’s number within that role, e.g. controller-0. This allows node placement to be controlled predictably.

For this reference architecture, a custom role called osd-compute is created because servers in that role host both Ceph OSD and Nova Compute services. All servers used in the reference architecture are preassigned in Ironic as either controller or osd-compute nodes. The host definition file contains the following capabilities entries:

$ grep capabilities instackenv.json
	 "capabilities": "node:controller-0,boot_option:local"
	 "capabilities": "node:controller-1,boot_option:local"
	 "capabilities": "node:controller-2,boot_option:local"
	 "capabilities": "node:osd-compute-0,boot_option:local"
	 "capabilities": "node:osd-compute-1,boot_option:local"
	 "capabilities": "node:osd-compute-2,boot_option:local"
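
Since the capabilities entries above follow a simple role-plus-index pattern, they can be generated rather than written by hand for larger deployments. The sketch below is illustrative; the role name and count are examples.

```shell
# Generate one capabilities string per server for a given role
role="osd-compute"
count=3
entries=$(for i in $(seq 0 $((count - 1))); do
  echo "\"capabilities\": \"node:${role}-${i},boot_option:local\""
done)
echo "$entries"
```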

For more information on assigning node specific identification, see section 8.1. Assigning Specific Node IDs of the Red Hat document Advanced Overcloud Customization.

As an optional parameter, a descriptive name of the server may be provided in the JSON file. The name shown in the following indicates that the server is in a blade chassis in slot 14.

         "name": "m630_slot14",

To import the hosts described in ~/instackenv.json, complete the following steps:

  1. Populate the Ironic database with the file:
     openstack baremetal import ~/instackenv.json
  2. Verify that the Ironic database was populated with all of the servers:
     openstack baremetal node list
  3. Assign the kernel and ramdisk images to the imported servers:
     openstack baremetal configure boot
  4. Via Ironic, use IPMI to power on the servers, collect their properties, and record them in the Ironic database:
     openstack baremetal introspection bulk start

Bulk introspection time may vary based on node count and boot time. If inspection_runbench = false is set in ~/undercloud.conf, then introspection does not run the sysbench and fio benchmarks and store their results for each server. Though this shortens introspection, e.g. to less than five minutes for the seven nodes in this reference implementation, Red Hat OpenStack Platform director will not capture additional hardware metrics that may be useful.

  5. Verify that the nodes completed introspection without errors:
[stack@hci-director ~]$ openstack baremetal introspection bulk status
| Node UUID                            | Finished | Error |
| a94b75e3-369f-4b2d-b8cc-8ab272e23e89 | True     | None  |
| 7ace7b2b-b549-414f-b83e-5f90299b4af3 | True     | None  |
| 8be1d83c-19cb-4605-b91d-928df163b513 | True     | None  |
| e8411659-bc2b-4178-b66f-87098a1e6920 | True     | None  |
| 04679897-12e9-4637-9998-af8bee30b414 | True     | None  |
| 48b4987d-e778-48e1-ba74-88a08edf7719 | True     | None  |
[stack@hci-director ~]$
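
For more than a handful of nodes, visually scanning the status table is error-prone. The following sketch filters saved bulk-status output for nodes that reported an error; the sample table is illustrative, and its second row simulates a failure.

```shell
# Sample saved output of `openstack baremetal introspection bulk status`
# (illustrative; the second data row simulates a failed node)
status='| Node UUID                            | Finished | Error |
| a94b75e3-369f-4b2d-b8cc-8ab272e23e89 | True     | None  |
| 7ace7b2b-b549-414f-b83e-5f90299b4af3 | True     | Fail  |'

# Print the UUID of any row whose Error column is not "None"
failed=$(echo "$status" | awk -F'|' 'NR > 1 && $4 !~ /None/ {gsub(/ /, "", $2); print $2}')
echo "failed nodes: $failed"
```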

4.2.1. Set the Root Device

By default, Ironic images the first block device, identified as /dev/sda, with the operating system during deployment. This section covers how to change which block device is imaged, known as the root device, by using root device hints.

The Compute/OSD servers used for this reference architecture have the following hard disks with the following device file names as seen by the operating system:

  • Twelve 1117GB SAS hard disks presented as /dev/{sda, sdb, …​, sdl}
  • Three 400GB SATA SSD disks presented as /dev/{sdm, sdn, sdo}
  • Two 277GB SAS hard disks configured in RAID1 presented as /dev/sdp

The RAID1 pair hosts the operating system, while the twelve larger drives are configured as OSDs that journal to the SSDs. Because /dev/sda should be used for an OSD, Ironic needs to store which root device to use instead of the default.
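
The journal layout implied above is twelve OSD data disks sharing three SSD journals, i.e. four OSD journals per SSD. The sketch below makes the mapping explicit using the device names from the hardware list; the round-robin assignment is illustrative, not a Ceph requirement.

```shell
# Map each OSD index to its journal SSD (four OSDs per SSD)
ssd_for() {
  case $1 in
    0|1|2|3) echo sdm ;;
    4|5|6|7) echo sdn ;;
    *) echo sdo ;;
  esac
}

layout=""
i=0
for d in sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
  layout="$layout/dev/$d journals to /dev/$(ssd_for $i)
"
  i=$((i + 1))
done
printf '%s' "$layout"
```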

After introspection, Ironic stores the WWN and size of each server’s block device. Since the RAID1 pair is both the smallest disk and the disk that should be used for the root device, the openstack baremetal configure boot command may be run a second time, after introspection, as below:

 openstack baremetal configure boot --root-device=smallest

The above makes Ironic find the WWN of the smallest disk and then store a directive in its database to use that WWN for the root device when the server is deployed. Ironic does this for every server in its database. To verify that the directive was set for any particular server, run a command like the following:
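
The selection that the smallest hint performs can be sketched as a simple sort over the introspected disk sizes. The values below come from the hardware list in this section (sda stands in for the twelve SAS disks); the sketch only illustrates the selection logic, not Ironic's implementation.

```shell
# Pick the smallest disk by size in GB, as the "smallest" hint would
smallest=$(printf '%s\n' 'sda 1117' 'sdm 400' 'sdp 277' | sort -n -k2 | head -1)
echo "root device candidate: $smallest"
```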

[stack@hci-director ~]$ openstack baremetal node show r730xd_u33 | grep wwn
| properties             | {u'cpu_arch': u'x86_64', u'root_device': {u'wwn': u'0x614187704e9c7700'}, u'cpus': u'56', u'capabilities': u'node:osd-compute-2,cpu_hugepages:true,cpu_txt:true,boot_option:local,cpu_aes:true,cpu_vt:true,cpu_hugepages_1g:true', u'memory_mb': u'262144', u'local_gb': 277}                          |
[stack@hci-director ~]$

In the above example, u'root_device': {u'wwn': u'0x614187704e9c7700'} indicates that the root device is set to a specific WWN. The same command produces a similar result for each server. The server may be referred to by its name, as in the above example, but if the server does not have a name, then the UUID is used.

For the hardware used in this reference architecture, disk size was a simple way to tell Ironic how to set the root device. For other hardware, root device hints may instead be set using the vendor or model. If necessary, these values, in addition to the WWN and serial number, may be downloaded directly from Ironic’s Swift container and used to explicitly set the root device for each node. An example of how to do this may be found in section 5.4. Defining the Root Disk for Nodes of the Red Hat document Director Installation and Usage. If the root device of each node needs to be set explicitly, then a script may be written to automate setting this value for a large deployment. In the example above, however, a simple root device hint lets Ironic handle this automatically, even for a large number of nodes.
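
A minimal sketch of such a script is shown below, assuming a list of node names paired with disk WWNs. The node names and WWN values are placeholders, and the commands are printed rather than executed; the exact client syntax for setting the root_device property may vary by release.

```shell
# Emit one command per node that would set an explicit root_device hint
# (placeholder node names and WWNs; commands are printed, not run)
cmds=$(while read -r node wwn; do
  echo "ironic node-update $node add properties/root_device='{\"wwn\": \"$wwn\"}'"
done <<'EOF'
controller-0 0x614187704e9c7700
controller-1 0x614187704e9c7701
EOF
)
echo "$cmds"
```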