Chapter 8. Provisioning bare metal nodes before deploying the overcloud

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

The overcloud deployment process contains two primary operations:

  • Provisioning nodes
  • Deploying the overcloud

You can mitigate some of the risk involved with this process and identify points of failure more efficiently if you separate these operations into distinct processes:

  1. Provision your bare metal nodes.

    1. Create a node definition file in YAML format.
    2. Run the provisioning command, including the node definition file.
  2. Deploy your overcloud.

    1. Run the deployment command, including the heat environment file that the provisioning command generates.

The provisioning process provisions your nodes and generates a heat environment file that contains various node specifications, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this file in the deployment command.

Important

You cannot combine pre-provisioned nodes with director-provisioned nodes.

8.1. Registering nodes for the overcloud

Director requires a node definition template that specifies the hardware and power management details of your nodes. You can create this template in JSON format, nodes.json, or YAML format, nodes.yaml.

Procedure

  1. Create a template named nodes.json or nodes.yaml that lists your nodes. Use the following JSON and YAML template examples to understand how to structure your node definition template:

    Example JSON template

    {
    	"nodes": [{
    			"ports": [{
    				"address": "aa:aa:aa:aa:aa:aa",
    				"physical_network": "ctlplane",
    				"local_link_connection": {
    					"switch_id": "52:54:00:00:00:00",
    					"port_id": "p0"
    				}
    			}],
    			"name": "node01",
    			"cpu": "4",
    			"memory": "6144",
    			"disk": "40",
    			"arch": "x86_64",
    			"pm_type": "ipmi",
    			"pm_user": "admin",
    			"pm_password": "p@55w0rd!",
    			"pm_addr": "192.168.24.205"
    		},
    		{
    			"ports": [{
    				"address": "bb:bb:bb:bb:bb:bb",
    				"physical_network": "ctlplane",
    				"local_link_connection": {
    					"switch_id": "52:54:00:00:00:00",
    					"port_id": "p0"
    				}
    			}],
    			"name": "node02",
    			"cpu": "4",
    			"memory": "6144",
    			"disk": "40",
    			"arch": "x86_64",
    			"pm_type": "ipmi",
    			"pm_user": "admin",
    			"pm_password": "p@55w0rd!",
    			"pm_addr": "192.168.24.206"
    		}
    	]
    }

    Example YAML template

    nodes:
      - ports:
          - address: aa:aa:aa:aa:aa:aa
            physical_network: ctlplane
            local_link_connection:
              switch_id: 52:54:00:00:00:00
              port_id: p0
        name: "node01"
        cpu: 4
        memory: 6144
        disk: 40
        arch: "x86_64"
        pm_type: "ipmi"
        pm_user: "admin"
        pm_password: "p@55w0rd!"
        pm_addr: "192.168.24.205"
      - ports:
          - address: bb:bb:bb:bb:bb:bb
            physical_network: ctlplane
            local_link_connection:
              switch_id: 52:54:00:00:00:00
              port_id: p0
        name: "node02"
        cpu: 4
        memory: 6144
        disk: 40
        arch: "x86_64"
        pm_type: "ipmi"
        pm_user: "admin"
        pm_password: "p@55w0rd!"
        pm_addr: "192.168.24.206"

    This template contains the following attributes:

    name
    The logical name for the node.
    pm_type

    The power management driver that you want to use. This example uses the IPMI driver (ipmi).

    Note

    IPMI is the preferred supported power management driver. For more information about supported power management types and their options, see Chapter 30, Power management drivers. If these power management drivers do not work as expected, use IPMI for your power management.

    pm_user; pm_password
    The IPMI username and password.
    pm_addr
    The IP address of the IPMI device.
    pm_port
    (Optional) The port to access the specific IPMI device. For an example of a node entry that includes pm_port, see the snippet after this list of attributes.
    address
    (Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
    physical_network
    (Optional) The physical network that is connected to the Provisioning NIC.
    local_link_connection
    (Optional) If you use IPv6 provisioning and LLDP does not correctly populate the local link connection during introspection, you must include fake data with the switch_id and port_id fields in the local_link_connection parameter. For more information on how to include fake data, see Using director introspection to collect bare metal node hardware information.
    cpu
    (Optional) The number of CPUs on the node.
    memory
    (Optional) The amount of memory in MB.
    disk
    (Optional) The size of the hard disk in GB.
    arch

    (Optional) The system architecture.

    Important

    When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes using x86_64 and ppc64le architectures.
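
    If your IPMI device listens on a non-default port, you can add pm_port to the node entry. The port value in the following snippet is illustrative:

    nodes:
      - name: "node01"
        pm_type: "ipmi"
        pm_user: "admin"
        pm_password: "p@55w0rd!"
        pm_addr: "192.168.24.205"
        pm_port: "6230"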

  2. After you create the template, run the following commands to verify the formatting and syntax:

    $ source ~/stackrc
    (undercloud)$ openstack overcloud node import --validate-only ~/nodes.json
    Important

    You must also include the --http-boot /var/lib/ironic/tftpboot/ option for multi-architecture nodes.

  3. Save the file to the home directory of the stack user (/home/stack/nodes.json), then run the following command to import the template to director:

    (undercloud)$ openstack overcloud node import ~/nodes.json

    This command registers each node from the template into director. If you use UEFI boot mode, you must also set the boot mode on each node. If you introspect your nodes without setting UEFI boot mode, the nodes boot in legacy mode. For more information, see Setting the boot mode to UEFI boot mode.
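
    For example, one way to set UEFI boot mode is to add the boot_mode:uefi capability to the node; retain any capabilities that are already set on the node:

    (undercloud)$ openstack baremetal node set <node> \
      --property capabilities="boot_mode:uefi,boot_option:local"
    • Replace <node> with the ID of the bare metal node.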

  4. Wait for the node registration and configuration to complete. When complete, confirm that director has successfully registered the nodes:

    (undercloud)$ openstack baremetal node list

8.2. Creating an inventory of the bare-metal node hardware

Director needs the hardware inventory of the nodes in your Red Hat OpenStack Platform (RHOSP) deployment for profile tagging, benchmarking, and manual root disk assignment.

You can provide the hardware inventory to director by using one of the following methods:

  • Automatic: You can use director’s introspection process, which collects the hardware information from each node. This process boots an introspection agent on each node. The introspection agent collects hardware data from the node and sends the data back to director. Director stores the hardware data in the OpenStack internal database.
  • Manual: You can manually configure a basic hardware inventory for each bare metal machine. This inventory is stored in the Bare Metal Provisioning service (ironic) and is used to manage and deploy the bare-metal machines.

The director automatic introspection process provides the following advantages over the manual method for setting the Bare Metal Provisioning service ports:

  • Introspection records all of the connected ports in the hardware information, including the port to use for PXE boot if it is not already configured in nodes.yaml.
  • Introspection sets the local_link_connection attribute for each port if the attribute is discoverable using LLDP. When you use the manual method, you must configure local_link_connection for each port when you register the nodes.
  • Introspection sets the physical_network attribute for the Bare Metal Provisioning service ports when deploying a spine-and-leaf or DCN architecture.

8.2.1. Using director introspection to collect bare metal node hardware information

After you register a physical machine as a bare metal node, you can automatically add its hardware details and create ports for each of its Ethernet MAC addresses by using director introspection.

Tip

As an alternative to automatic introspection, you can manually provide director with the hardware information for your bare metal nodes. For more information, see Manually configuring bare metal node hardware information.

Prerequisites

  • You have registered the bare-metal nodes for your overcloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Run the pre-introspection validation group to check the introspection requirements:

    (undercloud)$ openstack tripleo validator run --group pre-introspection
  4. Review the results of the validation report.
  5. Optional: Review detailed output from a specific validation:

    (undercloud)$ openstack tripleo validator show run --full <validation>
    • Replace <validation> with the UUID of the specific validation from the report that you want to review.

      Important

      A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.

  6. Inspect the hardware attributes of each node. You can inspect the hardware attributes of all nodes, or specific nodes:

    • Inspect the hardware attributes of all nodes:

      (undercloud)$ openstack overcloud node introspect --all-manageable --provide
      • Use the --all-manageable option to introspect only the nodes that are in a manageable state. In this example, all nodes are in a manageable state.
      • Use the --provide option to reset all nodes to an available state after introspection.
    • Inspect the hardware attributes of specific nodes:

      (undercloud)$ openstack overcloud node introspect --provide <node1> [node2] [noden]
      • Use the --provide option to reset all the specified nodes to an available state after introspection.
      • Replace <node1>, [node2], and all nodes up to [noden] with the UUID of each node that you want to introspect.
  7. Monitor the introspection progress logs in a separate terminal window:

    (undercloud)$ sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log
    Important

    Ensure that the introspection process runs to completion. Introspection usually takes 15 minutes for bare metal nodes. However, incorrectly sized introspection networks can cause it to take much longer, which can result in the introspection failing.

  8. Optional: If you have configured your undercloud for bare metal provisioning over IPv6, then you need to also check that LLDP has set the local_link_connection for Bare Metal Provisioning service (ironic) ports:

    (undercloud)$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"
    • If the Local Link Connection field is empty for the port on your bare metal node, you must populate the local_link_connection value manually with fake data. The following example sets the fake switch ID to 52:54:00:00:00:00, and the fake port ID to p0:

      (undercloud)$ openstack baremetal port set <port_uuid> \
      --local-link-connection switch_id=52:54:00:00:00:00 \
      --local-link-connection port_id=p0
    • Verify that the Local Link Connection field contains the fake data:

      (undercloud)$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"

After the introspection completes, all nodes change to an available state.
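
For example, you can confirm the state of each node by using the node list command; the column selection shown here is optional:

(undercloud)$ openstack baremetal node list -c UUID -c Name -c "Provisioning State"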

8.2.2. Manually configuring bare-metal node hardware information

After you register a physical machine as a bare metal node, you can manually add its hardware details and create bare-metal ports for each of its Ethernet MAC addresses. You must create at least one bare-metal port before deploying the overcloud.

Tip

As an alternative to manually configuring the hardware information, you can use the automatic director introspection process to collect the hardware information for your bare metal nodes. For more information, see Using director introspection to collect bare metal node hardware information.

Prerequisites

  • You have registered the bare-metal nodes for your overcloud.
  • You have configured local_link_connection for each port on the registered nodes in nodes.json. For more information, see Registering nodes for the overcloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Set the boot option to local for each registered node by adding boot_option:local to the capabilities of the node:

    (undercloud)$ openstack baremetal node set \
     --property capabilities="boot_option:local" <node>
    • Replace <node> with the ID of the bare metal node.
  4. Specify the deploy kernel and deploy ramdisk for the node driver:

    (undercloud)$ openstack baremetal node set <node> \
      --driver-info deploy_kernel=<kernel_file> \
      --driver-info deploy_ramdisk=<initramfs_file>
    • Replace <node> with the ID of the bare metal node.
    • Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
    • Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
  5. Update the node properties to match the hardware specifications on the node:

    (undercloud)$ openstack baremetal node set <node> \
      --property cpus=<cpu> \
      --property memory_mb=<ram> \
      --property local_gb=<disk> \
      --property cpu_arch=<arch>
    • Replace <node> with the ID of the bare metal node.
    • Replace <cpu> with the number of CPUs.
    • Replace <ram> with the RAM in MB.
    • Replace <disk> with the disk size in GB.
    • Replace <arch> with the architecture type.
  6. Optional: Specify the IPMI cipher suite for each node:

    (undercloud)$ openstack baremetal node set <node> \
     --driver-info ipmi_cipher_suite=<version>
    • Replace <node> with the ID of the bare metal node.
    • Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:

      • 3 - The node uses the AES-128 with SHA1 cipher suite.
      • 17 - The node uses the AES-128 with SHA256 cipher suite.
  7. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:

    (undercloud)$ openstack baremetal node set <node> \
      --property root_device='{"<property>": "<value>"}'
    • Replace <node> with the ID of the bare metal node.
    • Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": "128"}'.

      RHOSP supports the following properties:

      • model (String): Device identifier.
      • vendor (String): Device vendor.
      • serial (String): Disk serial number.
      • hctl (String): Host:Channel:Target:Lun for SCSI.
      • size (Integer): Size of the device in GB.
      • wwn (String): Unique storage identifier.
      • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
      • wwn_vendor_extension (String): Unique vendor storage identifier.
      • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
      • name (String): The name of the device, for example, /dev/sdb1. Use this property only for devices with persistent names.

        Note

        If you specify more than one property, the device must match all of those properties.

  8. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:

    (undercloud)$ openstack baremetal port create --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the unique ID of the bare metal node.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  9. Validate the configuration of the node:

    (undercloud)$ openstack baremetal node validate <node>
    +------------+--------+---------------------------------------------+
    | Interface  | Result | Reason                                      |
    +------------+--------+---------------------------------------------+
    | boot       | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | console    | None   | not supported                               |
    | deploy     | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | inspect    | None   | not supported                               |
    | management | True   |                                             |
    | network    | True   |                                             |
    | power      | True   |                                             |
    | raid       | True   |                                             |
    | storage    | True   |                                             |
    +------------+--------+---------------------------------------------+

    The validation output Result indicates the following:

    • False: The interface has failed validation. If the reason provided includes the missing instance_info parameters ['ramdisk', 'kernel', and 'image_source'], this might be because the Compute service populates those parameters at the beginning of the deployment process, so they are not set at this point. If you are using a whole disk image, you might need to set only image_source to pass the validation, as shown in the example after this list.
    • True: The interface has passed validation.
    • None: The interface is not supported for your driver.
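
    If you need to set image_source manually to pass the validation, you can set it in the instance_info of the node, for example:

    (undercloud)$ openstack baremetal node set <node> \
      --instance-info image_source=<image>
    • Replace <node> with the ID of the bare metal node.
    • Replace <image> with the URL or glance image reference of the whole disk image.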

8.3. Provisioning bare metal nodes

Create a new YAML file ~/overcloud-baremetal-deploy.yaml, define the quantity and attributes of the bare metal nodes that you want to deploy, and assign overcloud roles to these nodes. The provisioning process creates a heat environment file that you can include in your openstack overcloud deploy command.

Prerequisites

  • The bare metal nodes are registered, introspected, and available for provisioning.

Procedure

  1. Source the stackrc undercloud credential file:

    $ source ~/stackrc
  2. Create a new ~/overcloud-baremetal-deploy.yaml file and define the node count for each role that you want to provision. For example, to provision three Controller nodes and three Compute nodes, use the following syntax:

    - name: Controller
      count: 3
    - name: Compute
      count: 3
  3. In the ~/overcloud-baremetal-deploy.yaml file, define any predictive node placements, custom images, custom NICs, or other attributes that you want to assign to your nodes. For example, use the following example syntax to provision three Controller nodes on nodes node00, node01, and node02, and three Compute nodes on node04, node05, and node06:

    - name: Controller
      count: 3
      instances:
      - hostname: overcloud-controller-0
        name: node00
      - hostname: overcloud-controller-1
        name: node01
      - hostname: overcloud-controller-2
        name: node02
    - name: Compute
      count: 3
      instances:
      - hostname: overcloud-novacompute-0
        name: node04
      - hostname: overcloud-novacompute-1
        name: node05
      - hostname: overcloud-novacompute-2
        name: node06

    By default, the provisioning process uses the overcloud-full image. You can use the image attribute in the instances parameter to define a custom image:

    - name: Controller
      count: 3
      instances:
      - hostname: overcloud-controller-0
        name: node00
        image:
          href: overcloud-custom

    You can also use the defaults parameter to set default attribute values for a role, which avoids repeating the same definitions for each node entry:

    - name: Controller
      count: 3
      defaults:
        image:
          href: overcloud-custom
      instances:
      - hostname: overcloud-controller-0
        name: node00
      - hostname: overcloud-controller-1
        name: node01
      - hostname: overcloud-controller-2
        name: node02

    For more information about the parameters, attributes, and values that you can use in your node definition file, see Bare metal node provisioning attributes.

  4. Run the provisioning command, specifying the ~/overcloud-baremetal-deploy.yaml file and defining an output file with the --output option:

    (undercloud)$ openstack overcloud node provision \
    --stack stack \
    --output ~/overcloud-baremetal-deployed.yaml \
    ~/overcloud-baremetal-deploy.yaml

    The provisioning process generates a heat environment file with the name that you specify in the --output option. This file contains your node definitions. When you deploy the overcloud, include this file in the deployment command.

  5. In a separate terminal, monitor your nodes to verify that they provision successfully. The provisioning process changes the node state from available to active:

    (undercloud)$ watch openstack baremetal node list

    Use the metalsmith tool to obtain a unified view of your nodes, including allocations and neutron ports:

    (undercloud)$ metalsmith list

    You can also use the openstack baremetal allocation command to verify association of nodes to hostnames:

    (undercloud)$ openstack baremetal allocation list

When your nodes are provisioned successfully, you can deploy the overcloud. For more information, see Configuring a basic overcloud with pre-provisioned nodes.

8.4. Scaling up bare metal nodes

To increase the count of bare metal nodes in an existing overcloud, increment the node count in the ~/overcloud-baremetal-deploy.yaml file and redeploy the overcloud.

Prerequisites

  • You have an existing overcloud that you provisioned and deployed by using the ~/overcloud-baremetal-deploy.yaml node definition file.

Procedure

  1. Source the stackrc undercloud credential file:

    $ source ~/stackrc
  2. Edit the ~/overcloud-baremetal-deploy.yaml file that you used to provision your bare metal nodes, and increment the count parameter for the roles that you want to scale up. For example, if your overcloud contains three Compute nodes, use the following snippet to increase the Compute node count to 10:

    - name: Controller
      count: 3
    - name: Compute
      count: 10

    You can also add predictive node placement with the instances parameter. For more information about the parameters and attributes that are available, see Bare metal node provisioning attributes.
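
    For example, the following snippet places one of the additional Compute nodes predictively; the hostname and node name are illustrative:

    - name: Controller
      count: 3
    - name: Compute
      count: 10
      instances:
      - hostname: overcloud-novacompute-9
        name: node10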

  3. Run the provisioning command, specifying the ~/overcloud-baremetal-deploy.yaml file and defining an output file with the --output option:

    (undercloud)$ openstack overcloud node provision \
    --stack stack \
    --output ~/overcloud-baremetal-deployed.yaml \
    ~/overcloud-baremetal-deploy.yaml
  4. Monitor the provisioning progress with the openstack baremetal node list command.
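
    For example, to refresh the node list while the nodes provision:

    (undercloud)$ watch openstack baremetal node list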
  5. Deploy the overcloud, including the ~/overcloud-baremetal-deployed.yaml file that the provisioning command generates, along with any other environment files relevant to your deployment:

    (undercloud)$ openstack overcloud deploy \
      ...
      -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
      -e ~/overcloud-baremetal-deployed.yaml \
      --deployed-server \
      --disable-validations \
      ...

8.5. Scaling down bare metal nodes

Tag the nodes that you want to delete from the stack in the ~/overcloud-baremetal-deploy.yaml file, redeploy the overcloud, and then include this file in the openstack overcloud node delete command with the --baremetal-deployment option.

Prerequisites

  • You have an existing overcloud that you provisioned and deployed by using the ~/overcloud-baremetal-deploy.yaml node definition file.

Procedure

  1. Source the stackrc undercloud credential file:

    $ source ~/stackrc
  2. Edit the ~/overcloud-baremetal-deploy.yaml file that you used to provision your bare metal nodes, and decrement the count parameter for the roles that you want to scale down. You must also define the following attributes for each node that you want to remove from the stack:

    • The name of the node.
    • The hostname that is associated with the node.
    • The attribute provisioned: false.

      For example, to remove the node overcloud-controller-1 from the stack, include the following snippet in your ~/overcloud-baremetal-deploy.yaml file:

      - name: Controller
        count: 2
        instances:
        - hostname: overcloud-controller-0
          name: node00
        - hostname: overcloud-controller-1
          name: node01
          # Removed from cluster due to disk failure
          provisioned: false
        - hostname: overcloud-controller-2
          name: node02
  3. Run the provisioning command, specifying the ~/overcloud-baremetal-deploy.yaml file and defining an output file with the --output option:

    (undercloud)$ openstack overcloud node provision \
    --stack stack \
    --output ~/overcloud-baremetal-deployed.yaml \
    ~/overcloud-baremetal-deploy.yaml
  4. Redeploy the overcloud and include the ~/overcloud-baremetal-deployed.yaml file that the provisioning command generates, along with any other environment files relevant to your deployment:

    (undercloud)$ openstack overcloud deploy \
      ...
      -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
      -e ~/overcloud-baremetal-deployed.yaml \
      --deployed-server \
      --disable-validations \
      ...

    After you redeploy the overcloud, the nodes that you define with the provisioned: false attribute are no longer present in the stack. However, these nodes are still running in a provisioned state.

    Note

    If you want to remove a node from the stack temporarily, you can deploy the overcloud with the attribute provisioned: false and then redeploy the overcloud with the attribute provisioned: true to return the node to the stack.

  5. Run the openstack overcloud node delete command, including the ~/overcloud-baremetal-deploy.yaml file with the --baremetal-deployment option.

    (undercloud)$ openstack overcloud node delete \
    --stack stack \
    --baremetal-deployment ~/overcloud-baremetal-deploy.yaml
    Note

    Do not include the nodes that you want to remove from the stack as command arguments in the openstack overcloud node delete command.

8.6. Bare metal node provisioning attributes

Use the following tables to understand the parameters, attributes, and values that are available for you to use when you provision bare metal nodes with the openstack overcloud node provision command.

Table 8.1. Role parameters


name

The mandatory role name.

count

The number of nodes that you want to provision for this role. The default value is 1.

defaults

A dictionary of default values for instances entry properties. An instances entry property overrides any defaults that you specify in the defaults parameter.

instances

A list of dictionaries that you can use to specify attributes for specific nodes. For more information about supported properties in the instances parameter, see Table 8.2, “instances and defaults parameters”. The number of entries in this list must not be greater than the value of the count parameter.

hostname_format

Overrides the default hostname format for this role. The default format uses the lower case role name. For example, the default format for the Controller role is %stackname%-controller-%index%. Only the Compute role does not follow the role name rule. The Compute default format is %stackname%-novacompute-%index%. For an example of overriding the format, see the snippet after the example syntax below.

Example syntax

In the following example, the name refers to the logical name of the node, and the hostname refers to the generated hostname, which is derived from the overcloud stack name, the role, and an incrementing index. All Controller servers use a default custom image overcloud-full-custom and are placed predictively on specific nodes. One of the Compute servers is placed predictively on node04 with the custom hostname overcloud-compute-special, and the other 99 Compute servers are on nodes allocated automatically from the pool of available nodes:

- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
  instances:
  - hostname: overcloud-controller-0
    name: node00
  - hostname: overcloud-controller-1
    name: node01
  - hostname: overcloud-controller-2
    name: node02
- name: Compute
  count: 100
  instances:
  - hostname: overcloud-compute-special
    name: node04
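
If you want to override the generated hostname format for a role, you can add hostname_format to the role entry; the format string in the following snippet is illustrative:

- name: Controller
  count: 3
  hostname_format: '%stackname%-ctrl-%index%'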

Table 8.2. instances and defaults parameters


hostname

If the hostname complies with the hostname_format pattern, then other properties apply to the node allocated to this hostname. Otherwise, you can use a custom hostname for this node.

name

The name of the node that you want to provision.

image

Details of the image that you want to provision onto the node. For more information about supported properties in the image parameter, see Table 8.3, “image parameters”.

capabilities

Selection criteria to match the node capabilities.

nics

List of dictionaries that represent requested NICs. For more information about supported properties in the nics parameter, see Table 8.4, “nic parameters”.

profile

Selection criteria to use Advanced Profile Matching.

provisioned

Boolean to determine whether this node is provisioned or unprovisioned. The default value is true. Use false to unprovision a node. For more information, see Scaling down bare metal nodes.

resource_class

Selection criteria to match the resource class of the node. The default value is baremetal.

root_size_gb

Size of the root partition in GiB. The default value is 49.

swap_size_mb

Size of the swap partition in MiB.

traits

A list of traits as selection criteria to match the node traits.

Example syntax

In the following example, all Controller servers use a custom default overcloud image overcloud-full-custom. The Controller server overcloud-controller-0 is placed predictively on node00 and has custom root and swap sizes. The other two Controller servers are on nodes allocated automatically from the pool of available nodes, and have default root and swap sizes:

- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
  instances:
  - hostname: overcloud-controller-0
    name: node00
    root_size_gb: 140
    swap_size_mb: 600

Table 8.3. image parameters


href

Glance image reference or URL of the root partition or whole disk image. URL schemes supported are file://, http://, and https://. If the value is not a valid URL, this value must be a valid glance image reference.

checksum

When the href is a URL, this value must be the SHA512 checksum of the root partition or whole disk image.

kernel

Glance image reference or URL of the kernel image. Use this property only for partition images.

ramdisk

Glance image reference or URL of the ramdisk image. Use this property only for partition images.

Example syntax

In the following example, all three Controller servers are on nodes allocated automatically from the pool of available nodes. All Controller servers in this environment use a default custom image overcloud-full-custom:

- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
      checksum: 1582054665
      kernel: file:///var/lib/ironic/images/overcloud-full-custom.vmlinuz
      ramdisk: file:///var/lib/ironic/images/overcloud-full-custom.initrd

Table 8.4. nic parameters


fixed_ip

The specific IP address that you want to use for this NIC.

network

The neutron network where you want to create the port for this NIC.

subnet

The neutron subnet where you want to create the port for this NIC.

port

Existing Neutron port to use instead of creating a new port.

Example syntax

In the following example, all three Controller servers are on nodes allocated automatically from the pool of available nodes. All Controller servers in this environment use a default custom image overcloud-full-custom and have specific networking requirements:

- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
    nics:
    - network: custom-network
      subnet: custom-subnet