Chapter 5. Configuring the Bare Metal Provisioning service after deployment

After you deploy an overcloud with the Bare Metal Provisioning service (ironic), you might need to complete some additional configuration to prepare your environment for your bare metal workloads:

  • Configure networking.
  • Configure node cleaning.
  • Create bare metal flavors and images for your bare metal nodes.
  • Configure the deploy interface.
  • Configure virtual media boot.
  • Separate virtual and physical machine provisioning.

Prerequisites

5.1. Configuring OpenStack networking

Configure OpenStack Networking to communicate with the Bare Metal Provisioning service for DHCP, PXE boot, and other requirements. You can configure the bare metal network in two ways:

  • Use a flat bare metal network for Ironic Conductor services. This network must route to the Ironic services on the control plane network.
  • Use a custom composable network to implement Ironic services in the overcloud.

Follow the procedures in this section to configure OpenStack Networking for a single flat network for provisioning onto bare metal, or to configure a new composable network that does not rely on an unused isolated network or a flat network. The configuration uses the ML2 plug-in and the Open vSwitch agent.

5.1.1. Configuring OpenStack Networking to communicate with the Bare Metal Provisioning service on a flat bare metal network

Perform all steps in the following procedure as the root user on the server that hosts the OpenStack Networking service.

Prerequisites

Procedure

  1. Configure the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. Create the flat network over which to provision bare metal instances:

    $ openstack network create \
      --provider-network-type flat \
      --provider-physical-network baremetal \
      --share NETWORK_NAME

    Replace NETWORK_NAME with a name for this network. The name of the physical network over which you implement the virtual network (in this case baremetal) was set earlier in the ~/templates/network-environment.yaml file, with the parameter NeutronBridgeMappings.

  3. Create the subnet on the flat network:

    $ openstack subnet create \
      --network NETWORK_NAME \
      --subnet-range NETWORK_CIDR \
      --ip-version 4 \
      --gateway GATEWAY_IP \
      --allocation-pool start=START_IP,end=END_IP \
      --dhcp SUBNET_NAME

    Replace the following values:

    • Replace SUBNET_NAME with a name for the subnet.
    • Replace NETWORK_NAME with the name of the provisioning network that you created in the previous step.
    • Replace NETWORK_CIDR with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with START_IP and ending with END_IP must be within the block of IP addresses specified by NETWORK_CIDR.
    • Replace GATEWAY_IP with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by NETWORK_CIDR, but outside of the block of IP addresses specified by the range starting with START_IP and ending with END_IP.
    • Replace START_IP with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses will be allocated.
    • Replace END_IP with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses will be allocated.
  4. Create a router for the network and subnet to ensure that the OpenStack Networking Service serves metadata requests:

    $ openstack router create ROUTER_NAME

    Replace ROUTER_NAME with a name for the router.

  5. Attach the subnet to the new router:

    $ openstack router add subnet ROUTER_NAME BAREMETAL_SUBNET

    Replace ROUTER_NAME with the name of your router and BAREMETAL_SUBNET with the ID or name of the subnet that you created previously. This allows the metadata requests from cloud-init to be served and the node configured.
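For reference, the following hypothetical example shows the complete procedure with example values: a network named baremetal-net, the 192.168.25.0/24 subnet range, a gateway at 192.168.25.1, and an allocation pool from 192.168.25.100 to 192.168.25.200. Adjust these values to match your environment.

$ openstack network create \
  --provider-network-type flat \
  --provider-physical-network baremetal \
  --share baremetal-net
$ openstack subnet create \
  --network baremetal-net \
  --subnet-range 192.168.25.0/24 \
  --ip-version 4 \
  --gateway 192.168.25.1 \
  --allocation-pool start=192.168.25.100,end=192.168.25.200 \
  --dhcp baremetal-subnet
$ openstack router create baremetal-router
$ openstack router add subnet baremetal-router baremetal-subnet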

5.1.2. Configuring OpenStack Networking to communicate with the Bare Metal Provisioning service on a custom composable bare metal network

Perform all steps in the following procedure as the root user on the server that hosts the OpenStack Networking service.

Prerequisites

Procedure

  1. Create a VLAN network with a VLAN ID that matches the OcProvisioning network that you created during deployment. Name the new network provisioning to match the default name of the cleaning network.

    (overcloud) [stack@host01 ~]$ openstack network create \
      --share \
      --provider-network-type vlan \
      --provider-physical-network datacentre \
      --provider-segment 205 provisioning

    If the name of the overcloud network is not provisioning, log in to the controller and run the following commands to update the network name in the Bare Metal Provisioning service configuration and restart the service:

    [heat-admin@overcloud-controller-0 ~]$ sudo vi /var/lib/config-data/puppet-generated/ironic/etc/ironic/ironic.conf
    [heat-admin@overcloud-controller-0 ~]$ sudo podman restart ironic_conductor
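
To confirm that the network is available with the expected name and VLAN segment, you can inspect it, for example:

$ openstack network show provisioning -c name -c provider:segmentation_id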

5.2. Configuring node cleaning

By default, the Bare Metal Provisioning service uses a network named provisioning for node cleaning. However, network names are not unique in OpenStack Networking, so it is possible for a tenant to create a network with the same name, which causes a conflict with the Bare Metal Provisioning service. To avoid the conflict, use the network UUID instead.

Prerequisites

Procedure

  1. To configure node cleaning, provide the provider network UUID on the Controller that hosts the Bare Metal Provisioning service:

    ~/templates/ironic.yaml

    parameter_defaults:
        IronicCleaningNetwork: UUID

    Replace UUID with the UUID of the bare metal network that you created in the previous steps.

    You can find the UUID with the openstack network show command:

    $ openstack network show NETWORK_NAME -f value -c id
    Note

    You must perform this configuration after the initial overcloud deployment, because the UUID for the network is not available beforehand.

  2. To apply the changes, redeploy the overcloud with the openstack overcloud deploy command. For more information about the deployment command, see Section 3.4, “Deploying the overcloud”.
  3. Alternatively, to apply the change manually, uncomment the following line in the /var/lib/config-data/puppet-generated/ironic/etc/ironic/ironic.conf file on the Controller node and replace <None> with the UUID of the bare metal network:

    cleaning_network = <None>
  4. Restart the Bare Metal Provisioning service:

    $ sudo podman restart ironic_conductor

Redeploying the overcloud with openstack overcloud deploy reverts any manual changes, so ensure that you have added the cleaning configuration to ~/templates/ironic.yaml before you next use the openstack overcloud deploy command.
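
For example, you can capture the network UUID and write it into ~/templates/ironic.yaml with the following hypothetical commands, assuming that the bare metal network is named provisioning:

$ UUID=$(openstack network show provisioning -f value -c id)
$ cat > ~/templates/ironic.yaml <<EOF
parameter_defaults:
    IronicCleaningNetwork: $UUID
EOF

Include the file, together with all of the other environment files from your initial deployment, when you run the deployment command:

$ openstack overcloud deploy \
  --templates \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml \
  -e ~/templates/ironic.yaml \
  ...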

5.2.1. Cleaning nodes manually

To initiate node cleaning manually, the node must be in the manageable state.

Node cleaning has two modes:

Metadata-only clean - Removes partitions from all disks on a given node. This is a faster clean cycle, but less secure because it erases only partition tables. Use this mode only in trusted tenant environments.

Full clean - Removes all data from all disks, by using either ATA secure erase or shredding. This can take several hours to complete.

Prerequisites

Procedure

To initiate a metadata clean:

$ openstack baremetal node clean UUID \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'

To initiate a full clean:

$ openstack baremetal node clean UUID \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]'

Replace UUID with the UUID of the node that you want to clean.

After a successful cleaning, the node state returns to manageable. If the state is clean failed, inspect the last_error field for the cause of failure.
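
For example, to check the provisioning state and the last_error field:

$ openstack baremetal node show UUID -f value -c provision_state -c last_error

Replace UUID with the UUID of the node.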

5.3. Creating the bare metal flavor

You must create a flavor to use as a part of the deployment. The memory, CPU, and disk specifications of this flavor must be equal to or less than the hardware specifications of your bare metal node.

Prerequisites

Procedure

  1. Configure the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. List existing flavors:

    $ openstack flavor list
  3. Create a new flavor for the Bare Metal Provisioning service:

    $ openstack flavor create \
      --id auto --ram RAM \
      --vcpus VCPU --disk DISK \
      --property baremetal=true \
      --public baremetal

    Replace RAM with the amount of memory in MB, VCPU with the number of vCPUs, and DISK with the disk storage value in GB. Include the property baremetal to distinguish bare metal from virtual instances. A worked example with hypothetical values follows this procedure.

  4. Verify that the new flavor has the correct values:

    $ openstack flavor list
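
For reference, a hypothetical invocation of the flavor create command in step 3, assuming a bare metal node with 4 GB of RAM, 4 CPUs, and a 40 GB disk:

$ openstack flavor create \
  --id auto --ram 4096 \
  --vcpus 4 --disk 40 \
  --property baremetal=true \
  --public baremetal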

5.4. Creating the bare metal images

An overcloud that includes the Bare Metal Provisioning service (ironic) requires two sets of images. During deployment, the Bare Metal Provisioning service boots bare metal nodes from the deploy image, and copies the user image onto nodes.

The deploy image
The Bare Metal Provisioning service uses the deploy image to boot the bare metal node and copy a user image onto the bare metal node. The deploy image consists of the kernel image and the ramdisk image.
The user image

The user image is the image that you deploy onto the bare metal node. The user image also has a kernel image and ramdisk image, but additionally, the user image contains a main image. The main image is either a root partition, or a whole-disk image.

  • A whole-disk image is an image that contains the partition table and boot loader. The Bare Metal Provisioning service does not control the subsequent reboot of a node deployed with a whole-disk image as the node supports localboot.
  • A root partition image contains only the root partition of the operating system. If you use a root partition, after the deploy image is loaded into the Image service, you can set the deploy image as the node boot image in the node properties. A subsequent reboot of the node uses netboot to pull down the user image.

The examples in this section use a root partition image to provision bare metal nodes.

5.4.1. Preparing the deploy images

You do not have to create the deploy image because it was already created when the overcloud was deployed by the undercloud. The deploy image consists of two images - the kernel image and the ramdisk image:

/tftpboot/agent.kernel
/tftpboot/agent.ramdisk

These images are often in the home directory, unless you have deleted them, or unpacked them elsewhere. If they are not in the home directory, and you still have the rhosp-director-images-ipa package installed, these images are in the /usr/share/rhosp-director-images/ironic-python-agent*.tar file.

Prerequisites

Procedure

Extract the images and upload them to the Image service:

$ openstack image create \
  --container-format aki \
  --disk-format aki \
  --public \
  --file ./tftpboot/agent.kernel bm-deploy-kernel
$ openstack image create \
  --container-format ari \
  --disk-format ari \
  --public \
  --file ./tftpboot/agent.ramdisk bm-deploy-ramdisk
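
To confirm that both deploy images are available in the Image service, you can list them, for example:

$ openstack image list | grep 'bm-deploy'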

5.4.2. Preparing the user image

The final image that you need is the user image that you deploy onto the bare metal node. User images also have a kernel and ramdisk, along with a main image. To build and install the user image, you must first configure disk image environment variables to suit your requirements.

5.4.2.1. Disk image environment variables

As a part of the disk image building process, the director requires a base image and registration details to obtain packages for the new overcloud image. Define these attributes with the following Linux environment variables.

Note

The image building process temporarily registers the image with a Red Hat subscription and unregisters the system when the image building process completes.

To build a disk image, set Linux environment variables that suit your environment and requirements:

DIB_LOCAL_IMAGE
Sets the local image that you want to use as the basis for your whole disk image.
REG_ACTIVATION_KEY
Use an activation key instead of login details as part of the registration process.
REG_AUTO_ATTACH
Defines whether to attach the most compatible subscription automatically.
REG_BASE_URL
The base URL of the content delivery server that contains packages for the image. The default Customer Portal Subscription Management process uses https://cdn.redhat.com. If you use a Red Hat Satellite 6 server, set this parameter to the base URL of your Satellite server.
REG_ENVIRONMENT
Registers to an environment within an organization.
REG_METHOD
Sets the method of registration. Use portal to register a system to the Red Hat Customer Portal. Use satellite to register a system with Red Hat Satellite 6.
REG_ORG
The organization where you want to register the images.
REG_POOL_ID
The pool ID of the product subscription information.
REG_PASSWORD
Sets the password for the user account that registers the image.
REG_RELEASE
Sets the Red Hat Enterprise Linux minor release version. You must use it with the REG_AUTO_ATTACH or the REG_POOL_ID environment variable.
REG_REPOS
A comma-separated string of repository names. Each repository in this string is enabled through subscription-manager.
REG_SAT_URL
The base URL of the Satellite server to register overcloud nodes. Use the Satellite HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com.
REG_SERVER_URL
Sets the host name of the subscription service to use. The default host name is for the Red Hat Customer Portal at subscription.rhn.redhat.com. If you use a Red Hat Satellite 6 server, set this parameter to the host name of your Satellite server.
REG_USER
Sets the user name for the account that registers the image.

5.4.3. Installing the user image

Configure the user image and then upload the image to the Image service (glance).

Prerequisites

Procedure

  1. Download the Red Hat Enterprise Linux KVM guest image from the Customer Portal.
  2. Define DIB_LOCAL_IMAGE as the downloaded image:

    $ export DIB_LOCAL_IMAGE=rhel-8.0-x86_64-kvm.qcow2
  3. Set your registration information. If you use Red Hat Customer Portal, you must configure the following information:

    $ export REG_USER='USER_NAME'
    $ export REG_PASSWORD='PASSWORD'
    $ export REG_AUTO_ATTACH=true
    $ export REG_METHOD=portal
    $ export https_proxy='IP_address:port' (if applicable)
    $ export http_proxy='IP_address:port' (if applicable)

    If you use Red Hat Satellite, you must configure the following information:

    $ export REG_USER='USER_NAME'
    $ export REG_PASSWORD='PASSWORD'
    $ export REG_SAT_URL='<SATELLITE URL>'
    $ export REG_ORG='<SATELLITE ORG>'
    $ export REG_ENV='<SATELLITE ENV>'
    $ export REG_METHOD=<METHOD>

    If you have any offline repositories, you can define DIB_YUM_REPO_CONF as local repository configuration:

    $ export DIB_YUM_REPO_CONF=<path-to-local-repository-config-file>
  4. Create the user images with the diskimage-builder tool:

    $ disk-image-create rhel8 baremetal -o rhel-image

    This command extracts the kernel as rhel-image.vmlinuz and the initial ramdisk as rhel-image.initrd.

  5. Upload the images to the Image service:

    $ KERNEL_ID=$(openstack image create \
      --file rhel-image.vmlinuz --public \
      --container-format aki --disk-format aki \
      -f value -c id rhel-image.vmlinuz)
    $ RAMDISK_ID=$(openstack image create \
      --file rhel-image.initrd --public \
      --container-format ari --disk-format ari \
      -f value -c id rhel-image.initrd)
    $ openstack image create \
      --file rhel-image.qcow2   --public \
      --container-format bare \
      --disk-format qcow2 \
      --property kernel_id=$KERNEL_ID \
      --property ramdisk_id=$RAMDISK_ID \
      rhel-image
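
To confirm that the user image references the kernel and ramdisk images, you can inspect the image properties, for example:

$ openstack image show rhel-image -f value -c properties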

5.5. Configuring deploy interfaces

When you provision bare metal nodes, the Bare Metal Provisioning service (ironic) on the overcloud writes a base operating system image to the disk on the bare metal node. By default, the deploy interface attaches the disk of each node over iSCSI and copies the image to the disk. Alternatively, you can use the direct deploy interface, which writes disk images from an HTTP location directly to the disk on each bare metal node.

Deploy interfaces have a critical role in the provisioning process. Deploy interfaces orchestrate the deployment and define the mechanism for transferring the image to the target disk.

Prerequisites

  • Dependent packages configured on the bare metal service nodes that run ironic-conductor.
  • OpenStack Compute (nova) configured to use the Bare Metal Provisioning service endpoint.
  • Flavors created for the available hardware, so that nova boots the new node with the correct flavor.
  • Images available in the Image service (glance):

    • bm-deploy-kernel
    • bm-deploy-ramdisk
    • user-image
    • user-image-vmlinuz
    • user-image-initrd
  • Hardware to enroll with the Ironic API service.

Workflow

Use the following example workflow to understand the standard deploy process. Depending on the ironic driver interfaces that you use, some of the steps might differ:

  1. The Nova scheduler receives a boot instance request from the Nova API.
  2. The Nova scheduler identifies the relevant hypervisor and the target physical node.
  3. The Nova compute manager claims the resources of the selected hypervisor.
  4. The Nova compute manager creates unbound tenant virtual interfaces (VIFs) in the Networking service according to the network interfaces that the nova boot request specifies.
  5. Nova compute invokes driver.spawn from the Nova compute virt layer to create a spawn task that contains all of the necessary information. During the spawn process, the virt driver completes the following steps.

    1. Updates the target ironic node with information about the deploy image, instance UUID, requested capabilities, and flavor properties.
    2. Calls the ironic API to validate the power and deploy interfaces of the target node.
    3. Attaches the VIFs to the node. Each neutron port can be attached to any ironic port or group. Port groups have higher priority than ports.
    4. Generates config drive.
  6. The Nova ironic virt driver issues a deploy request with the Ironic API to the Ironic conductor that services the bare metal node.
  7. Virtual interfaces are plugged in and the Neutron API updates DHCP to configure PXE/TFTP options.
  8. The ironic node boot interface prepares (i)PXE configuration and caches the deploy kernel and ramdisk.
  9. The ironic node management interface issues commands to enable network boot of the node.
  10. The ironic node deploy interface caches the instance image, kernel, and ramdisk, if necessary.
  11. The ironic node power interface instructs the node to power on.
  12. The node boots the deploy ramdisk.
  13. With iSCSI deployment, the conductor copies the image over iSCSI to the physical node. With direct deployment, the deploy ramdisk downloads the image from a temporary URL. This URL must be a Swift API-compatible object store URL or an HTTP URL.
  14. The node boot interface switches PXE configuration to refer to instance images and instructs the ramdisk agent to soft power off the node. If the soft power off fails, the bare metal node is powered off with IPMI/BMC.
  15. The deploy interface instructs the network interface to remove any provisioning ports, binds the tenant ports to the node, and powers the node on.

The provisioning state of the new bare metal node is now active.

5.5.1. Configuring the direct deploy interface on the overcloud

The iSCSI deploy interface is the default deploy interface. However, you can enable the direct deploy interface to download an image from an HTTP location to the target disk.

Note

The memory tmpfs on your overcloud nodes must have at least 8 GB of RAM.

Procedure
  1. Create or modify a custom environment file /home/stack/templates/direct_deploy.yaml and specify the IronicEnabledDeployInterfaces and the IronicDefaultDeployInterface parameters.

    parameter_defaults:
      IronicEnabledDeployInterfaces: direct
      IronicDefaultDeployInterface: direct

    If you register your nodes with iscsi, retain the iscsi value in the IronicEnabledDeployInterfaces parameter:

    parameter_defaults:
      IronicEnabledDeployInterfaces: direct,iscsi
      IronicDefaultDeployInterface: direct
  2. By default, the Bare Metal Provisioning service (ironic) agent on each node obtains the image stored in the Object Storage Service (swift) over an HTTP link. Alternatively, ironic can stream this image directly to the node through the ironic-conductor HTTP server. To change the service that provides the image, set the IronicImageDownloadSource parameter to http in the /home/stack/templates/direct_deploy.yaml file:

    parameter_defaults:
      IronicEnabledDeployInterfaces: direct
      IronicDefaultDeployInterface: direct
      IronicImageDownloadSource: http
  3. Include the custom environment with your overcloud deployment:

    $ openstack overcloud deploy \
      --templates \
      ...
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml \
      -e /home/stack/templates/direct_deploy.yaml \
      ...

    Wait until deployment completes.

Note

If you did not specify IronicDefaultDeployInterface or want to use a different deploy interface, specify the deploy interface when you create or update a node:

$ openstack baremetal node create --driver ipmi --deploy-interface direct
$ openstack baremetal node set <NODE> --deploy-interface direct
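
To verify the deploy interface that a node uses, you can check the node details, for example (this assumes that your client and Bare Metal API versions expose the deploy_interface field):

$ openstack baremetal node show <NODE> -f value -c deploy_interface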

5.6. Adding physical machines as bare metal nodes

There are two methods to enroll a bare metal node:

  1. Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
  2. Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file.

After you enroll the physical machines, Compute is not immediately notified of new resources, because the Compute resource tracker synchronizes periodically. You can view changes after the next periodic task runs. You can update the frequency of the periodic task with the scheduler_driver_task_period option in the /etc/nova/nova.conf file. The default period is 60 seconds.
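
For example, a minimal sketch of this setting, assuming that the option belongs in the [DEFAULT] section of the /etc/nova/nova.conf file:

[DEFAULT]
scheduler_driver_task_period = 120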

5.6.1. Enrolling a bare metal node with an inventory file

Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.

Prerequisites

Procedure

  1. Create a file named overcloud-nodes.yaml that contains the node details. You can enroll multiple nodes with one file.

    nodes:
        - name: node0
          driver: ipmi
          driver_info:
            ipmi_address: <IPMI_IP>
            ipmi_username: <USER>
            ipmi_password: <PASSWORD>
          properties:
            cpus: <CPU_COUNT>
            cpu_arch: <CPU_ARCHITECTURE>
            memory_mb: <MEMORY>
            local_gb: <ROOT_DISK>
            root_device:
                serial: <SERIAL>
          ports:
            - address: <PXE_NIC_MAC>

    Replace the following values:

    • <IPMI_IP> with the address of the Bare Metal controller.
    • <USER> with your username.
    • <PASSWORD> with your password.
    • <CPU_COUNT> with the number of CPUs.
    • <CPU_ARCHITECTURE> with the type of architecture of the CPUs.
    • <MEMORY> with the amount of memory in MiB.
    • <ROOT_DISK> with the size of the root disk in GiB.
    • <PXE_NIC_MAC> with the MAC address of the NIC that you use to PXE boot.

      You must include root_device only if the machine has multiple disks. Replace <SERIAL> with the serial number of the disk that you want to use for deployment.

  2. Configure the shell to use Identity as the administrative user:

    $ source ~/overcloudrc
  3. Import the inventory file into ironic:

    $ openstack baremetal create overcloud-nodes.yaml

    The nodes are now in the enroll state.

  4. Specify the deploy kernel and deploy ramdisk on each node:

    $ openstack baremetal node set NODE_UUID \
      --driver-info deploy_kernel=KERNEL_UUID \
      --driver-info deploy_ramdisk=INITRAMFS_UUID

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the logical name of the node.
    • Replace KERNEL_UUID with the unique identifier for the kernel deploy image that was uploaded to the Image service. Find this value with the following command:

      $ openstack image show bm-deploy-kernel -f value -c id
    • Replace INITRAMFS_UUID with the unique identifier for the ramdisk image that was uploaded to the Image service. Find this value with the following command:

      $ openstack image show bm-deploy-ramdisk -f value -c id
  5. Set the provisioning state of the node to available:

    $ openstack baremetal node manage NODE_UUID
    $ openstack baremetal node provide NODE_UUID

    The Bare Metal Provisioning service cleans the node if you enabled node cleaning.

  6. Set the local boot option on the node:

    $ openstack baremetal node set NODE_UUID --property capabilities="boot_option:local"
  7. Check that the nodes were successfully enrolled:

    $ openstack baremetal node list

    There might be a delay between enrolling a node and its state being shown.
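
For example, to list only the nodes that are ready for deployment:

$ openstack baremetal node list --provision-state available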

5.7. Configuring Redfish virtual media boot

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.

Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.

5.7.1. Deploying a bare metal server with Redfish virtual media boot

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.

Prerequisites

  • Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file.
  • A bare metal node registered and enrolled.
  • IPA and instance images in the Image Service (glance).
  • For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance).
  • A bare metal flavor.
  • A network for cleaning and provisioning.
  • Sushy library installed:

    $ sudo yum install sushy

Procedure

  1. Set the Bare Metal service (ironic) boot interface to redfish-virtual-media:

    $ openstack baremetal node set --boot-interface redfish-virtual-media $NODE_NAME

    Replace $NODE_NAME with the name of the node.

  2. For UEFI nodes, set the boot mode to uefi:

    $ openstack baremetal node set --property capabilities="boot_mode:uefi" $NODE_NAME

    Replace $NODE_NAME with the name of the node.

    Note

    For BIOS nodes, do not complete this step.

  3. For UEFI nodes, define the EFI System Partition (ESP) image:

    $ openstack baremetal node set --driver-info bootloader=$ESP $NODE_NAME

    Replace $ESP with the glance image UUID or URL for the ESP image, and replace $NODE_NAME with the name of the node.

    Note

    For BIOS nodes, do not complete this step.

  4. Create a port on the bare metal node and associate the port with the MAC address of the NIC on the bare metal node:

    $ openstack baremetal port create --pxe-enabled True --node $UUID $MAC_ADDRESS

    Replace $UUID with the UUID of the bare metal node, and replace $MAC_ADDRESS with the MAC address of the NIC on the bare metal node.

  5. Create the new bare metal server:

    $ openstack server create \
        --flavor baremetal \
        --image $IMAGE \
        --network $NETWORK \
        test_instance

    Replace $IMAGE and $NETWORK with the names of the image and network that you want to use.
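
To monitor the deployment, you can check the status of the new server, for example:

$ openstack server show test_instance -f value -c status

When provisioning completes, the status typically changes to ACTIVE.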

5.8. Using host aggregates to separate physical and virtual machine provisioning

OpenStack Compute uses host aggregates to partition availability zones, and group together nodes that have specific shared properties. When an instance is provisioned, the Compute scheduler compares properties on the flavor with the properties assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine.

Complete the steps in this section to perform the following operations:

  • Add the property baremetal to your flavors and set it to either true or false.
  • Create separate host aggregates for bare metal hosts and compute nodes with a matching baremetal property. Nodes grouped into an aggregate inherit this property.

Prerequisites

Procedure

  1. Set the baremetal property to true on the baremetal flavor:

    $ openstack flavor set baremetal --property baremetal=true
  2. Set the baremetal property to false on the flavors that virtual instances use:

    $ openstack flavor set FLAVOR_NAME --property baremetal=false
  3. Create a host aggregate called baremetal-hosts:

    $ openstack aggregate create --property baremetal=true baremetal-hosts
  4. Add each Controller node to the baremetal-hosts aggregate:

    $ openstack aggregate add host baremetal-hosts HOSTNAME
    Note

    If you have created a composable role with the NovaIronic service, add all the nodes with this service to the baremetal-hosts aggregate. By default, only the Controller nodes have the NovaIronic service.

  5. Create a host aggregate called virtual-hosts:

    $ openstack aggregate create --property baremetal=false virtual-hosts
  6. Add each Compute node to the virtual-hosts aggregate:

    $ openstack aggregate add host virtual-hosts HOSTNAME
  7. If you did not add the following Compute scheduler filter when you deployed the overcloud, add it now to the existing list under scheduler_default_filters in the /etc/nova/nova.conf file:

    AggregateInstanceExtraSpecsFilter
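
For example, a hypothetical scheduler_default_filters line with the filter appended. The other filters shown are placeholders; keep your existing list and append only AggregateInstanceExtraSpecsFilter:

    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter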