Chapter 2. Configure Bare Metal Deployment

Configure Bare Metal Provisioning, the Image service, and Compute to enable bare metal deployment in the OpenStack environment. The following sections outline the additional configuration steps required to successfully deploy a bare metal node.

2.1. Create OpenStack Configurations for the Bare Metal Provisioning Service

2.1.1. Configure OpenStack Networking

Configure OpenStack Networking to communicate with Bare Metal Provisioning for DHCP, PXE boot, and other requirements. The procedure below configures OpenStack Networking for a single, flat network use case for provisioning onto bare metal. The configuration uses the ML2 plug-in and the Open vSwitch agent.

Ensure that the network interface used for provisioning is not the same network interface that is used for remote connectivity on the OpenStack Networking node. This procedure creates a bridge using the Bare Metal Provisioning Network interface, and drops any remote connections.

All steps in the following procedure must be performed on the server hosting OpenStack Networking, while logged in as the root user.

Configuring OpenStack Networking to Communicate with Bare Metal Provisioning

  1. Set up the shell to access Identity as the administrative user:

    # source ~stack/overcloudrc
  2. Create the flat network over which to provision bare metal instances:

    # neutron net-create --tenant-id TENANT_ID sharednet1 --shared \
    --provider:network_type flat --provider:physical_network PHYSNET

    Replace TENANT_ID with the unique identifier of the tenant on which to create the network. Replace PHYSNET with the name of the physical network.

  3. Create the subnet on the flat network:

    # neutron subnet-create sharednet1 NETWORK_CIDR --name SUBNET_NAME \
    --ip-version 4 --gateway GATEWAY_IP --allocation-pool \
    start=START_IP,end=END_IP --enable-dhcp

    Replace the following values:

    • Replace NETWORK_CIDR with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses the subnet represents. The block of IP addresses specified by the range started by START_IP and ended by END_IP must fall within the block of IP addresses specified by NETWORK_CIDR.
    • Replace SUBNET_NAME with a name for the subnet.
    • Replace GATEWAY_IP with the IP address or host name of the system that will act as the gateway for the new subnet. This address must be within the block of IP addresses specified by NETWORK_CIDR, but outside of the block of IP addresses specified by the range started by START_IP and ended by END_IP.
    • Replace START_IP with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses will be allocated.
    • Replace END_IP with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses will be allocated.
  4. Create a router, so that the network and subnet can be attached to it and metadata requests are served by the OpenStack Networking service:

    # neutron router-create ROUTER_NAME

    Replace ROUTER_NAME with a name for the router.

  5. Add the Bare Metal subnet as an interface on this router:

    # neutron router-interface-add ROUTER_NAME BAREMETAL_SUBNET

    Replace ROUTER_NAME with the name of your router, and BAREMETAL_SUBNET with the ID or name of the subnet that you created previously. This allows metadata requests from cloud-init to be served, so the node can be configured.

  6. Update the /etc/ironic/ironic.conf file on the Compute node running the Bare Metal Provisioning service to use the same network for the cleaning service. Log in to the Compute node where the Bare Metal Provisioning service is running and execute the following command as the root user:

    # openstack-config --set /etc/ironic/ironic.conf neutron cleaning_network_uuid NETWORK_UUID

    Replace NETWORK_UUID with the ID of the Bare Metal Provisioning Network created in the previous steps.

  7. Restart the Bare Metal Provisioning service:

    # systemctl restart openstack-ironic-conductor.service
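The address constraints described in step 3 can be checked before creating the subnet. The following is a minimal sketch using Python's ipaddress module, with hypothetical example values standing in for NETWORK_CIDR, GATEWAY_IP, START_IP, and END_IP:

```python
import ipaddress

# Hypothetical example values; substitute your own network parameters.
NETWORK_CIDR = "192.168.100.0/24"
GATEWAY_IP = "192.168.100.1"
START_IP = "192.168.100.20"
END_IP = "192.168.100.100"

net = ipaddress.ip_network(NETWORK_CIDR)
gateway = ipaddress.ip_address(GATEWAY_IP)
start = ipaddress.ip_address(START_IP)
end = ipaddress.ip_address(END_IP)

# The allocation pool must fall entirely within the subnet's CIDR block.
assert start in net and end in net and start <= end, "pool outside CIDR"

# The gateway must be within the CIDR block but outside the allocation pool.
assert gateway in net, "gateway outside CIDR"
assert not (start <= gateway <= end), "gateway inside allocation pool"

print("subnet parameters are consistent")
```

If any assertion fails, adjust the values before running neutron subnet-create; OpenStack Networking rejects a gateway that collides with the allocation pool.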

2.1.2. Create the Bare Metal Provisioning Flavor

Create a flavor to use as part of the deployment. The specifications of the flavor (memory, CPU, and disk) must be equal to or less than what your bare metal node provides.

  1. List existing flavors:

    # openstack flavor list
  2. Create a new flavor for the Bare Metal Provisioning service:

    # openstack flavor create --id auto --ram RAM --vcpus VCPU --disk DISK --public baremetal

    Replace RAM with the amount of RAM in MB, VCPU with the number of vCPUs, and DISK with the disk size in GB.

  3. Set the flavor to boot from the local disk, otherwise the default netboot method will be used.

    # openstack flavor set --property capabilities:boot_option='local' baremetal
  4. Verify that the new flavor is created with the respective values:

    # openstack flavor list
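The sizing rule above can be stated as a simple comparison: every flavor resource must be less than or equal to the corresponding node resource. A minimal sketch with hypothetical numbers:

```python
def flavor_fits(flavor, node):
    """Return True if every flavor resource fits within the node's hardware."""
    return all(flavor[key] <= node[key] for key in flavor)

# Hypothetical bare metal node hardware and a candidate flavor.
node = {"ram_mb": 16384, "vcpus": 8, "disk_gb": 500}
flavor = {"ram_mb": 16384, "vcpus": 8, "disk_gb": 480}

print(flavor_fits(flavor, node))  # True: the flavor fits the node
print(flavor_fits({"ram_mb": 32768, "vcpus": 8, "disk_gb": 480}, node))  # False: too much RAM
```

A flavor larger than the node would never be scheduled onto it, so the deployment would fail to find a valid host.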

2.1.3. Create the Bare Metal Images

The Bare Metal Provisioning deployment requires two sets of images: the deploy image and the user image. The deploy image is a basic image whose sole purpose is to boot the node and copy the user image onto the Bare Metal Provisioning node. After the deploy image is loaded into the Image service, you can update the Bare Metal Provisioning node to use it as the boot image. You do not have to create the deploy image, because it was already used when the undercloud deployed the overcloud. The deploy image consists of two parts, the kernel and the ramdisk:

ironic-python-agent.kernel
ironic-python-agent.initramfs

These images should be in the ~stack/images directory if you did not delete them. If they are not there, and you still have the rhosp-director-images-ipa package installed, they will be in the /usr/share/rhosp-director-images/ironic-python-agent*.el7ost.tar file.

Extract the images and load them to the Image service:

# openstack image create --container-format aki --disk-format aki --public --file ./ironic-python-agent.kernel bm-deploy-kernel
# openstack image create --container-format ari --disk-format ari --public --file ./ironic-python-agent.initramfs bm-deploy-ramdisk

The final image that you need is the actual image that will be deployed on the Bare Metal Provisioning node. For example, you can download a Red Hat Enterprise Linux KVM guest image, because it already includes cloud-init.

Load the image to the Image service:

# openstack image create --container-format bare --disk-format qcow2 --public --file ./IMAGE_FILE rhel

2.1.4. Add the Bare Metal Provisioning Node to the Bare Metal Provisioning Service

To add the Bare Metal Provisioning node to the Bare Metal Provisioning service, copy the section of the instackenv.json file that was used to instantiate the cloud, and modify it according to your needs.

  1. Source the overcloudrc file and import the .json file:

    # source ~stack/overcloudrc
    # openstack baremetal import --json ./baremetal.json
  2. Update the bare metal node in the Bare Metal Provisioning service to use the deploy images as the initial boot images by specifying deploy_kernel and deploy_ramdisk in the driver_info section of the node:

    # ironic node-update NODE_UUID add driver_info/deploy_kernel=DEPLOY_KERNEL_ID driver_info/deploy_ramdisk=DEPLOY_RAMDISK_ID

Replace NODE_UUID with the UUID of the bare metal node; you can get this value by executing the ironic node-list command on the director node. Replace DEPLOY_KERNEL_ID and DEPLOY_RAMDISK_ID with the IDs of the deploy kernel and deploy ramdisk images respectively; you can get these values by executing the glance image-list command on the director node.

2.1.5. Deploy the Bare Metal Provisioning Node

Deploy the Bare Metal Provisioning node using the nova boot command:

# nova boot --image BAREMETAL_USER_IMAGE --flavor BAREMETAL_FLAVOR --nic net-id=IRONIC_NETWORK_ID --key-name default MACHINE_HOSTNAME

Replace BAREMETAL_USER_IMAGE with the image that was loaded to the Image service, BAREMETAL_FLAVOR with the flavor for the bare metal deployment, IRONIC_NETWORK_ID with the ID of the Bare Metal Provisioning Network in the OpenStack Networking service, and MACHINE_HOSTNAME with the hostname that you want the machine to have after it is deployed.

2.2. Configure Hardware Introspection

Hardware introspection allows Bare Metal Provisioning to discover hardware information on a node. Introspection also creates ports for the discovered Ethernet MAC addresses. Alternatively, you can manually add hardware details to each node; see Section 2.3.2, “Add a Node Manually” for more information. All steps in the following procedure must be performed on the server hosting the Bare Metal Provisioning conductor service, while logged in as the root user.

Hardware introspection is supported in-band using the following drivers:

  • pxe_drac
  • pxe_ipmitool
  • pxe_ssh

Configuring Hardware Introspection

  1. Obtain the Ironic Python Agent kernel and ramdisk images used for bare metal system discovery over PXE boot. These images are available in a TAR archive labeled Ironic Python Agent Image for RHOSP director 8.0 at https://access.redhat.com/downloads/content/191/ver=8/rhel---7/8/x86_64/product-software. Download the TAR archive, extract the image files (ironic-python-agent.kernel and ironic-python-agent.initramfs) from it, and copy them to the /tftpboot directory on the TFTP server.
  2. On the server that will host the hardware introspection service, enable the Red Hat OpenStack Platform 8 director for RHEL 7 (RPMs) channel:

    # subscription-manager repos --enable=rhel-7-server-openstack-8-director-rpms
  3. Install the openstack-ironic-inspector package:

    # yum install openstack-ironic-inspector
  4. Enable introspection in the ironic.conf file:

    # openstack-config --set /etc/ironic/ironic.conf \
       inspector enabled True
  5. If the hardware introspection service is hosted on a separate server, set its URL on the server hosting the conductor service:

    # openstack-config --set /etc/ironic/ironic.conf \
       inspector service_url http://INSPECTOR_IP:5050

    Replace INSPECTOR_IP with the IP address or host name of the server hosting the hardware introspection service.

  6. Provide the hardware introspection service with authentication credentials:

    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       keystone_authtoken identity_uri http://IDENTITY_IP:35357
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       keystone_authtoken auth_uri http://IDENTITY_IP:5000/v2.0
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       keystone_authtoken admin_user ironic
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       keystone_authtoken admin_password PASSWORD
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       keystone_authtoken admin_tenant_name services
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       ironic os_auth_url http://IDENTITY_IP:5000/v2.0
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       ironic os_username ironic
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       ironic os_password PASSWORD
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       ironic os_tenant_name service
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       firewall dnsmasq_interface br-ironic
    # openstack-config --set /etc/ironic-inspector/inspector.conf \
       database connection sqlite:////var/lib/ironic-inspector/inspector.sqlite

    Replace the following values:

    • Replace IDENTITY_IP with the IP address or host name of the Identity server.
    • Replace PASSWORD with the password that Bare Metal Provisioning uses to authenticate with Identity.
  7. Optionally, set the hardware introspection service to store logs for the ramdisk:

    # openstack-config --set /etc/ironic-inspector/inspector.conf \
    processing ramdisk_logs_dir /var/log/ironic-inspector/ramdisk
  8. Optionally, enable an additional data processing plug-in that gathers block devices on bare metal machines with multiple local disks and exposes root devices. ramdisk_error, root_disk_selection, scheduler, and validate_interfaces are enabled by default, and should not be disabled. The following command adds root_device_hint to the list:

    # openstack-config --set /etc/ironic-inspector/inspector.conf \
    processing processing_hooks '$default_processing_hooks,root_device_hint'
  9. Generate the initial ironic inspector database:

    # ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
  10. Update the inspector database file to be owned by ironic-inspector:

    # chown ironic-inspector /var/lib/ironic-inspector/inspector.sqlite
  11. Open the /etc/ironic-inspector/dnsmasq.conf file in a text editor, and configure the following PXE boot settings for the openstack-ironic-inspector-dnsmasq service:

    port=0
    interface=br-ironic
    bind-interfaces
    dhcp-range=START_IP,END_IP
    enable-tftp
    tftp-root=/tftpboot
    dhcp-boot=pxelinux.0

    Replace the following values:

    • The interface value (br-ironic in the example above) is the name of the Bare Metal Provisioning Network interface; it must match the dnsmasq_interface value set in the /etc/ironic-inspector/inspector.conf file.
    • Replace START_IP with the IP address that denotes the start of the range of IP addresses from which floating IP addresses will be allocated.
    • Replace END_IP with the IP address that denotes the end of the range of IP addresses from which floating IP addresses will be allocated.
  12. Copy the syslinux bootloader to the tftp directory:

    # cp /usr/share/syslinux/pxelinux.0 /tftpboot/pxelinux.0
  13. Optionally, you can configure the hardware introspection service to store metadata in the Object Storage (swift) service. Set the following options in the swift section of the /etc/ironic-inspector/inspector.conf file:

    [swift]
    username = ironic
    password = PASSWORD
    tenant_name = service
    os_auth_url = http://IDENTITY_IP:5000/v2.0

    Replace the following values:

    • Replace IDENTITY_IP with the IP address or host name of the Identity server.
    • Replace PASSWORD with the password that Bare Metal Provisioning uses to authenticate with Identity.
  14. Open the /tftpboot/pxelinux.cfg/default file in a text editor, and configure the following options:

    default discover

    label discover
    kernel ironic-python-agent.kernel
    append initrd=ironic-python-agent.initramfs \
    ipa-inspection-callback-url=http://INSPECTOR_IP:5050/v1/continue \
    ipa-api-url=http://IRONIC_API_IP:6385

    ipappend 3

    Replace INSPECTOR_IP with the IP address or host name of the server hosting the hardware introspection service, and IRONIC_API_IP with the IP address or host name of the server hosting the Bare Metal Provisioning API service. Note that the text from append to the end of the ipa-api-url value must be on a single line, as indicated by the \ characters in the block above.

  15. Reset the security context for the /tftpboot/ directory and its files:

    # restorecon -R /tftpboot/

    This step ensures that the directory has the correct SELinux security labels, and the dnsmasq service is able to access the directory.

  16. Start the hardware introspection service and the dnsmasq service, and configure them to start at boot time:

    # systemctl start openstack-ironic-inspector.service
    # systemctl enable openstack-ironic-inspector.service
    # systemctl start openstack-ironic-inspector-dnsmasq.service
    # systemctl enable openstack-ironic-inspector-dnsmasq.service

    Hardware introspection can be used on nodes after they have been registered with Bare Metal Provisioning.

2.3. Add Physical Machines as Bare Metal Nodes

Add as nodes the physical machines onto which you will provision instances, and confirm that Compute can see the available hardware. Compute is not immediately notified of new resources, because Compute’s resource tracker synchronizes periodically. Changes will be visible after the next periodic task is run. This value, scheduler_driver_task_period, can be updated in /etc/nova/nova.conf. The default period is 60 seconds.
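For example, the relevant nova.conf entry looks like the following; 60 is the default, and lowering it makes new bare metal resources visible sooner at the cost of more frequent synchronization:

```ini
[DEFAULT]
# Interval, in seconds, at which the Compute resource tracker runs its
# periodic synchronization task.
scheduler_driver_task_period = 60
```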

After systems are registered as bare metal nodes, hardware details can be discovered using hardware introspection, or added manually.

2.3.1. Add a Node with Hardware Introspection

Register a physical machine as a bare metal node, then use openstack-ironic-inspector to detect the node’s hardware details and create ports for each of its Ethernet MAC addresses. All steps in the following procedure must be performed on the server hosting the Bare Metal Provisioning conductor service, while logged in as the root user.

Adding a Node with Hardware Introspection

  1. Set up the shell to use Identity as the administrative user:

    # source ~/keystonerc_admin
  2. Add a new node:

    # ironic node-create -d DRIVER_NAME

    Replace DRIVER_NAME with the name of the driver that Bare Metal Provisioning will use to provision this node. You must have enabled this driver in the /etc/ironic/ironic.conf file. To create a node, you must, at a minimum, specify the driver name.

    Important

    Note the unique identifier for the node.

  3. You can refer to a node by a logical name or by its UUID. Optionally assign a logical name to the node:

    # ironic node-update NODE_UUID add name=NAME

    Replace NODE_UUID with the unique identifier for the node. Replace NAME with a logical name for the node.

  4. Determine the node information that is required by the driver, then update the node driver information to allow Bare Metal Provisioning to manage the node:

    # ironic driver-properties DRIVER_NAME
    # ironic node-update NODE_UUID add \
       driver_info/PROPERTY=VALUE \
       driver_info/PROPERTY=VALUE

    Replace the following values:

    • Replace DRIVER_NAME with the name of the driver for which to show properties. The information is not returned unless the driver has been enabled in the /etc/ironic/ironic.conf file.
    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace PROPERTY with a required property returned by the ironic driver-properties command.
    • Replace VALUE with a valid value for that property.
  5. Specify the deploy kernel and deploy ramdisk for the node driver:

    # ironic node-update NODE_UUID add \
      driver_info/deploy_kernel=KERNEL_UUID \
      driver_info/deploy_ramdisk=INITRAMFS_UUID

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
    • Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
  6. Configure the node to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of via PXE or virtual media. The local boot capability must also be set on the flavor used to provision the node. To enable local boot, the image used to deploy the node must contain grub2. Configure local boot:

    # ironic node-update NODE_UUID add \
       properties/capabilities="boot_option:local"

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.

  7. Move the bare metal node to manageable state:

    # ironic node-set-provision-state NODE_UUID manage

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.

  8. Start introspection:

    # openstack baremetal introspection start NODE_UUID --discoverd-url http://INSPECTOR_IP:5050
    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The node discovery and introspection process must run to completion before the node can be provisioned. To check the status of node introspection, run ironic node-list and look for Provision State. Nodes will be in the available state after successful introspection.
    • Replace INSPECTOR_IP with the host and port from the service_url value that was previously set in ironic.conf.
  9. Validate the node’s setup:

    # ironic node-validate NODE_UUID
    +------------+--------+----------------------------+
    | Interface  | Result | Reason                     |
    +------------+--------+----------------------------+
    | console    | None   | not supported              |
    | deploy     | True   |                            |
    | inspect    | True   |                            |
    | management | True   |                            |
    | power      | True   |                            |
    +------------+--------+----------------------------+

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The output of the command above should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.

2.3.2. Add a Node Manually

Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. All steps in the following procedure must be performed on the server hosting the Bare Metal Provisioning conductor service, while logged in as the root user.

Adding a Node without Hardware Introspection

  1. Set up the shell to use Identity as the administrative user:

    # source ~/keystonerc_admin
  2. Add a new node:

    # ironic node-create -d DRIVER_NAME

    Replace DRIVER_NAME with the name of the driver that Bare Metal Provisioning will use to provision this node. You must have enabled this driver in the /etc/ironic/ironic.conf file. To create a node, you must, at a minimum, specify the driver name.

    Important

    Note the unique identifier for the node.

  3. You can refer to a node by a logical name or by its UUID. Optionally assign a logical name to the node:

    # ironic node-update NODE_UUID add name=NAME

    Replace NODE_UUID with the unique identifier for the node. Replace NAME with a logical name for the node.

  4. Determine the node information that is required by the driver, then update the node driver information to allow Bare Metal Provisioning to manage the node:

    # ironic driver-properties DRIVER_NAME
    # ironic node-update NODE_UUID add \
       driver_info/PROPERTY=VALUE \
       driver_info/PROPERTY=VALUE

    Replace the following values:

    • Replace DRIVER_NAME with the name of the driver for which to show properties. The information is not returned unless the driver has been enabled in the /etc/ironic/ironic.conf file.
    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace PROPERTY with a required property returned by the ironic driver-properties command.
    • Replace VALUE with a valid value for that property.
  5. Specify the deploy kernel and deploy ramdisk for the node driver:

    # ironic node-update NODE_UUID add \
      driver_info/deploy_kernel=KERNEL_UUID \
      driver_info/deploy_ramdisk=INITRAMFS_UUID

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
    • Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
  6. Update the node’s properties to match the hardware specifications on the node:

    # ironic node-update NODE_UUID add \
       properties/cpus=CPU \
       properties/memory_mb=RAM_MB \
       properties/local_gb=DISK_GB \
       properties/cpu_arch=ARCH

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace CPU with the number of CPUs to use.
    • Replace RAM_MB with the RAM (in MB) to use.
    • Replace DISK_GB with the disk size (in GB) to use.
    • Replace ARCH with the architecture type to use.
  7. Configure the node to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of via PXE or virtual media. The local boot capability must also be set on the flavor used to provision the node. To enable local boot, the image used to deploy the node must contain grub2. Configure local boot:

    # ironic node-update NODE_UUID add \
       properties/capabilities="boot_option:local"

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.

  8. Inform Bare Metal Provisioning of the network interface cards on the node. Create a port with each NIC’s MAC address:

    # ironic port-create -n NODE_UUID -a MAC_ADDRESS

    Replace NODE_UUID with the unique identifier for the node. Replace MAC_ADDRESS with the MAC address for a NIC on the node.

  9. Validate the node’s setup:

    # ironic node-validate NODE_UUID
    +------------+--------+----------------------------+
    | Interface  | Result | Reason                     |
    +------------+--------+----------------------------+
    | console    | None   | not supported              |
    | deploy     | True   |                            |
    | inspect    | None   | not supported              |
    | management | True   |                            |
    | power      | True   |                            |
    +------------+--------+----------------------------+

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The output of the command above should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.

2.4. Use Host Aggregates to Separate Physical and Virtual Machine Provisioning

Host aggregates are used by OpenStack Compute to partition availability zones, and group nodes with specific shared properties together. Key value pairs are set both on the host aggregate and on instance flavors to define these properties. When an instance is provisioned, Compute’s scheduler compares the key value pairs on the flavor with the key value pairs assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine on an openstack-nova-compute node.

If your Red Hat OpenStack Platform environment is set up to provision both bare metal machines and virtual machines, use host aggregates to direct instances to spawn as either physical machines or virtual machines. The procedure below creates a host aggregate for bare metal hosts, and adds a key value pair specifying that the host type is baremetal. Any bare metal node grouped in this aggregate inherits this key value pair. The same key value pair is then added to the flavor that will be used to provision the instance.

If the image or images you will use to provision bare metal machines were uploaded to the Image service with the hypervisor_type=ironic property set, the scheduler also uses that property in its scheduling decision. To ensure effective scheduling in situations where image properties may not apply, set up host aggregates in addition to setting image properties. See Section 2.1.3, “Create the Bare Metal Images” for more information on building and uploading images.
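The matching that the Compute scheduler performs between flavor key-value pairs and aggregate metadata can be illustrated with a short sketch. This is a simplification of the real filter logic, using hypothetical aggregate metadata:

```python
# Hypothetical host aggregates, each carrying key-value metadata.
aggregates = {
    "baremetal": {"hypervisor_type": "ironic"},
    "virtual": {"hypervisor_type": "qemu"},
}

def matching_aggregates(flavor_specs, aggregates):
    """Return the aggregates whose metadata satisfies every flavor key-value pair."""
    return [
        name
        for name, metadata in aggregates.items()
        if all(metadata.get(key) == value for key, value in flavor_specs.items())
    ]

# A flavor carrying hypervisor_type=ironic is only placed on the baremetal aggregate.
print(matching_aggregates({"hypervisor_type": "ironic"}, aggregates))  # ['baremetal']
```

Because hosts added to an aggregate inherit its metadata, tagging both the aggregate and the flavor with the same pair is what steers bare metal requests away from virtual machine hosts.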

Creating a Host Aggregate for Bare Metal Provisioning

  1. Create the host aggregate for baremetal in the default nova availability zone:

    # nova aggregate-create baremetal nova
  2. Set metadata on the baremetal aggregate so that hosts added to the aggregate are assigned the hypervisor_type=ironic property:

    # nova aggregate-set-metadata baremetal hypervisor_type=ironic
  3. Add the openstack-nova-compute node with Bare Metal Provisioning drivers to the baremetal aggregate:

    # nova aggregate-add-host baremetal COMPUTE_HOSTNAME

    Replace COMPUTE_HOSTNAME with the host name of the system hosting the openstack-nova-compute service. A single, dedicated compute host should be used to handle all Bare Metal Provisioning requests.

  4. Add the ironic hypervisor property to the flavor or flavors that you have created for provisioning bare metal nodes:

    # nova flavor-key FLAVOR_NAME set hypervisor_type="ironic"

    Replace FLAVOR_NAME with the name of the flavor.

  5. Add the following Compute filter scheduler to the existing list under scheduler_default_filters in /etc/nova/nova.conf:

    AggregateInstanceExtraSpecsFilter

    This filter ensures that the Compute scheduler processes the key value pairs assigned to host aggregates.
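The resulting nova.conf entry might look like the following. The filters shown before the addition are only illustrative; keep whatever filters your deployment already lists and append AggregateInstanceExtraSpecsFilter:

```ini
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateInstanceExtraSpecsFilter
```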

2.5. Example: Test Bare Metal Provisioning with SSH and Virsh

Test the Bare Metal Provisioning setup by deploying instances on two virtual machines acting as bare metal nodes on a single physical host. Both virtual machines are virtualized using libvirt and virsh.

Important

The SSH driver is for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.

This scenario requires the following resources:

  • A Red Hat OpenStack Platform environment with Bare Metal Provisioning services configured on an overcloud node. You must have completed all steps in this guide.
  • One bare metal machine with Red Hat Enterprise Linux 7.2 and libvirt virtualization tools installed. This system acts as the host containing the virtualized bare metal nodes.
  • One network connection between the Bare Metal Provisioning node and the host containing the virtualized bare metal nodes. This network acts as the Bare Metal Provisioning Network.

2.5.1. Create the Virtualized Bare Metal Nodes

Create two virtual machines that will act as the bare metal nodes in the test scenario. The nodes will be referred to as Node1 and Node2.

Creating Virtualized Bare Metal Nodes

  1. Access the Virtual Machine Manager from the libvirt host.
  2. Create two virtual machines with the following configuration:

    • 1 vCPU
    • 2048 MB of memory
    • Network Boot (PXE)
    • 20 GB storage
    • Network source: Host device eth0: macvtap and Source mode: Bridge. Selecting macvtap sets the virtual machines to share the host’s Ethernet network interface. This way the Bare Metal Provisioning node has direct access to the virtualized nodes.
  3. Shut down both virtual machines.

2.5.2. Create an SSH Key Pair

Create an SSH key pair that will allow the Bare Metal Provisioning node to connect to the libvirt host.

Creating an SSH Key Pair

  1. On the Bare Metal Provisioning node, create a new SSH key:

    # ssh-keygen -t rsa -b 2048 -C "user@domain.com" -f ./virtkey

    Replace user@domain.com with an email address or other comment that identifies this key. When the command prompts you for a passphrase, press Enter to proceed without a passphrase. The command creates two files: the private key (virtkey) and the public key (virtkey.pub).

  2. Copy the contents of the public key into the /root/.ssh/authorized_keys file of the libvirt host’s root user:

    # ssh-copy-id -i virtkey root@LIBVIRT_HOST

    Replace LIBVIRT_HOST with the IP address or host name of the libvirt host.

The private key (virtkey) is used when the nodes are registered.

2.5.3. Add the Virtualized Nodes as Bare Metal Nodes

Add as nodes the virtual machines onto which you will provision instances. In this example, the driver details are provided manually and the node details are discovered using hardware introspection. Node details can also be added manually on a node-by-node basis. See Section 2.3.2, “Add a Node Manually” for more information.

Adding Virtualized Nodes as Bare Metal Nodes

  1. On the Bare Metal Provisioning conductor service node, enable the pxe_ssh driver:

    # openstack-config --set /etc/ironic/ironic.conf \
       DEFAULT enabled_drivers pxe_ssh

    If you are adding pxe_ssh to a list of existing drivers, open the file and append pxe_ssh to the comma-separated list in enabled_drivers; the openstack-config command above replaces any existing value.

  2. Set up the shell to use Identity as the administrative user:

    # source ~/keystonerc_admin
  3. Add the first node, and register the SSH details for the libvirt host:

    # ironic node-create -d pxe_ssh -n Node1 \
       -i ssh_virt_type=virsh \
       -i ssh_username=root \
       -i ssh_key_filename=VIRTKEY_FILE_PATH \
       -i ssh_address=LIBVIRT_HOST_IP \
       -i deploy_kernel=KERNEL_UUID \
       -i deploy_ramdisk=INITRAMFS_UUID

    Replace the following values:

    • Replace VIRTKEY_FILE_PATH with the absolute file path of the virtkey SSH private key file.
    • Replace LIBVIRT_HOST_IP with the IP address or host name of the libvirt host.
    • Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
    • Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
  4. Add a second node, using the same command as above, and replacing Node1 with Node2.
  5. Configure the node to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of via PXE or virtual media. The local boot capability must also have been set on the flavor you will use to provision the node. To enable local boot, the image used to deploy the node must contain grub2. Configure local boot:

    # ironic node-update Node1 add \
       properties/capabilities="boot_option:local"
    # ironic node-update Node2 add \
       properties/capabilities="boot_option:local"
  6. Move the nodes to the manageable state:

    # ironic node-set-provision-state Node1 manage
    # ironic node-set-provision-state Node2 manage
  7. Start introspection on the nodes:

    # ironic node-set-provision-state Node1 inspect
    # ironic node-set-provision-state Node2 inspect

    The node discovery and introspection process must run to completion before the node can be provisioned. To check the status of node introspection, run ironic node-list and look for Provision State. Nodes will be in the available state after successful introspection.

  8. Validate the setup of the nodes:

    # ironic node-validate Node1
    # ironic node-validate Node2
    +------------+--------+----------------------------+
    | Interface  | Result | Reason                     |
    +------------+--------+----------------------------+
    | console    | None   | not supported              |
    | deploy     | True   |                            |
    | inspect    | True   |                            |
    | management | True   |                            |
    | power      | True   |                            |
    +------------+--------+----------------------------+

    The output of the command above should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.

  9. When the nodes have been successfully added, launch two instances by following Chapter 3, Launch Bare Metal Instances.