Appendix B. Power Management Drivers

Although IPMI is the main method the director uses for power management control, the director also supports other power management types. This appendix provides a list of the supported power management features. Use these power management settings for Section 6.1, “Registering Nodes for the Overcloud”.

B.1. Dell Remote Access Controller (DRAC)

DRAC is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type
Set this option to pxe_drac.
pm_user; pm_password
The DRAC username and password.
pm_addr

The IP address of the DRAC host.

  • To enable this driver, add pxe_drac to the enabled_drivers option in your undercloud.conf file, then rerun the openstack undercloud install command, as sketched below.
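
For example, a minimal sketch of this change (the pxe_ipmitool entry is only illustrative; keep whatever drivers your undercloud.conf already lists):

enabled_drivers = pxe_ipmitool,pxe_drac

$ openstack undercloud install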

B.2. Integrated Lights-Out (iLO)

iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type
Set this option to pxe_ilo.
pm_user; pm_password
The iLO username and password.
pm_addr

The IP address of the iLO interface.

  • To enable this driver, add pxe_ilo to the enabled_drivers option in your undercloud.conf file, then rerun the openstack undercloud install command.
  • The director also requires an additional set of utilities for iLO. Install the python-proliantutils package and restart the openstack-ironic-conductor service:

    $ sudo yum install python-proliantutils
    $ sudo systemctl restart openstack-ironic-conductor.service
  • HP nodes must have a 2015 firmware version for successful introspection. The director has been successfully tested with nodes using firmware version 1.85 (May 13 2015).
  • Using a shared iLO port is not supported.
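
For example, a registration file entry for an iLO-managed node might look like the following sketch (every value here is a placeholder, not a tested configuration):

{
  "pm_type": "pxe_ilo",
  "mac": [
    "aa:aa:aa:aa:aa:aa"
  ],
  "pm_user": "Administrator",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.0.101",
  "name": "ilo-node01"
}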

B.3. Cisco Unified Computing System (UCS)

UCS from Cisco is a data center platform that unites compute, network, storage access, and virtualization resources. This driver focuses on the power management for bare metal systems connected to the UCS.

pm_type
Set this option to pxe_ucs.
pm_user; pm_password
The UCS username and password.
pm_addr
The IP address of the UCS interface.
pm_service_profile

The UCS service profile to use. Usually takes the format of org-root/ls-[service_profile_name]. For example:

"pm_service_profile": "org-root/ls-Nova-1"
  • To enable this driver, add pxe_ucs to the enabled_drivers option in your undercloud.conf file, then rerun the openstack undercloud install command.
  • The director also requires an additional set of utilities for UCS. Install the python-UcsSdk package and restart the openstack-ironic-conductor service:

    $ sudo yum install python-UcsSdk
    $ sudo systemctl restart openstack-ironic-conductor.service
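
For example, a registration file entry for a UCS-managed node might look like the following sketch (all values are placeholders):

{
  "pm_type": "pxe_ucs",
  "mac": [
    "cc:cc:cc:cc:cc:cc"
  ],
  "pm_user": "admin",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.0.102",
  "pm_service_profile": "org-root/ls-Nova-1",
  "name": "ucs-node01"
}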

B.4. Fujitsu Integrated Remote Management Controller (iRMC)

Fujitsu’s iRMC is a Baseboard Management Controller (BMC) with integrated LAN connection and extended functionality. This driver focuses on the power management for bare metal systems connected to the iRMC.

Important

iRMC S4 or higher is required.

pm_type
Set this option to pxe_irmc.
pm_user; pm_password
The username and password for the iRMC interface.
pm_addr
The IP address of the iRMC interface.
pm_port (Optional)
The port to use for iRMC operations. The default is 443.
pm_auth_method (Optional)
The authentication method for iRMC operations. Use either basic or digest. The default is basic.
pm_client_timeout (Optional)
Timeout (in seconds) for iRMC operations. The default is 60 seconds.
pm_sensor_method (Optional)

Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.

  • To enable this driver, add pxe_irmc to the enabled_drivers option in your undercloud.conf file, then rerun the openstack undercloud install command.
  • The director also requires an additional set of utilities if you enabled SCCI as the sensor method. Install the python-scciclient package and restart the openstack-ironic-conductor service:

    $ sudo yum install python-scciclient
    $ sudo systemctl restart openstack-ironic-conductor.service
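
For example, a registration file entry for an iRMC-managed node, including the optional parameters, might look like the following sketch (every value is a placeholder, not a tested configuration):

{
  "pm_type": "pxe_irmc",
  "mac": [
    "dd:dd:dd:dd:dd:dd"
  ],
  "pm_user": "admin",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.0.103",
  "pm_port": "443",
  "pm_auth_method": "digest",
  "pm_client_timeout": "60",
  "pm_sensor_method": "scci",
  "name": "irmc-node01"
}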

B.5. Virtual Baseboard Management Controller (VBMC)

The director can use virtual machines as nodes on a KVM host. It controls their power management through emulated IPMI devices. This allows you to use the standard IPMI parameters from Section 6.1, “Registering Nodes for the Overcloud” but for virtual nodes.

Important

This option uses virtual machines instead of bare metal nodes. This means it is available for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.

Configuring the KVM Host

On the KVM host, enable the OpenStack Platform repository and install the python-virtualbmc package:

$ sudo subscription-manager repos --enable=rhel-7-server-openstack-12-rpms
$ sudo yum install -y python-virtualbmc

Create a virtual baseboard management controller (BMC) for each virtual machine using the vbmc command. For example, to create a BMC for virtual machines named Node01 and Node02, run the following commands:

$ vbmc add Node01 --port 6230 --username admin --password p455w0rd!
$ vbmc add Node02 --port 6231 --username admin --password p455w0rd!

This defines the port to access each BMC and sets each BMC’s authentication details.

Note

Use a different port for each virtual machine. Port numbers lower than 1025 require root privileges in the system.

Start each BMC with the following commands:

$ vbmc start Node01
$ vbmc start Node02
Note

You must repeat this step after rebooting the KVM host.
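
To confirm that each BMC is running and responding, you can query it with the vbmc and ipmitool commands. This is an optional sketch; it assumes ipmitool is installed on the KVM host and reuses the port and credentials from the vbmc add commands above:

$ vbmc list
$ ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P p455w0rd! power status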

Registering Nodes

Use the following parameters in your node registration file (/home/stack/instackenv.json):

pm_type
Set this option to pxe_ipmitool.
pm_user; pm_password
The IPMI username and password for the node’s virtual BMC device.
pm_addr
The IP address of the KVM host that contains the node.
pm_port
The port to access the specific node on the KVM host.
mac
A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.

For example:

{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "mac": [
        "aa:aa:aa:aa:aa:aa"
      ],
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.1",
      "pm_port": "6230",
      "name": "Node01"
    },
    {
      "pm_type": "pxe_ipmitool",
      "mac": [
        "bb:bb:bb:bb:bb:bb"
      ],
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.1",
      "pm_port": "6231",
      "name": "Node02"
    }
  ]
}

Migrating Existing Nodes

You can migrate existing nodes from the deprecated pxe_ssh driver to the new virtual BMC method. The following command is an example that sets a node to use the pxe_ipmitool driver and its parameters:

openstack baremetal node set Node01 \
    --driver pxe_ipmitool \
    --driver-info ipmi_address=192.168.0.1 \
    --driver-info ipmi_port=6230 \
    --driver-info ipmi_username="admin" \
    --driver-info ipmi_password="p455w0rd!"
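
To confirm the migrated settings, you can inspect and validate the node. This is an optional sketch using standard Bare Metal (ironic) client commands:

$ openstack baremetal node show Node01 --fields driver driver_info
$ openstack baremetal node validate Node01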

B.6. Fake PXE Driver

This driver provides a method to use bare metal devices without power management. This means that the director does not control the registered bare metal devices, so you must manually control power at certain points in the introspection and deployment processes.

Important

This option is available for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.

pm_type

Set this option to fake_pxe.

  • This driver does not use any authentication details because it does not control power management.
  • To enable this driver, add fake_pxe to the enabled_drivers option in your undercloud.conf file, then rerun the openstack undercloud install command.
  • When performing introspection on nodes, manually power on the nodes after running the openstack overcloud node introspect command.
  • When performing overcloud deployment, check the node status with the ironic node-list command. Wait until the node status changes from deploying to wait call-back, then manually power on the nodes.
  • After the overcloud provisioning process completes, check the node status with the ironic node-list command. When the node status changes to active, manually reboot all overcloud nodes. A simple way to watch these status changes is sketched after this list.
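
For example, a minimal sketch for monitoring these status changes from the undercloud (assuming the watch utility is installed):

# Refresh the node list every 10 seconds to follow the provisioning state transitions
$ watch -n 10 "ironic node-list"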