Appendix B. Power Management Drivers

Although IPMI is the main method the director uses for power management control, the director also supports other power management types. This appendix lists the supported power management drivers and their settings. Use these power management settings for Section 6.1, “Registering Nodes for the Overcloud”.

B.1. Redfish

Redfish is a standard RESTful API for IT infrastructure developed by the Distributed Management Task Force (DMTF).

pm_type
Set this option to redfish.
pm_user; pm_password
The Redfish username and password.
pm_addr
The IP address of the Redfish controller.
pm_system_id
The canonical path to the system resource. This path should include the root service, version, and the path and unique ID for the system. For example: /redfish/v1/Systems/CX34R87.
redfish_verify_ca
If the Redfish service in your baseboard management controller (BMC) is not configured to use a valid TLS certificate signed by a recognized certificate authority (CA), the Redfish client in ironic fails to connect to the BMC. Set the redfish_verify_ca option to false to mute the error. However, be aware that disabling certificate verification compromises the security of the connection to your BMC.
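For example, a Redfish node entry in your /home/stack/instackenv.json node registration file might look like the following sketch. The address and credentials are placeholder values:

{
  "nodes": [
    {
      "pm_type": "redfish",
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.10",
      "pm_system_id": "/redfish/v1/Systems/CX34R87",
      "name": "node01"
    }
  ]
}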

B.2. Dell Remote Access Controller (DRAC)

DRAC is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type
Set this option to idrac.
pm_user; pm_password
The DRAC username and password.
pm_addr
The IP address of the DRAC host.
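For example, a DRAC node entry in your /home/stack/instackenv.json node registration file might look like the following sketch, with placeholder address and credentials:

{
  "nodes": [
    {
      "pm_type": "idrac",
      "pm_user": "root",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.20",
      "name": "node02"
    }
  ]
}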

B.3. Integrated Lights-Out (iLO)

iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type
Set this option to ilo.
pm_user; pm_password
The iLO username and password.
pm_addr
The IP address of the iLO interface.

  • To enable this driver, add ilo to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install.
  • The director also requires an additional set of utilities for iLO. Install the python-proliantutils package and restart the openstack-ironic-conductor service:

    $ sudo yum install python-proliantutils
    $ sudo systemctl restart openstack-ironic-conductor.service
  • HP nodes must have a minimum iLO firmware version of 1.85 (May 13 2015) for successful introspection. The director has been successfully tested with nodes using this iLO firmware version.
  • Using a shared iLO port is not supported.
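For example, an iLO node entry in your /home/stack/instackenv.json node registration file might look like the following sketch, with placeholder address and credentials:

{
  "nodes": [
    {
      "pm_type": "ilo",
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.30",
      "name": "node03"
    }
  ]
}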

B.4. Cisco Unified Computing System (UCS)

Note

Cisco UCS is deprecated and will be removed in Red Hat OpenStack Platform (RHOSP) 16.0.

UCS from Cisco is a data center platform that unites compute, network, storage access, and virtualization resources. This driver focuses on the power management for bare metal systems connected to the UCS.

pm_type
Set this option to cisco-ucs-managed.
pm_user; pm_password
The UCS username and password.
pm_addr
The IP address of the UCS interface.
pm_service_profile
The UCS service profile to use. Usually takes the format of org-root/ls-[service_profile_name]. For example:

"pm_service_profile": "org-root/ls-Nova-1"
  • To enable this driver, add cisco-ucs-managed to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install.
  • The director also requires an additional set of utilities for UCS. Install the python-UcsSdk package and restart the openstack-ironic-conductor service:

    $ sudo yum install python-UcsSdk
    $ sudo systemctl restart openstack-ironic-conductor.service
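For example, a UCS node entry in your /home/stack/instackenv.json node registration file might look like the following sketch. The address, credentials, and service profile are placeholder values:

{
  "nodes": [
    {
      "pm_type": "cisco-ucs-managed",
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.40",
      "pm_service_profile": "org-root/ls-Nova-1",
      "name": "node04"
    }
  ]
}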

B.5. Fujitsu Integrated Remote Management Controller (iRMC)

Fujitsu’s iRMC is a Baseboard Management Controller (BMC) with integrated LAN connection and extended functionality. This driver focuses on the power management for bare metal systems connected to the iRMC.

Important

iRMC S4 or higher is required.

pm_type
Set this option to irmc.
pm_user; pm_password
The username and password for the iRMC interface.
pm_addr
The IP address of the iRMC interface.
pm_port (Optional)
The port to use for iRMC operations. The default is 443.
pm_auth_method (Optional)
The authentication method for iRMC operations. Use either basic or digest. The default is basic.
pm_client_timeout (Optional)
Timeout (in seconds) for iRMC operations. The default is 60 seconds.
pm_sensor_method (Optional)
Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.

  • To enable this driver, add irmc to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install.
  • The director also requires an additional set of utilities if you enabled SCCI as the sensor method. Install the python-scciclient package and restart the openstack-ironic-conductor service:

    $ sudo yum install python-scciclient
    $ sudo systemctl restart openstack-ironic-conductor.service
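For example, an iRMC node entry in your /home/stack/instackenv.json node registration file might look like the following sketch. The address and credentials are placeholder values, and the optional parameters show their default values:

{
  "nodes": [
    {
      "pm_type": "irmc",
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.50",
      "pm_port": "443",
      "pm_auth_method": "basic",
      "name": "node05"
    }
  ]
}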

B.6. Virtual Baseboard Management Controller (VBMC)

The director can use virtual machines as nodes on a KVM host. It controls their power management through emulated IPMI devices. This allows you to use the standard IPMI parameters from Section 6.1, “Registering Nodes for the Overcloud” but for virtual nodes.

Important

This option uses virtual machines instead of bare metal nodes. This means it is available for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.

Configuring the KVM Host

  1. On the KVM host, enable the OpenStack Platform repository and install the python2-virtualbmc package:

    $ sudo subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
    $ sudo yum install -y python2-virtualbmc
  2. Create a virtual baseboard management controller (BMC) for each virtual machine using the vbmc command. For example, the following commands create a BMC for each of the virtual machines Node01 and Node02, define the port used to access each BMC, and set the authentication details:

    $ vbmc add Node01 --port 6230 --username admin --password PASSWORD
    $ vbmc add Node02 --port 6231 --username admin --password PASSWORD
  3. Open the corresponding ports on the host:

    $ sudo firewall-cmd --zone=public \
    --add-port=6230/udp \
    --add-port=6231/udp
  4. Make the changes persistent:

    $ sudo firewall-cmd --runtime-to-permanent
  5. Verify that your changes are applied to the firewall settings and the ports are open:

    $ sudo firewall-cmd --list-all
    Note

    Use a different port for each virtual machine. Port numbers lower than 1025 require root privileges in the system.

  6. Start each of the BMCs you have created using the following commands:

    $ vbmc start Node01
    $ vbmc start Node02
    Note

    You must repeat this step after rebooting the KVM host.

  7. To verify that you can manage the nodes using ipmitool, display the power status of a remote node:

    $ ipmitool -I lanplus -U admin -P PASSWORD -H 127.0.0.1 -p 6231 power status
    Chassis Power is off

Registering Nodes

Use the following parameters in your /home/stack/instackenv.json node registration file:

pm_type
Set this option to ipmi.
pm_user; pm_password
Specify the IPMI username and password for the node’s virtual BMC device.
pm_addr
Specify the IP address of the KVM host that contains the node.
pm_port
Specify the port to access the specific node on the KVM host.
mac
Specify a list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.

For example:

{
  "nodes": [
    {
      "pm_type": "ipmi",
      "mac": [
        "aa:aa:aa:aa:aa:aa"
      ],
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.1",
      "pm_port": "6230",
      "name": "Node01"
    },
    {
      "pm_type": "ipmi",
      "mac": [
        "bb:bb:bb:bb:bb:bb"
      ],
      "pm_user": "admin",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.1",
      "pm_port": "6231",
      "name": "Node02"
    }
  ]
}

Migrating Existing Nodes

You can migrate existing nodes from the deprecated pxe_ssh driver to the new virtual BMC method. The following command is an example that sets a node to use the ipmi driver and its parameters:

$ openstack baremetal node set Node01 \
    --driver ipmi \
    --driver-info ipmi_address=192.168.0.1 \
    --driver-info ipmi_port=6230 \
    --driver-info ipmi_username="admin" \
    --driver-info ipmi_password="p455w0rd!"

B.7. Red Hat Virtualization

This driver provides control over virtual machines in Red Hat Virtualization through its RESTful API.

pm_type
Set this option to staging-ovirt.
pm_user; pm_password
The username and password for your Red Hat Virtualization environment. The username also includes the authentication provider. For example: admin@internal.
pm_addr
The IP address of the Red Hat Virtualization REST API.
pm_vm_name
The name of the virtual machine to control.
mac
A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.

To enable this driver, complete the following steps:

  1. Add staging-ovirt to the enabled_hardware_types option in your undercloud.conf file:

    enabled_hardware_types = ipmi,staging-ovirt
  2. Install the python-ovirt-engine-sdk4 package:

    $ sudo yum install python-ovirt-engine-sdk4
  3. Run the openstack undercloud install command:

    $ openstack undercloud install
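For example, a Red Hat Virtualization node entry in your /home/stack/instackenv.json node registration file might look like the following sketch. The address, credentials, virtual machine name, and MAC address are placeholder values:

{
  "nodes": [
    {
      "pm_type": "staging-ovirt",
      "mac": [
        "cc:cc:cc:cc:cc:cc"
      ],
      "pm_user": "admin@internal",
      "pm_password": "p455w0rd!",
      "pm_addr": "192.168.0.60",
      "pm_vm_name": "Node06",
      "name": "node06"
    }
  ]
}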

B.8. Fake Driver

This driver provides a method to use bare metal devices without power management. This means that the director does not control the registered bare metal devices, and you must manually control power at certain points in the introspection and deployment processes.

Important

This option is available for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.

pm_type
Set this option to fake_pxe.

  • This driver does not use any authentication details because it does not control power management.
  • To enable this driver, add fake_pxe to the enabled_drivers option in your undercloud.conf and rerun openstack undercloud install.
  • In your instackenv.json node inventory file, set the pm_type to fake_pxe for the nodes that you want to manage manually.
  • When performing introspection on nodes, manually power on the nodes after running the openstack overcloud node introspect command.
  • When performing overcloud deployment, check the node status with the ironic node-list command. Wait until the node status changes from deploying to deploy wait-callback, and then manually power on the nodes.
  • After the overcloud provisioning process completes, you must reboot the nodes manually. To check the completion of provisioning, check the node status with the ironic node-list command; when the node status changes to active, manually reboot all overcloud nodes.
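For example, a node entry in your /home/stack/instackenv.json node registration file that uses the fake driver might look like the following sketch. The MAC address is a placeholder value, and no power management credentials are required because the driver does not control power management:

{
  "nodes": [
    {
      "pm_type": "fake_pxe",
      "mac": [
        "dd:dd:dd:dd:dd:dd"
      ],
      "name": "node07"
    }
  ]
}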