Chapter 30. Power management drivers

Although IPMI is the main method that director uses for power management control, director also supports other power management types. This chapter contains a list of the power management features that director supports. Use these power management settings when you register nodes for the overcloud. For more information, see Registering nodes for the overcloud.

30.1. Intelligent Platform Management Interface (IPMI)

The standard power management method when you use a baseboard management controller (BMC).

pm_type
Set this option to ipmi.
pm_user; pm_password
The IPMI username and password.
pm_addr
The IP address of the IPMI controller.
pm_port (Optional)
The port to connect to the IPMI controller.
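For illustration, a node definition in an instackenv.json file that uses the IPMI driver might look like the following sketch. The node name, MAC address, credentials, and IP address are placeholder values; port 623 is the conventional IPMI port:

```json
{
  "nodes": [
    {
      "name": "controller-0",
      "mac": ["aa:bb:cc:dd:ee:01"],
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.205",
      "pm_port": "623"
    }
  ]
}
```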

30.2. Redfish

A standard RESTful API for IT infrastructure developed by the Distributed Management Task Force (DMTF).

pm_type
Set this option to redfish.
pm_user; pm_password
The Redfish username and password.
pm_addr
The IP address of the Redfish controller.
pm_system_id
The canonical path to the system resource. This path must include the root service, version, and the path/unique ID for the system. For example: /redfish/v1/Systems/CX34R87.
redfish_verify_ca
If the Redfish service in your baseboard management controller (BMC) is not configured to use a valid TLS certificate signed by a recognized certificate authority (CA), the Redfish client in ironic fails to connect to the BMC. Set the redfish_verify_ca option to false to mute the error. However, be aware that disabling certificate verification compromises the security of the connection to your BMC.
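For illustration, a Redfish node definition in an instackenv.json file might look like the following sketch. The node name, MAC address, credentials, and IP address are placeholder values, and the pm_system_id value reuses the example path from this section:

```json
{
  "nodes": [
    {
      "name": "compute-0",
      "mac": ["aa:bb:cc:dd:ee:02"],
      "pm_type": "redfish",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.206",
      "pm_system_id": "/redfish/v1/Systems/CX34R87",
      "redfish_verify_ca": "false"
    }
  ]
}
```

In this sketch, redfish_verify_ca is set to "false" only to illustrate the option described above; omit it when your BMC presents a valid CA-signed certificate.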

30.3. Dell Remote Access Controller (DRAC)

DRAC is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type
Set this option to idrac.
pm_user; pm_password
The DRAC username and password.
pm_addr
The IP address of the DRAC host.
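For illustration, a DRAC node definition in an instackenv.json file might look like the following sketch. The node name, MAC address, credentials, and IP address are placeholder values:

```json
{
  "nodes": [
    {
      "name": "compute-1",
      "mac": ["aa:bb:cc:dd:ee:03"],
      "pm_type": "idrac",
      "pm_user": "root",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.207"
    }
  ]
}
```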

30.4. Integrated Lights-Out (iLO)

iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type
Set this option to ilo.
pm_user; pm_password
The iLO username and password.
pm_addr
The IP address of the iLO interface.

  • To enable this driver, add ilo to the enabled_hardware_types option in your undercloud.conf file and rerun the openstack undercloud install command.
  • HP nodes must have a minimum iLO firmware version of 1.85 (May 13 2015) for successful introspection. Director has been successfully tested with nodes that use this iLO firmware version.
  • Using a shared iLO port is not supported.
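For illustration, an iLO node definition in an instackenv.json file might look like the following sketch. The node name, MAC address, credentials, and IP address are placeholder values:

```json
{
  "nodes": [
    {
      "name": "compute-2",
      "mac": ["aa:bb:cc:dd:ee:04"],
      "pm_type": "ilo",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.208"
    }
  ]
}
```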

30.5. Fujitsu Integrated Remote Management Controller (iRMC)

Fujitsu iRMC is a baseboard management controller (BMC) with an integrated LAN connection and extended functionality. This driver provides power management for bare metal systems connected to the iRMC.

Important

iRMC S4 or higher is required.

pm_type
Set this option to irmc.
pm_user; pm_password
The username and password for the iRMC interface.

Important

The iRMC user must have the ADMINISTRATOR privilege.

pm_addr
The IP address of the iRMC interface.
pm_port (Optional)
The port for iRMC operations. The default is 443.
pm_auth_method (Optional)
The authentication method for iRMC operations. Use either basic or digest. The default is basic.
pm_client_timeout (Optional)
Timeout, in seconds, for iRMC operations. The default is 60 seconds.
pm_sensor_method (Optional)
Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.

  • To enable this driver, add irmc to the enabled_hardware_types option in your undercloud.conf file and rerun the openstack undercloud install command.
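For illustration, an iRMC node definition in an instackenv.json file might look like the following sketch. The node name, MAC address, credentials, and IP address are placeholder values; the optional parameters are shown with their default values from this section:

```json
{
  "nodes": [
    {
      "name": "compute-3",
      "mac": ["aa:bb:cc:dd:ee:05"],
      "pm_type": "irmc",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.209",
      "pm_port": "443",
      "pm_auth_method": "basic",
      "pm_client_timeout": "60",
      "pm_sensor_method": "ipmitool"
    }
  ]
}
```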

iRMC with UEFI boot mode

iRMC requires ipxe.efi to boot in UEFI mode. To boot in UEFI mode, disable the default behavior that uses the SNP (Simple Network Protocol) iPXE EFI binary, and reinstall the undercloud.

Procedure

  1. Create a custom environment file, for example, /home/stack/templates/irmc-uefi-boot.yaml.
  2. Add the following configuration to the custom environment file:

    parameter_defaults:
      IronicIPXEUefiSnpOnly: false
  3. Edit the custom_env_files parameter in your undercloud.conf file to add your custom environment file:

    custom_env_files = /home/stack/templates/irmc-uefi-boot.yaml
    Note

    You can specify multiple environment files by using a comma-separated list.

  4. Re-install the undercloud to apply your configuration updates:

    $ openstack undercloud install

30.6. Red Hat Virtualization

This driver provides control over virtual machines in Red Hat Virtualization (RHV) through its RESTful API.

pm_type
Set this option to staging-ovirt.
pm_user; pm_password
The username and password for your RHV environment. The username also includes the authentication provider. For example: admin@internal.
pm_addr
The IP address of the RHV REST API.
pm_vm_name
The name of the virtual machine to control.
mac
A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.

  • To enable this driver, add staging-ovirt to the enabled_hardware_types option in your undercloud.conf file and rerun the openstack undercloud install command.
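For illustration, an RHV node definition in an instackenv.json file might look like the following sketch. The node name, MAC address, credentials, IP address, and virtual machine name are placeholder values; the pm_user value shows the admin@internal format described above:

```json
{
  "nodes": [
    {
      "name": "osp-node-1",
      "mac": ["aa:bb:cc:dd:ee:06"],
      "pm_type": "staging-ovirt",
      "pm_user": "admin@internal",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.210",
      "pm_vm_name": "osp-node-1"
    }
  ]
}
```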

30.7. manual-management Driver

Use the manual-management driver to control bare metal devices that do not have power management. Director does not control the registered bare metal devices, and you must perform manual power operations at certain points in the introspection and deployment processes.

Important

This option is available only for testing and evaluation purposes. It is not recommended for Red Hat OpenStack Platform enterprise environments.

pm_type
Set this option to manual-management.

  • This driver does not use any authentication details because it does not control power management.
  • To enable this driver, add manual-management to the enabled_hardware_types option in your undercloud.conf file and rerun the openstack undercloud install command.
  • In your instackenv.json node inventory file, set the pm_type to manual-management for the nodes that you want to manage manually.
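For illustration, a manual-management node definition in an instackenv.json file might look like the following minimal sketch. Because the driver does not control power management, no pm_user, pm_password, or pm_addr values are required; the node name and MAC address are placeholder values:

```json
{
  "nodes": [
    {
      "name": "manual-node-0",
      "mac": ["aa:bb:cc:dd:ee:07"],
      "pm_type": "manual-management"
    }
  ]
}
```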

Introspection

  • When performing introspection on nodes, manually start the nodes after running the openstack overcloud node introspect command. Ensure the nodes boot through PXE.
  • If you have enabled node cleaning, manually reboot the nodes after the Introspection completed message appears and the status of each node shows clean wait in the output of the openstack baremetal node list command. Ensure the nodes boot through PXE.
  • After the introspection and cleaning process completes, shut down the nodes.

Deployment

  • When performing overcloud deployment, check the node status with the openstack baremetal node list command. Wait until the node status changes from deploying to wait call-back and then manually start the nodes. Ensure the nodes boot through PXE.
  • After the overcloud provisioning process completes, the nodes shut down. You must boot the nodes from disk to start the configuration process. To check that provisioning is complete, check the node status with the openstack baremetal node list command and wait until the status of each node changes to active. When the node status is active, manually boot the provisioned overcloud nodes.