Chapter 3. Deploying an Overcloud with the Bare Metal Service

For full details about overcloud deployment with the director, see Director Installation and Usage. This chapter covers only the deployment steps specific to ironic.

3.1. Creating the Ironic template

Use an environment file to deploy the overcloud with the Bare Metal service enabled. A template is located on the director node at /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml.

Filling in the template

Additional configuration can be specified either in the provided template or in an additional yaml file, for example ~/templates/ironic.yaml.

  • For a hybrid deployment with both bare metal and virtual instances, you must add AggregateInstanceExtraSpecsFilter to the list of NovaSchedulerDefaultFilters. If you have not set NovaSchedulerDefaultFilters anywhere, you can do so in ironic.yaml. For an example, see Section 3.4, “Example Templates”.

    Note

    If you are using SR-IOV, NovaSchedulerDefaultFilters is already set in tripleo-heat-templates/environments/neutron-sriov.yaml. Append AggregateInstanceExtraSpecsFilter to this list.

  • The type of cleaning that occurs before and between deployments is set by IronicCleaningDiskErase. By default, this is set to ‘full’ by deployment/ironic/ironic-conductor-container-puppet.yaml. Setting this to ‘metadata’ can substantially speed up the process because it cleans only the partition table. However, because the deployment is less secure in a multi-tenant environment, use this setting only in a trusted tenant environment.
  • You can add drivers with the IronicEnabledHardwareTypes parameter. By default, ipmi and redfish are enabled.
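
For example, to enable an additional hardware type alongside the defaults, set the parameter in ~/templates/ironic.yaml. In the following sketch, idrac is an illustrative addition; enable only the hardware types that your nodes require:

parameter_defaults:
  IronicEnabledHardwareTypes:
    - ipmi
    - redfish
    - idrac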

For a full list of configuration parameters, see Bare Metal in the Overcloud Parameters guide.

3.2. Configuring the undercloud for bare metal provisioning over IPv6

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations:

  • Stateful DHCPv6 is available only with a limited set of UEFI firmware. For more information, see Bugzilla #1575026.
  • Dual stack IPv4/6 is not available.
  • Tempest validations might not perform correctly.
  • IPv4 to IPv6 migration is not available during upgrades.

Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform.

Procedure

  1. Copy the sample undercloud.conf file, or modify your existing undercloud.conf file.
  2. Set the following parameter values in the undercloud.conf file:

    1. Set ipv6_address_mode to dhcpv6-stateless, or to dhcpv6-stateful if your NIC supports stateful DHCPv6 with Red Hat OpenStack Platform. For more information about stateful DHCPv6 availability, see Bugzilla #1575026.
    2. Set enable_routed_networks to true if you do not want the undercloud to create a router on the provisioning network. In this case, the data center router must provide router advertisements. Otherwise, set this value to false.
    3. Set local_ip to the IPv6 address of the undercloud.
    4. Use IPv6 addressing for the undercloud interface parameters undercloud_public_host and undercloud_admin_host.
    5. In the [ctlplane-subnet] section, use IPv6 addressing in the following parameters:

      • cidr
      • dhcp_start
      • dhcp_end
      • gateway
      • inspection_iprange
    6. In the [ctlplane-subnet] section, set an IPv6 nameserver for the subnet in the dns_nameservers parameter.

      ipv6_address_mode = dhcpv6-stateless
      enable_routed_networks = false
      local_ip = <ipv6-address>
      undercloud_admin_host = <ipv6-address>
      undercloud_public_host = <ipv6-address>
      
      [ctlplane-subnet]
      cidr = <ipv6-address>::<ipv6-mask>
      dhcp_start = <ipv6-address>
      dhcp_end = <ipv6-address>
      dns_nameservers = <ipv6-dns>
      gateway = <ipv6-address>
      inspection_iprange = <ipv6-address>,<ipv6-address>

3.3. Network Configuration

If you use the default flat bare metal network, you must create a bridge br-baremetal for ironic to use. You can specify this in an additional template:

~/templates/network-environment.yaml

parameter_defaults:
  NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal
  NeutronFlatNetworks: datacentre,baremetal

You can configure this bridge either in the provisioning network (control plane) of the controllers, so that you can reuse this network as the bare metal network, or add a dedicated network. The configuration requirements are the same in both cases; however, the bare metal network cannot be VLAN-tagged because it is used for provisioning.

~/templates/nic-configs/controller.yaml

network_config:
  - type: ovs_bridge
    name: br-baremetal
    use_dhcp: false
    members:
      - type: interface
        name: eth1
Note

The Bare Metal service in the overcloud is designed for a trusted tenant environment, as the bare metal nodes have direct access to the control plane network of your OpenStack installation.

3.3.1. Configuring a custom IPv4 provisioning network

The default flat provisioning network can introduce security concerns in a customer environment as a tenant can interfere with the undercloud network. To prevent this risk, you can configure a custom composable bare metal provisioning network for ironic services that does not have access to the control plane:

  1. Configure the shell to access Identity as the administrative user:

    $ source ~/stackrc
  2. Copy the network_data.yaml file:

    (undercloud) [stack@host01 ~]$ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml .
  3. Edit the new network_data.yaml file and add a new network for IPv4 overcloud provisioning:

    # custom network for overcloud provisioning
    - name: OcProvisioning
      name_lower: oc_provisioning
      vip: true
      vlan: 205
      ip_subnet: '172.23.3.0/24'
      allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.200'}]
  4. Update the network-environment.yaml and nic-configs/controller.yaml files to use the new network.

    1. In the network-environment.yaml file, remap the ironic networks:

      ServiceNetMap:
         IronicApiNetwork: oc_provisioning
         IronicNetwork: oc_provisioning
    2. In the nic-configs/controller.yaml file, add an interface and necessary parameters:

      network_config:
        - type: vlan
          vlan_id:
            get_param: OcProvisioningNetworkVlanID
          addresses:
            - ip_netmask:
                get_param: OcProvisioningIpSubnet
  5. Copy the roles_data.yaml file:

    (undercloud) [stack@host01 ~]$ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml .
  6. Edit the new roles_data.yaml and add the new network for the controller:

      networks:
        ...
        OcProvisioning:
          subnet: oc_provisioning_subnet
  7. Include the new network_data.yaml and roles_data.yaml files in the deploy command:

    -n /home/stack/network_data.yaml \
    -r /home/stack/roles_data.yaml \

3.3.2. Configuring a custom IPv6 provisioning network

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Create a custom IPv6 provisioning network to provision and deploy the overcloud over IPv6.

Procedure

  1. Configure the shell to access Identity as the administrative user:

    $ source ~/stackrc
  2. Copy the network_data.yaml file:

    $ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml .
  3. Edit the new network_data.yaml file and add a new network for overcloud provisioning:

    # custom network for IPv6 overcloud provisioning
    - name: OcProvisioningIPv6
      vip: true
      name_lower: oc_provisioning_ipv6
      vlan: 10
      ipv6: true
      ipv6_subnet: '$IPV6_SUBNET_ADDRESS/$IPV6_MASK'
      ipv6_allocation_pools: [{'start': '$IPV6_START_ADDRESS', 'end': '$IPV6_END_ADDRESS'}]
      gateway_ipv6: '$IPV6_GW_ADDRESS'
    • Replace $IPV6_SUBNET_ADDRESS with the IPv6 address of your IPv6 subnet.
    • Replace $IPV6_MASK with the IPv6 network mask for your IPv6 subnet.
    • Replace $IPV6_START_ADDRESS and $IPV6_END_ADDRESS with the IPv6 range that you want to use for address allocation.
    • Replace $IPV6_GW_ADDRESS with the IPv6 address of your gateway.
  4. Create a new file network-environment.yaml and define IPv6 settings for the provisioning network:

    $ touch /home/stack/network-environment.yaml
    1. Remap the ironic networks to use the new IPv6 provisioning network:

      ServiceNetMap:
         IronicApiNetwork: oc_provisioning_ipv6
         IronicNetwork: oc_provisioning_ipv6
    2. Set the IronicIpVersion parameter to 6:

      parameter_defaults:
        IronicIpVersion: 6
    3. Set the RabbitIPv6, MysqlIPv6, and RedisIPv6 parameters to True:

      parameter_defaults:
        RabbitIPv6: True
        MysqlIPv6: True
        RedisIPv6: True
    4. Set the ControlPlaneSubnetCidr parameter to the subnet IPv6 mask length for the provisioning network:

      parameter_defaults:
        ControlPlaneSubnetCidr: '64'
    5. Set the ControlPlaneDefaultRoute parameter to the IPv6 address of the gateway router for the provisioning network:

      parameter_defaults:
        ControlPlaneDefaultRoute: <ipv6-address>
  5. Add an interface and necessary parameters to the nic-configs/controller.yaml file:

    network_config:
      - type: vlan
        vlan_id:
          get_param: OcProvisioningIPv6NetworkVlanID
        addresses:
          - ip_netmask:
              get_param: OcProvisioningIPv6IpSubnet
  6. Copy the roles_data.yaml file:

    (undercloud) [stack@host01 ~]$ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml .
  7. Edit the new roles_data.yaml and add the new network for the controller:

      networks:
        ...
        - OcProvisioningIPv6

When you deploy the overcloud, include the new network_data.yaml and roles_data.yaml files in the deployment command with the -n and -r options, and the network-environment.yaml file with the -e option:

$ openstack overcloud deploy --templates \
...
-n /home/stack/network_data.yaml \
-r /home/stack/roles_data.yaml \
-e /home/stack/network-environment.yaml
...

For more information about IPv6 network configuration, see Configuring the network in the IPv6 Networking for the Overcloud guide.

3.4. Example Templates

The following is an example template file. This file might not meet the requirements of your environment. Before using this example, ensure that it does not interfere with any existing configuration in your environment.

~/templates/ironic.yaml

parameter_defaults:

    NovaSchedulerDefaultFilters:
        - RetryFilter
        - AggregateInstanceExtraSpecsFilter
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter

    IronicCleaningDiskErase: metadata

In this example:

  • The AggregateInstanceExtraSpecsFilter allows both virtual and bare metal instances, for a hybrid deployment.
  • Disk cleaning that is done before and between deployments erases only the partition table (metadata).
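
To use the AggregateInstanceExtraSpecsFilter in a hybrid deployment, you typically tag a host aggregate and match it from a flavor extra spec. The following commands are a sketch only; the aggregate name (baremetal-hosts), host name, flavor name, and property key are placeholders for your own environment:

$ openstack aggregate create --property baremetal=true baremetal-hosts
$ openstack aggregate add host baremetal-hosts overcloud-controller-0.localdomain
$ openstack flavor set baremetal --property aggregate_instance_extra_specs:baremetal=true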

3.5. Enabling Ironic Introspection in the Overcloud

To enable Bare Metal introspection, include both of the following files in the deploy command:

For deployments using OVN
  • ironic-overcloud.yaml
  • ironic-inspector.yaml
For deployments using OVS
  • ironic.yaml
  • ironic-inspector.yaml
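
For example, for a deployment that uses OVN, the deploy command might include the files as follows (other environment files elided):

$ openstack overcloud deploy --templates \
...
-e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
...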

You can find these files in the /usr/share/openstack-tripleo-heat-templates/environments/services directory. Use the following example to include configuration details for the ironic inspector that correspond to your environment:

parameter_defaults:
  IronicInspectorSubnets:
    - ip_range: 192.168.101.201,192.168.101.250
  IPAImageURLs: '["http://192.168.24.1:8088/agent.kernel", "http://192.168.24.1:8088/agent.ramdisk"]'
  IronicInspectorInterface: 'br-baremetal'

IronicInspectorSubnets

This parameter can contain multiple IP ranges and works with spine-and-leaf networks.

IPAImageURLs

This parameter contains the URLs of the IPA (ironic-python-agent) kernel and ramdisk images. In most cases, you can use the same images that you use on the undercloud. If you omit this parameter, you must place alternative images on each controller.

IronicInspectorInterface

Use this parameter to specify the bare metal network interface.

Note

If you use a composable Ironic or IronicConductor role, you must include the IronicInspector service in the Ironic role in your roles file.

ServicesDefault:
  - OS::TripleO::Services::IronicInspector
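
For example, in a custom roles file, the service appears in the role's service list. The following excerpt is a sketch; the role name and the other services shown are illustrative, and you keep whatever services your role already defines:

- name: Ironic
  ServicesDefault:
    - OS::TripleO::Services::IronicApi
    - OS::TripleO::Services::IronicConductor
    - OS::TripleO::Services::IronicInspector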

3.6. Deploying the Overcloud

To enable the Bare Metal service, include your ironic environment files with the -e option when deploying or redeploying the overcloud, along with the rest of your overcloud configuration.

For example:

$ openstack overcloud deploy \
  --templates \
  -e ~/templates/node-info.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
  -e ~/templates/ironic.yaml

For more information about deploying the overcloud, see Deployment command options and Including Environment Files in Overcloud Creation in the Director Installation and Usage guide.

For more information about deploying the overcloud over IPv6, see Setting up your environment and Creating the overcloud in the IPv6 Networking for the Overcloud guide.

3.7. Testing the Bare Metal Service

You can use the OpenStack Integration Test Suite to validate your Red Hat OpenStack deployment. For more information, see the OpenStack Integration Test Suite Guide.

Additional Ways to Verify the Bare Metal Service:

  1. Configure the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. Check that the nova-compute service is running on the controller nodes:

    $ openstack compute service list -c Binary -c Host -c Status
  3. If you have changed the default ironic drivers, ensure that the required drivers are enabled:

    $ openstack baremetal driver list
  4. Ensure that the ironic endpoints are listed:

    $ openstack catalog list