Chapter 13. Controlling node placement

By default, director selects nodes for each role at random from the pool of nodes that match the profile tag of the role. However, you can also define specific node placement. This is useful in the following scenarios:

  • Assign specific node IDs, for example, controller-0, controller-1
  • Assign custom host names
  • Assign specific IP addresses
  • Assign specific Virtual IP addresses
Note

Manually setting predictable IP addresses, virtual IP addresses, and ports for a network alleviates the need for allocation pools. However, it is recommended to retain allocation pools for each network to simplify scaling with new nodes. Ensure that any statically defined IP addresses fall outside the allocation pools.
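
For example, with a hypothetical allocation pool for the storage network such as the following, statically defined addresses such as 172.16.1.100 remain safe to assign because they fall outside the pool range:

    parameter_defaults:
      StorageAllocationPools: [{'start': '172.16.1.200', 'end': '172.16.1.250'}]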

13.1. Assigning specific node IDs

You can assign node IDs to specific nodes, for example, controller-0, controller-1, compute-0, and compute-1.

Procedure

  1. Assign the ID as a per-node capability that the Compute scheduler matches on deployment:

    openstack baremetal node set --property capabilities='node:controller-0,boot_option:local' <id>

    This command assigns the capability node:controller-0 to the node. Repeat this pattern for all nodes, using a unique, sequential index that starts from 0 for each role. Ensure that all nodes for a given role (Controller, Compute, or each of the storage roles) are tagged in the same way, or the Compute scheduler cannot match the capabilities correctly.
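
    For example, assuming three Controller nodes registered under the hypothetical names baremetal-0, baremetal-1, and baremetal-2, the tagging commands follow this pattern:

    openstack baremetal node set --property capabilities='node:controller-0,boot_option:local' baremetal-0
    openstack baremetal node set --property capabilities='node:controller-1,boot_option:local' baremetal-1
    openstack baremetal node set --property capabilities='node:controller-2,boot_option:local' baremetal-2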

  2. Create a heat environment file (for example, scheduler_hints_env.yaml) that uses scheduler hints to match the capabilities for each node:

    parameter_defaults:
      ControllerSchedulerHints:
        'capabilities:node': 'controller-%index%'

    Use the following parameters to configure scheduler hints for other role types:

    • ControllerSchedulerHints for Controller nodes.
    • ComputeSchedulerHints for Compute nodes.
    • BlockStorageSchedulerHints for Block Storage nodes.
    • ObjectStorageSchedulerHints for Object Storage nodes.
    • CephStorageSchedulerHints for Ceph Storage nodes.
    • [ROLE]SchedulerHints for custom roles. Replace [ROLE] with the role name.
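
      For example, for a hypothetical custom role named Database, the corresponding parameter is DatabaseSchedulerHints, and the nodes for that role must be tagged with node:database-0, node:database-1, and so on:

      parameter_defaults:
        DatabaseSchedulerHints:
          'capabilities:node': 'database-%index%'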
  3. Include the scheduler_hints_env.yaml environment file in the overcloud deploy command.
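
    For example, assuming the file is stored in the templates directory of the stack user:

    $ openstack overcloud deploy --templates \
      -e ~/templates/scheduler_hints_env.yaml \
      [OTHER OPTIONS]
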
Note

Node placement takes priority over profile matching. To avoid scheduling failures, use the default baremetal flavor for deployment instead of the flavors that are designed for profile matching (compute, control). Set the respective flavor parameters to baremetal in an environment file:

parameter_defaults:
  OvercloudControllerFlavor: baremetal
  OvercloudComputeFlavor: baremetal

13.2. Assigning custom host names

In combination with the node ID configuration in Section 13.1, “Assigning specific node IDs”, director can also assign a specific custom host name to each node. This is useful when you need to define where a system is located (for example, rack2-row12), match an inventory identifier, or in other situations where a custom host name is desirable.

Important

Do not rename a node after it has been deployed. Renaming a node after deployment creates issues with instance management.

Procedure

  • Use the HostnameMap parameter in an environment file, such as the scheduler_hints_env.yaml file from Section 13.1, “Assigning specific node IDs”:

    parameter_defaults:
      ControllerSchedulerHints:
        'capabilities:node': 'controller-%index%'
      ComputeSchedulerHints:
        'capabilities:node': 'compute-%index%'
      HostnameMap:
        overcloud-controller-0: overcloud-controller-prod-123-0
        overcloud-controller-1: overcloud-controller-prod-456-0
        overcloud-controller-2: overcloud-controller-prod-789-0
        overcloud-novacompute-0: overcloud-compute-prod-abc-0

    Define the HostnameMap in the parameter_defaults section. In each mapping, the first value is the original host name that heat defines with the HostnameFormat parameters (for example, overcloud-controller-0), and the second value is the custom host name that you want to assign to that node (for example, overcloud-controller-prod-123-0).

Use this method in combination with the node ID placement to ensure that each node has a custom hostname.

13.3. Assigning predictable IPs

For further control over the resulting environment, director can assign specific IP addresses to overcloud nodes on each network.

Procedure

  1. Create an environment file to define the predictable IP addressing:

    $ touch ~/templates/predictive_ips.yaml
  2. Create a parameter_defaults section in the ~/templates/predictive_ips.yaml file and use the following syntax to define predictable IP addressing for each node on each network:

    parameter_defaults:
      <role_name>IPs:
        <network>:
        - <IP_address>
        <network>:
        - <IP_address>

    Each node role has a unique parameter. Replace <role_name>IPs with the relevant parameter:

    • ControllerIPs for Controller nodes.
    • ComputeIPs for Compute nodes.
    • CephStorageIPs for Ceph Storage nodes.
    • BlockStorageIPs for Block Storage nodes.
    • SwiftStorageIPs for Object Storage nodes.
    • [ROLE]IPs for custom roles. Replace [ROLE] with the role name.

      Each parameter is a map of network names to a list of addresses. Each network type must have at least as many addresses as there will be nodes on that network. Director assigns addresses in order: the first node of each type receives the first address in each list, the second node receives the second address in each list, and so forth.

      For example, use the following syntax if you want to deploy three Ceph Storage nodes in your overcloud with predictable IP addresses:

      parameter_defaults:
        CephStorageIPs:
          storage:
          - 172.16.1.100
          - 172.16.1.101
          - 172.16.1.102
          storage_mgmt:
          - 172.16.3.100
          - 172.16.3.101
          - 172.16.3.102

      The first Ceph Storage node receives two addresses: 172.16.1.100 and 172.16.3.100. The second receives 172.16.1.101 and 172.16.3.101, and the third receives 172.16.1.102 and 172.16.3.102. The same pattern applies to the other node types.

      To configure predictable IP addresses on the control plane, copy the /usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-ctlplane.yaml file to the templates directory of the stack user:

      $ cp /usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-ctlplane.yaml ~/templates/.

      Configure the new ips-from-pool-ctlplane.yaml file with the following parameter example. You can combine the control plane IP address declarations with the IP address declarations for other networks, and use a single file to declare the IP addresses for all networks on all roles. You can also use predictable IP addresses in spine/leaf deployments; ensure that each node has IP addresses from the correct subnet.

      parameter_defaults:
        ControllerIPs:
          ctlplane:
          - 192.168.24.10
          - 192.168.24.11
          - 192.168.24.12
          internal_api:
          - 172.16.1.20
          - 172.16.1.21
          - 172.16.1.22
          external:
          - 10.0.0.40
          - 10.0.0.57
          - 10.0.0.104
        ComputeLeaf1IPs:
          ctlplane:
          - 192.168.25.100
          - 192.168.25.101
          internal_api:
          - 172.16.2.100
          - 172.16.2.101
        ComputeLeaf2IPs:
          ctlplane:
          - 192.168.26.100
          - 192.168.26.101
          internal_api:
          - 172.16.3.100
          - 172.16.3.101

      Ensure that the IP addresses that you choose fall outside the allocation pools for each network that you define in your network environment file. For example, ensure that the internal_api assignments fall outside of the InternalApiAllocationPools range to avoid conflicts with any IPs chosen automatically. Also ensure that the IP assignments do not conflict with the VIP configuration, either for standard predictable VIP placement (see Section 13.4, “Assigning predictable Virtual IPs”) or external load balancing (see Section 21.4, “Configuring external load balancing”).

      Important

      If an overcloud node is deleted, do not remove its entries in the IP lists. The IP list is based on the underlying heat indices, which do not change even if you delete nodes. To indicate a given entry in the list is no longer used, replace the IP value with a value such as DELETED or UNUSED. Entries should never be removed from the IP lists, only changed or added.
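
      For example, if the second Ceph Storage node from the earlier example is deleted, keep its position in each list and replace only its addresses:

      parameter_defaults:
        CephStorageIPs:
          storage:
          - 172.16.1.100
          - DELETED
          - 172.16.1.102
          storage_mgmt:
          - 172.16.3.100
          - DELETED
          - 172.16.3.102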

  3. To apply this configuration during a deployment, include the predictive_ips.yaml environment file with the openstack overcloud deploy command.

    Important

    If you use network isolation, include the predictive_ips.yaml file after the network-isolation.yaml file:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e ~/templates/predictive_ips.yaml \
      [OTHER OPTIONS]

13.4. Assigning predictable Virtual IPs

In addition to defining predictable IP addresses for each node, you can also define predictable Virtual IPs (VIPs) for clustered services.

Procedure

  • Edit the network environment file and add the VIP parameters in the parameter_defaults section:

    parameter_defaults:
      ...
      # Predictable VIPs
      ControlFixedIPs: [{'ip_address':'192.168.201.101'}]
      InternalApiVirtualFixedIPs: [{'ip_address':'172.16.0.9'}]
      PublicVirtualFixedIPs: [{'ip_address':'10.1.1.9'}]
      StorageVirtualFixedIPs: [{'ip_address':'172.18.0.9'}]
      StorageMgmtVirtualFixedIPs: [{'ip_address':'172.19.0.9'}]
      RedisVirtualFixedIPs: [{'ip_address':'172.16.0.8'}]
      OVNDBsVirtualFixedIPs: [{'ip_address':'172.16.0.7'}]

    Select these IPs from outside of their respective allocation pool ranges. For example, select an IP address for InternalApiVirtualFixedIPs that is not within the InternalApiAllocationPools range.
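
    For example, with a hypothetical allocation pool for the Internal API network such as the following, the InternalApiVirtualFixedIPs address 172.16.0.9 is a valid choice because it falls below the start of the pool:

    parameter_defaults:
      InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]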

Note

This step is only for overclouds that use the default internal load balancing configuration. If you want to assign VIPs with an external load balancer, use the procedure in the dedicated External Load Balancing for the Overcloud guide.