Chapter 4. Configuring the overcloud

After you configure the undercloud, you can configure the remaining overcloud leaf networks with a series of configuration files. After you define these networks and deploy the overcloud, the resulting deployment contains multiple sets of networks with routing available between them.

4.1. Creating a network data file

To define the leaf networks, create a network data file that contains a YAML-formatted list of each composable network and its attributes. Use the subnets parameter to define the additional leaf subnets under a base network.

Procedure

  1. Create a new network_data_spine_leaf.yaml file in the home directory of the stack user. Use the default network_data_subnets_routed.yaml file as a basis:

    $ cp /usr/share/openstack-tripleo-heat-templates/network_data_subnets_routed.yaml /home/stack/network_data_spine_leaf.yaml
  2. In the network_data_spine_leaf.yaml file, edit the YAML list to define each base network and respective leaf subnets as a composable network item. Use the following example syntax to define a base leaf and two leaf subnets:

    - name: <base_name>
      name_lower: <lowercase_name>
      vip: <true/false>
      vlan: '<vlan_id>'
      ip_subnet: '<network_address>/<prefix>'
      allocation_pools: [{'start': '<start_address>', 'end': '<end_address>'}]
      gateway_ip: '<router_ip_address>'
      subnets:
        <leaf_subnet_name>:
          vlan: '<vlan_id>'
          ip_subnet: '<network_address>/<prefix>'
          allocation_pools: [{'start': '<start_address>', 'end': '<end_address>'}]
          gateway_ip: '<router_ip_address>'
        <leaf_subnet_name>:
          vlan: '<vlan_id>'
          ip_subnet: '<network_address>/<prefix>'
          allocation_pools: [{'start': '<start_address>', 'end': '<end_address>'}]
          gateway_ip: '<router_ip_address>'

    The following example demonstrates how to define the Internal API network and its leaf networks:

    - name: InternalApi
      name_lower: internal_api
      vip: true
      vlan: 10
      ip_subnet: '172.18.0.0/24'
      allocation_pools: [{'start': '172.18.0.4', 'end': '172.18.0.250'}]
      gateway_ip: '172.18.0.1'
      subnets:
        internal_api_leaf1:
          vlan: 11
          ip_subnet: '172.18.1.0/24'
          allocation_pools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}]
          gateway_ip: '172.18.1.1'
        internal_api_leaf2:
          vlan: 12
          ip_subnet: '172.18.2.0/24'
          allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
          gateway_ip: '172.18.2.1'
Note

You do not define the Control Plane networks in the network data file because the undercloud has already created these networks. However, you must set the parameters manually so that the overcloud can configure the NICs accordingly.

Note

Define vip: true for the networks that contain the Controller-based services. In this example, InternalApi contains these services.

4.2. Creating a roles data file

To define each composable role for each leaf and attach the composable networks to each respective role, complete the following steps.

Procedure

  1. Create a custom roles directory in the home directory of the stack user:

    $ mkdir ~/roles
  2. Copy the default Controller, Compute, and Ceph Storage roles from the director core template collection to the roles directory. Rename the files for Compute and Ceph Storage to suit Leaf 0:

    $ cp /usr/share/openstack-tripleo-heat-templates/roles/Controller.yaml ~/roles/Controller.yaml
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml ~/roles/Compute0.yaml
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/CephStorage.yaml ~/roles/CephStorage0.yaml
  3. Copy the Leaf 0 Compute and Ceph Storage files as a basis for your Leaf 1 and Leaf 2 files:

    $ cp ~/roles/Compute0.yaml ~/roles/Compute1.yaml
    $ cp ~/roles/Compute0.yaml ~/roles/Compute2.yaml
    $ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage1.yaml
    $ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage2.yaml
  4. Edit the name, HostnameFormatDefault, and deprecated_nic_config_name parameters in the Leaf 0, Leaf 1, and Leaf 2 files so that they align with the respective Leaf parameters. For example, the parameters in the Leaf 0 Compute file have the following values:

    - name: ComputeLeaf0
      HostnameFormatDefault: '%stackname%-compute-leaf0-%index%'
      deprecated_nic_config_name: 'computeleaf0.yaml'

    The Leaf 0 Ceph Storage parameters have the following values:

    - name: CephStorageLeaf0
      HostnameFormatDefault: '%stackname%-cephstorage-leaf0-%index%'
      deprecated_nic_config_name: 'ceph-storageleaf0.yaml'
  5. Edit the network parameter in the Leaf 1 and Leaf 2 files so that they align with the respective Leaf network parameters. For example, the parameters in the Leaf 1 Compute file have the following values:

    - name: ComputeLeaf1
      networks:
        InternalApi:
          subnet: internal_api_leaf1
        Tenant:
          subnet: tenant_leaf1
        Storage:
          subnet: storage_leaf1

    The Leaf 1 Ceph Storage parameters have the following values:

    - name: CephStorageLeaf1
      networks:
        Storage:
          subnet: storage_leaf1
        StorageMgmt:
          subnet: storage_mgmt_leaf1
    Note

    This applies only to Leaf 1 and Leaf 2. The network parameter for Leaf 0 retains the base subnet values, which are the lowercase names of each subnet combined with a _subnet suffix. For example, the Internal API for Leaf 0 is internal_api_subnet.

  6. When your role configuration is complete, run the following command to generate the full roles data file:

    $ openstack overcloud roles generate --roles-path ~/roles -o roles_data_spine_leaf.yaml Controller Compute0 Compute1 Compute2 CephStorage0 CephStorage1 CephStorage2

    This creates a full roles_data_spine_leaf.yaml file that includes all of the custom roles for each respective leaf network.

Each role has its own NIC configuration. Before you proceed with the spine-leaf configuration, you must create a base set of NIC templates that suits your current NIC configuration.

4.3. Creating a custom NIC configuration

Each role requires a unique NIC configuration. Complete the following steps to create a copy of the base set of NIC templates and map the new templates to the respective NIC configuration resources.

Procedure

  1. Change to the core heat template directory:

    $ cd /usr/share/openstack-tripleo-heat-templates
  2. Render the Jinja2 templates with the tools/process-templates.py script, your custom network_data file, and custom roles_data file:

    $ tools/process-templates.py \
        -n /home/stack/network_data_spine_leaf.yaml \
        -r /home/stack/roles_data_spine_leaf.yaml \
        -o /home/stack/openstack-tripleo-heat-templates-spine-leaf
  3. Change to the home directory:

    $ cd /home/stack
  4. Copy the content from one of the default NIC templates to use as a basis for your spine-leaf templates. For example, copy the single-nic-vlans NIC template. Create the destination directory if it does not exist:

    $ mkdir -p /home/stack/templates/spine-leaf-nics
    $ cp -r openstack-tripleo-heat-templates-spine-leaf/network/config/single-nic-vlans/* /home/stack/templates/spine-leaf-nics/.
  5. Edit each NIC configuration in /home/stack/templates/spine-leaf-nics/ and change the location of the configuration script to an absolute location. Scroll to the network configuration section, which resembles the following snippet:

    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: ../../scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:

    Change the location of the script to the absolute path:

    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:

    Make this change in each file for each Leaf and save the changes.
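
    To avoid repeating this edit by hand, you can script the change. The following is a minimal sketch, assuming GNU sed; the demonstration directory and file below are hypothetical stand-ins for the templates in /home/stack/templates/spine-leaf-nics/:

```shell
# Demonstration only: create a hypothetical template containing the default
# relative path, as the rendered NIC templates do.
mkdir -p /tmp/spine-leaf-nics-demo
printf 'get_file: ../../scripts/run-os-net-config.sh\n' \
    > /tmp/spine-leaf-nics-demo/computeleaf0.yaml

# Rewrite the relative script path to the absolute path in every template
# in the directory (assumes GNU sed for the -i option).
sed -i 's|\.\./\.\./scripts/run-os-net-config.sh|/usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh|' \
    /tmp/spine-leaf-nics-demo/*.yaml

grep 'get_file' /tmp/spine-leaf-nics-demo/computeleaf0.yaml
```

    To apply the same change to your real templates, point the sed command at /home/stack/templates/spine-leaf-nics/*.yaml instead.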

    Note

    For further NIC changes, see Custom network interface templates in the Advanced Overcloud Customization guide.

  6. Create a file called spine-leaf-nics.yaml and edit the file.
  7. Create a resource_registry section in the file and add a set of ::Net::SoftwareConfig resources that map to the respective NIC templates:

    resource_registry:
      OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/controller.yaml
      OS::TripleO::ComputeLeaf0::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf0.yaml
      OS::TripleO::ComputeLeaf1::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf1.yaml
      OS::TripleO::ComputeLeaf2::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf2.yaml
      OS::TripleO::CephStorageLeaf0::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf0.yaml
      OS::TripleO::CephStorageLeaf1::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf1.yaml
      OS::TripleO::CephStorageLeaf2::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf2.yaml

    These resource mappings override the default resource mappings during deployment.

  8. Save the spine-leaf-nics.yaml file.
  9. Remove the rendered template directory:

    $ rm -rf openstack-tripleo-heat-templates-spine-leaf

    As a result of this procedure, you now have a set of NIC templates and an environment file that maps the required ::Net::SoftwareConfig resources to them.

  10. When you eventually run the openstack overcloud deploy command, ensure that you include the environment files in the following order:

    1. /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml, which enables network isolation. Note that the director renders this file from the network-isolation.j2.yaml Jinja2 template.
    2. /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml, which is the default network environment file, including default NIC resource mappings. Note that the director renders this file from the network-environment.j2.yaml Jinja2 template.
    3. /home/stack/templates/spine-leaf-nics.yaml, which contains your custom NIC resource mappings and overrides the default NIC resource mappings.

      The following command snippet demonstrates the ordering:

      $ openstack overcloud deploy --templates \
          ...
          -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
          -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
          -e /home/stack/templates/spine-leaf-nics.yaml \
          ...
  11. Complete the procedures in the following sections to add details to your network environment file, and define certain aspects of the spine leaf architecture. After you complete this configuration, include this file in the openstack overcloud deploy command.

4.4. Setting control plane parameters

You usually define networking details for isolated spine-leaf networks using a network_data file. The exception is the control plane network, which the undercloud creates. However, the overcloud requires access to the control plane for each leaf. To enable this access, you must define additional parameters in your deployment.

In this example, you map each role to its control plane subnet so that nodes on each leaf receive the correct IP, subnet, and default route information for the control plane network.

Procedure

  1. Create a file called spine-leaf-ctlplane.yaml and edit the file.
  2. Create a parameter_defaults section in the file and add the control plane subnet mapping for each spine-leaf network:

    parameter_defaults:
      ...
      ControllerControlPlaneSubnet: leaf0
      ComputeLeaf0ControlPlaneSubnet: leaf0
      ComputeLeaf1ControlPlaneSubnet: leaf1
      ComputeLeaf2ControlPlaneSubnet: leaf2
      CephStorageLeaf0ControlPlaneSubnet: leaf0
      CephStorageLeaf1ControlPlaneSubnet: leaf1
      CephStorageLeaf2ControlPlaneSubnet: leaf2
  3. Save the spine-leaf-ctlplane.yaml file.

4.5. Setting the subnet for virtual IP addresses

The Controller role typically hosts virtual IP (VIP) addresses for each network. By default, the overcloud takes the VIPs from the base subnet of each network except for the control plane. The control plane uses ctlplane-subnet, which is the default subnet name created during a standard undercloud installation.

In this spine leaf scenario, the default base provisioning network is leaf0 instead of ctlplane-subnet. This means that you must add overriding values to the VipSubnetMap parameter to change the subnet that the control plane VIP uses.

Additionally, if the VIPs for each network do not use the base subnet of one or more networks, you must add additional overrides to the VipSubnetMap parameter to ensure that the director creates VIPs on the subnet associated with the L2 network segment that connects the Controller nodes.

Procedure

  1. Create a file called spine-leaf-vips.yaml and edit the file.
  2. Create a parameter_defaults section in the file and add the VipSubnetMap parameter based on your requirements:

    • If you use leaf0 for the provisioning/control plane network, set the ctlplane VIP remapping to leaf0:

      parameter_defaults:
        VipSubnetMap:
          ctlplane: leaf0
    • If you use a different Leaf for multiple VIPs, set the VIP remapping to suit these requirements. For example, use the following snippet to configure the VipSubnetMap parameter to use leaf1 for all VIPs:

      parameter_defaults:
        VipSubnetMap:
          ctlplane: leaf1
          redis: internal_api_leaf1
          InternalApi: internal_api_leaf1
          Storage: storage_leaf1
          StorageMgmt: storage_mgmt_leaf1
  3. Save the spine-leaf-vips.yaml file.

4.6. Mapping separate networks

By default, Red Hat OpenStack Platform uses Open Virtual Network (OVN), which requires that all Controller and Compute nodes connect to a single L2 network for external network access. As a result, both the Controller and Compute network configurations use a br-ex bridge, which director maps to the datacentre network in the overcloud by default. This mapping is usually either a flat network mapping or a VLAN network mapping. In a spine-leaf architecture, you can change these mappings so that each Leaf routes traffic through the specific bridge or VLAN on that Leaf, which is often the case with edge computing scenarios.

Procedure

  1. Create a file called spine-leaf-separate.yaml and edit the file.
  2. Create a parameter_defaults section in the spine-leaf-separate.yaml file and include the external network mapping for each spine-leaf network:

    • For flat network mappings, list each Leaf in the NeutronFlatNetworks parameter and set the NeutronBridgeMappings parameter for each Leaf:

      parameter_defaults:
        NeutronFlatNetworks: leaf0,leaf1,leaf2
        Controller0Parameters:
          NeutronBridgeMappings: "leaf0:br-ex"
        Compute0Parameters:
          NeutronBridgeMappings: "leaf0:br-ex"
        Compute1Parameters:
          NeutronBridgeMappings: "leaf1:br-ex"
        Compute2Parameters:
          NeutronBridgeMappings: "leaf2:br-ex"
    • For VLAN network mappings, additionally set the NeutronNetworkType and NeutronNetworkVLANRanges parameters to map VLANs for all three Leaf networks:

        NeutronNetworkType: 'geneve,vlan'
        NeutronNetworkVLANRanges: 'leaf0:1:1000,leaf1:1:1000,leaf2:1:1000'
  3. Save the spine-leaf-separate.yaml file.

4.7. Deploying a spine-leaf enabled overcloud

When you have completed your spine-leaf overcloud configuration, complete the following steps to review each file and then run the deployment command:

Procedure

  1. Review the /home/stack/templates/network_data_spine_leaf.yaml file and ensure that it contains each network and subnet for each leaf.

    Note

    There is currently no automatic validation for the network subnet and allocation_pools values. Ensure that you define these values consistently and that there is no conflict with existing networks.
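
    Because director does not validate these values, a small standalone check can help. The following sketch, which hard-codes the example Internal API values from this chapter for illustration, verifies that no two subnets overlap and that each allocation pool falls inside its subnet:

```python
import ipaddress

# Example values from the network data file in this chapter; substitute
# your own subnets and allocation pools.
subnets = {
    'internal_api':       ('172.18.0.0/24', '172.18.0.4', '172.18.0.250'),
    'internal_api_leaf1': ('172.18.1.0/24', '172.18.1.4', '172.18.1.250'),
    'internal_api_leaf2': ('172.18.2.0/24', '172.18.2.4', '172.18.2.250'),
}

nets = {name: ipaddress.ip_network(cidr) for name, (cidr, _, _) in subnets.items()}

# Check that no two subnets overlap.
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not nets[a].overlaps(nets[b]), f"{a} overlaps {b}"

# Check that each allocation pool lies inside its subnet.
for name, (cidr, start, end) in subnets.items():
    assert ipaddress.ip_address(start) in nets[name], f"{name}: start outside subnet"
    assert ipaddress.ip_address(end) in nets[name], f"{name}: end outside subnet"

print("no conflicts found")
```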

  2. Review the /home/stack/templates/roles_data_spine_leaf.yaml values and ensure that you define a role for each leaf.
  3. Review the NIC templates in the ~/templates/spine-leaf-nics/ directory and ensure that you define the interfaces for each role on each leaf correctly.
  4. Review the custom spine-leaf-nics.yaml environment file and ensure that it contains a resource_registry section that references the custom NIC templates for each role.
  5. Review the /home/stack/templates/nodes_data.yaml file and ensure that all roles have an assigned flavor and a node count. Also check that you have correctly tagged all nodes for each leaf.
  6. Run the openstack overcloud deploy command to apply the spine leaf configuration. For example:

    $ openstack overcloud deploy --templates \
        -n /home/stack/templates/network_data_spine_leaf.yaml \
        -r /home/stack/templates/roles_data_spine_leaf.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
        -e /home/stack/templates/spine-leaf-nics.yaml \
        -e /home/stack/templates/spine-leaf-ctlplane.yaml \
        -e /home/stack/templates/spine-leaf-vips.yaml \
        -e /home/stack/templates/spine-leaf-separate.yaml \
        -e /home/stack/templates/nodes_data.yaml \
        -e [OTHER ENVIRONMENT FILES]
    • The network-isolation.yaml file is the rendered version of the Jinja2 file in the same location (network-isolation.j2.yaml). Include this file in the deployment command to ensure that director isolates each network to the correct leaf. This ensures that the networks are created dynamically during the overcloud creation process.
    • Include the network-environment.yaml file after the network-isolation.yaml. The network-environment.yaml file provides the default network configuration for composable network parameters.
    • Include the spine-leaf-nics.yaml file after the network-environment.yaml. The spine-leaf-nics.yaml file overrides the default NIC template mappings from the network-environment.yaml file.
    • If you created any other spine leaf network environment files, include these environment files after the spine-leaf-nics.yaml file.
    • Add any additional environment files. For example, an environment file with your container image locations or Ceph cluster configuration.
  7. Wait until the spine-leaf enabled overcloud deploys.

4.8. Adding a new leaf to a spine-leaf deployment

When increasing network capacity or adding a new physical site, you might need to add a new leaf to your Red Hat OpenStack Platform (RHOSP) spine-leaf network.

Prerequisites

  • Your RHOSP deployment uses a spine-leaf network topology.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the undercloud credentials file:

    $ source ~/stackrc
  3. In the /home/stack/network_data_spine_leaf.yaml file, under the appropriate base network, add a leaf subnet as a composable network item for the new leaf that you are adding.

    Example

    In this example, a subnet entry for the new leaf (leaf3) has been added:

    - name: InternalApi
      name_lower: internal_api
      vip: true
      vlan: 10
      ip_subnet: '172.18.0.0/24'
      allocation_pools: [{'start': '172.18.0.4', 'end': '172.18.0.250'}]
      gateway_ip: '172.18.0.1'
      subnets:
        internal_api_leaf1:
          vlan: 11
          ip_subnet: '172.18.1.0/24'
          allocation_pools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}]
          gateway_ip: '172.18.1.1'
        internal_api_leaf2:
          vlan: 12
          ip_subnet: '172.18.2.0/24'
          allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
          gateway_ip: '172.18.2.1'
        internal_api_leaf3:
          vlan: 13
          ip_subnet: '172.18.3.0/24'
          allocation_pools: [{'start': '172.18.3.4', 'end': '172.18.3.250'}]
          gateway_ip: '172.18.3.1'
  4. Create a roles data file for the new leaf that you are adding.

    1. Copy a leaf Compute and a leaf Ceph Storage file for the new leaf that you are adding.

      Example

      In this example, Compute1.yaml and CephStorage1.yaml are copied to Compute3.yaml and CephStorage3.yaml, respectively, for the new leaf:

      $ cp ~/roles/Compute1.yaml ~/roles/Compute3.yaml
      $ cp ~/roles/CephStorage1.yaml ~/roles/CephStorage3.yaml
    2. Edit the name, HostnameFormatDefault, and deprecated_nic_config_name parameters in the new leaf files so that they align with the respective Leaf parameters.

      Example

      The parameters in the new Leaf 3 Compute file have the following values:

      - name: ComputeLeaf3
        HostnameFormatDefault: '%stackname%-compute-leaf3-%index%'
        deprecated_nic_config_name: 'computeleaf3.yaml'

      Example

      The new Leaf 3 Ceph Storage parameters have the following values:

      - name: CephStorageLeaf3
        HostnameFormatDefault: '%stackname%-cephstorage-leaf3-%index%'
        deprecated_nic_config_name: 'ceph-storageleaf3.yaml'
    3. Edit the network parameter in the new leaf files so that they align with the respective Leaf network parameters.

      Example

      The parameters in the new Leaf 3 Compute file have the following values:

      - name: ComputeLeaf3
        networks:
          InternalApi:
            subnet: internal_api_leaf3
          Tenant:
            subnet: tenant_leaf3
          Storage:
            subnet: storage_leaf3

      Example

      The new Leaf 3 Ceph Storage parameters have the following values:

      - name: CephStorageLeaf3
        networks:
          Storage:
            subnet: storage_leaf3
          StorageMgmt:
            subnet: storage_mgmt_leaf3
    4. When your role configuration is complete, run the following command to generate the full roles data file. Include the roles for all of the existing leaf networks and for the new leaf that you are adding.

      Example

      In this example, leaf3 is added to leaf0, leaf1, and leaf2:

      $ openstack overcloud roles generate --roles-path ~/roles -o roles_data_spine_leaf.yaml Controller Compute0 Compute1 Compute2 Compute3 CephStorage0 CephStorage1 CephStorage2 CephStorage3

      This creates a full roles_data_spine_leaf.yaml file that includes all of the custom roles for each respective leaf network.

  5. Create a custom NIC configuration for the leaf that you are adding.

    1. Copy a leaf Compute and a leaf Ceph Storage NIC configuration file for the new leaf that you are adding.

      Example

      In this example, computeleaf1.yaml and ceph-storageleaf1.yaml are copied to computeleaf3.yaml and ceph-storageleaf3.yaml, respectively, for the new leaf:

      $ cp ~/templates/spine-leaf-nics/computeleaf1.yaml ~/templates/spine-leaf-nics/computeleaf3.yaml
      $ cp ~/templates/spine-leaf-nics/ceph-storageleaf1.yaml ~/templates/spine-leaf-nics/ceph-storageleaf3.yaml
    2. In the /home/stack/templates/spine-leaf-nics.yaml file, under the resource_registry section, add the ::Net::SoftwareConfig resources that map to the new NIC templates:

      Example

      In this example, the new leaf NIC configuration files (computeleaf3.yaml and ceph-storageleaf3.yaml) have been added:

      resource_registry:
        OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/controller.yaml
        OS::TripleO::ComputeLeaf0::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf0.yaml
        OS::TripleO::ComputeLeaf1::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf1.yaml
        OS::TripleO::ComputeLeaf2::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf2.yaml
        OS::TripleO::ComputeLeaf3::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf3.yaml
        OS::TripleO::CephStorageLeaf0::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf0.yaml
        OS::TripleO::CephStorageLeaf1::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf1.yaml
        OS::TripleO::CephStorageLeaf2::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf2.yaml
        OS::TripleO::CephStorageLeaf3::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf3.yaml

      These resource mappings override the default resource mappings during deployment.

      As a result of this procedure, you now have a set of NIC templates and an environment file that maps the required ::Net::SoftwareConfig resources to them. When you eventually run the openstack overcloud deploy command, ensure that you include the environment files in the following order:

    3. /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml, which enables network isolation.

      Note that the director renders this file from the network-isolation.j2.yaml Jinja2 template.

    4. /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml, which is the default network environment file, including default NIC resource mappings.

      Note that the director renders this file from the network-environment.j2.yaml Jinja2 template.

    5. /home/stack/templates/spine-leaf-nics.yaml, which contains your custom NIC resource mappings and overrides the default NIC resource mappings.

      The following command snippet demonstrates the ordering:

      $ openstack overcloud deploy --templates \
          ...
          -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
          -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
          -e /home/stack/templates/spine-leaf-nics.yaml \
          ...
  6. Update the control plane parameters.

    In ~/templates/spine-leaf-ctlplane.yaml, under the parameter_defaults section, add the control plane subnet mapping for the new leaf network:

    Example

    In this example, the new leaf (leaf3) entries are added:

    parameter_defaults:
      ...
      ControllerControlPlaneSubnet: leaf0
      ComputeLeaf0ControlPlaneSubnet: leaf0
      ComputeLeaf1ControlPlaneSubnet: leaf1
      ComputeLeaf2ControlPlaneSubnet: leaf2
      ComputeLeaf3ControlPlaneSubnet: leaf3
      CephStorageLeaf0ControlPlaneSubnet: leaf0
      CephStorageLeaf1ControlPlaneSubnet: leaf1
      CephStorageLeaf2ControlPlaneSubnet: leaf2
      CephStorageLeaf3ControlPlaneSubnet: leaf3
  7. Map the new leaf network.

    In ~/templates/spine-leaf-separate.yaml, under the parameter_defaults section, include the external network mapping for the new leaf network.

    • For flat network mappings, list the new leaf (leaf3) in the NeutronFlatNetworks parameter and set the NeutronBridgeMappings parameter for the new leaf:

      parameter_defaults:
        NeutronFlatNetworks: leaf0,leaf1,leaf2,leaf3
        Controller0Parameters:
          NeutronBridgeMappings: "leaf0:br-ex"
        Compute0Parameters:
          NeutronBridgeMappings: "leaf0:br-ex"
        Compute1Parameters:
          NeutronBridgeMappings: "leaf1:br-ex"
        Compute2Parameters:
          NeutronBridgeMappings: "leaf2:br-ex"
        Compute3Parameters:
          NeutronBridgeMappings: "leaf3:br-ex"
    • For VLAN network mappings, additionally set the NeutronNetworkType and NeutronNetworkVLANRanges parameters to include the new leaf (leaf3) network:

        NeutronNetworkType: 'geneve,vlan'
        NeutronNetworkVLANRanges: 'leaf0:1:1000,leaf1:1:1000,leaf2:1:1000,leaf3:1:1000'
  8. Redeploy your spine-leaf enabled overcloud by following the steps in Section 4.7, “Deploying a spine-leaf enabled overcloud”.