Chapter 3. Configuring the overcloud

Now that you have configured the undercloud, you can configure the remaining overcloud leaf networks. You accomplish this with a series of configuration files. Afterwards, you deploy the overcloud and the resulting deployment has multiple sets of networks with routing available.

3.1. Creating a network data file

To define the leaf networks, you create a network data file, which contains a YAML-formatted list of each composable network and its attributes. The default network data file is located on the undercloud at /usr/share/openstack-tripleo-heat-templates/network_data.yaml.

Procedure

  1. Create a new network_data_spine_leaf.yaml file in your stack user’s local directory.
  2. In the network_data_spine_leaf.yaml file, create a YAML list that defines each network and leaf network as a composable network item. For example, the Internal API network and its leaf networks are defined with the following syntax:

    # Internal API
    - name: InternalApi0
      name_lower: internal_api0
      vip: true
      ip_subnet: '172.18.0.0/24'
      allocation_pools: [{'start': '172.18.0.4', 'end': '172.18.0.250'}]
    - name: InternalApi1
      name_lower: internal_api1
      vip: true
      ip_subnet: '172.18.1.0/24'
      allocation_pools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}]
    - name: InternalApi2
      name_lower: internal_api2
      vip: true
      ip_subnet: '172.18.2.0/24'
      allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
Note

You do not define the Control Plane networks in the network data file since the undercloud has already created these networks. However, you need to manually set the parameters so that the overcloud can configure its NICs accordingly.

See Appendix A, Example network_data file for a full example with all composable networks.
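Director does not validate the subnet and allocation pool values in the network data file, so a quick sanity check with Python's standard ipaddress module can catch pools that fall outside their subnet before deployment. The following is a sketch that operates on the already-parsed YAML list; the check_network_data helper is illustrative, not part of director:

```python
import ipaddress

def check_network_data(networks):
    """Return (network name, address) pairs for any allocation pool
    boundary that falls outside that network's ip_subnet."""
    errors = []
    for net in networks:
        subnet = ipaddress.ip_network(net['ip_subnet'])
        for pool in net.get('allocation_pools', []):
            for key in ('start', 'end'):
                if ipaddress.ip_address(pool[key]) not in subnet:
                    errors.append((net['name'], pool[key]))
    return errors

# Entries mirroring the Internal API leaf networks above; the second
# entry has a deliberately wrong 'end' address to show the output.
networks = [
    {'name': 'InternalApi0', 'ip_subnet': '172.18.0.0/24',
     'allocation_pools': [{'start': '172.18.0.4', 'end': '172.18.0.250'}]},
    {'name': 'InternalApi1', 'ip_subnet': '172.18.1.0/24',
     'allocation_pools': [{'start': '172.18.1.4', 'end': '172.18.0.250'}]},
]
print(check_network_data(networks))  # [('InternalApi1', '172.18.0.250')]
```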

Each role requires its own NIC configuration. Before you configure the spine-leaf networking, create a base set of NIC templates to suit your current NIC configuration.

3.2. Creating a custom NIC Configuration

Each role requires its own NIC configuration. Create a copy of the base set of NIC templates and modify them to suit your current NIC configuration.

Procedure

  1. Create a new directory to store your NIC templates. For example:

    $ mkdir ~/templates/spine-leaf-nics/
    $ cd ~/templates/spine-leaf-nics/
  2. Create a base template called base.yaml. Use the boilerplate content from Appendix B, Custom NIC template. This template serves as a basis for the copies that you create for individual roles.


3.3. Creating a custom Controller NIC configuration

This procedure creates a YAML structure for Controller nodes on Leaf0 only.

Procedure

  1. Change to your custom NIC directory:

    $ cd ~/templates/spine-leaf-nics/
  2. Copy the base template (base.yaml) for Leaf0. For example:

    $ cp base.yaml controller0.yaml
  3. Edit the template for controller0.yaml and scroll to the network configuration section, which looks like the following:

    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:
  4. In the network_config section, define the control plane / provisioning interface. For example:

                  network_config:
                  - type: interface
                    name: nic1
                    use_dhcp: false
                    dns_servers:
                      get_param: DnsServers
                    addresses:
                    - ip_netmask:
                        list_join:
                        - /
                        - - get_param: ControlPlaneIp
                          - get_param: ControlPlane0SubnetCidr
                    routes:
                    - ip_netmask: 169.254.169.254/32
                      next_hop:
                        get_param: Leaf0EC2MetadataIp
                    - ip_netmask: 192.168.10.0/24
                      next_hop:
                        get_param: ControlPlane0DefaultRoute

    Note that the parameters used in this case are specific to Leaf0: ControlPlane0SubnetCidr, Leaf0EC2MetadataIp, and ControlPlane0DefaultRoute. Also note the use of the CIDR for Leaf0 on the provisioning network (192.168.10.0/24), which is used as a route.

  5. Define a new interface for the external bridge:

                  - type: ovs_bridge
                    name: br-ex
                    use_dhcp: false
                    addresses:
                    - ip_netmask:
                        get_param: ExternalIpSubnet
                    routes:
                    - default: true
                      next_hop:
                        get_param: ExternalInterfaceDefaultRoute
                    members:
                    - type: interface
                      name: nic2
                      primary: true

    The members section also contains the VLAN configuration for the isolated networks.

  6. Add each VLAN to the members section. For example, to add the Storage network:

                    members:
                    - type: interface
                      name: nic2
                      primary: true
                    - type: vlan
                      vlan_id:
                        get_param: Storage0NetworkVlanID
                      addresses:
                      - ip_netmask:
                          get_param: Storage0IpSubnet
                      routes:
                      - ip_netmask:
                          get_param: StorageSupernet
                        next_hop:
                          get_param: Storage0InterfaceDefaultRoute

    Note that each interface structure uses parameters specific to Leaf0 (Storage0NetworkVlanID, Storage0IpSubnet, Storage0InterfaceDefaultRoute) as well as the supernet route (StorageSupernet).

    Add a VLAN structure for the following Controller networks: Storage, StorageMgmt, InternalApi, and Tenant.

  7. Save the file.
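The VLAN entries that you add in step 6 follow the same pattern for each network. For example, an Internal API entry for Leaf0 looks like the following sketch; the InternalApi0 parameter names follow the naming convention used throughout this chapter and are assumed to be defined for your environment:

```yaml
- type: vlan
  vlan_id:
    get_param: InternalApi0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApi0IpSubnet
  routes:
  - ip_netmask:
      get_param: InternalApiSupernet
    next_hop:
      get_param: InternalApi0InterfaceDefaultRoute
```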

3.4. Creating custom Compute NIC configurations

This procedure creates a YAML structure for Compute nodes on Leaf0, Leaf1, and Leaf2.

Procedure

  1. Change to your custom NIC directory:

    $ cd ~/templates/spine-leaf-nics/
  2. Copy the base template (base.yaml) for Leaf0. For example:

    $ cp base.yaml compute0.yaml
  3. Edit the template for compute0.yaml and scroll to the network configuration section, which looks like the following:

    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:
  4. In the network_config section, define the control plane / provisioning interface. For example:

                  network_config:
                  - type: interface
                    name: nic1
                    use_dhcp: false
                    dns_servers:
                      get_param: DnsServers
                    addresses:
                    - ip_netmask:
                        list_join:
                        - /
                        - - get_param: ControlPlaneIp
                          - get_param: ControlPlane0SubnetCidr
                    routes:
                    - ip_netmask: 169.254.169.254/32
                      next_hop:
                        get_param: Leaf0EC2MetadataIp
                    - ip_netmask: 192.168.10.0/24
                      next_hop:
                        get_param: ControlPlane0DefaultRoute

    Note that the parameters used in this case are specific to Leaf0: ControlPlane0SubnetCidr, Leaf0EC2MetadataIp, and ControlPlane0DefaultRoute. Also note the use of the CIDR for Leaf0 on the provisioning network (192.168.10.0/24), which is used as a route.

  5. Define a new interface for the external bridge:

                  - type: ovs_bridge
                    name: br-ex
                    use_dhcp: false
                    members:
                    - type: interface
                      name: nic2
                      primary: true

    The members section also contains the VLAN configuration for the isolated networks.

  6. Add each VLAN to the members section. For example, to add the Storage network:

                    members:
                    - type: interface
                      name: nic2
                      primary: true
                    - type: vlan
                      vlan_id:
                        get_param: Storage0NetworkVlanID
                      addresses:
                      - ip_netmask:
                          get_param: Storage0IpSubnet
                      routes:
                      - ip_netmask:
                          get_param: StorageSupernet
                        next_hop:
                          get_param: Storage0InterfaceDefaultRoute

    Note that each interface structure uses parameters specific to Leaf0 (Storage0NetworkVlanID, Storage0IpSubnet, Storage0InterfaceDefaultRoute) as well as the supernet route (StorageSupernet).

    Add a VLAN structure for the following Compute networks: Storage, InternalApi, and Tenant.

  7. Save this file.
  8. Copy this file for use with Leaf1 and Leaf2.

    $ cp compute0.yaml compute1.yaml
    $ cp compute0.yaml compute2.yaml
  9. Edit compute1.yaml and scroll to the network_config section. Replace the Leaf0 parameters with the Leaf1 parameters. This includes parameters for the following networks: Control Plane, Storage, InternalApi, and Tenant. Save this file when complete.
  10. Edit compute2.yaml and scroll to the network_config section. Replace the Leaf0 parameters with the Leaf2 parameters. This includes parameters for the following networks: Control Plane, Storage, InternalApi, and Tenant. Save this file when complete.
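The Leaf0-to-Leaf1 and Leaf0-to-Leaf2 edits in steps 9 and 10 amount to rewriting a leaf index that is embedded in the parameter names. If your templates use the naming shown above, a small sed helper can perform the substitution; this is a sketch, and you should extend the pattern list to cover every network that your role uses:

```shell
# Rewrite the Leaf0-specific parameter names in a copied NIC template
# for another leaf. Assumes the leaf index appears only in the
# parameter prefixes listed below.
retarget_leaf() {
    file="$1"
    leaf="$2"
    sed -i \
        -e "s/ControlPlane0/ControlPlane${leaf}/g" \
        -e "s/Leaf0/Leaf${leaf}/g" \
        -e "s/Storage0/Storage${leaf}/g" \
        -e "s/InternalApi0/InternalApi${leaf}/g" \
        -e "s/Tenant0/Tenant${leaf}/g" \
        "$file"
}
```

For example, `retarget_leaf compute1.yaml 1` converts a fresh copy of compute0.yaml into the Leaf1 template. Review the result by hand: sed cannot catch a parameter that does not follow the naming convention, and the literal provisioning CIDR in the routes section (192.168.10.0/24) must also be updated for the target leaf.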

3.5. Creating custom Ceph Storage NIC configurations

This procedure creates a YAML structure for Ceph Storage nodes on Leaf0, Leaf1, and Leaf2.

Procedure

  1. Change to your custom NIC directory:

    $ cd ~/templates/spine-leaf-nics/
  2. Copy the base template (base.yaml) for Leaf0. For example:

    $ cp base.yaml ceph-storage0.yaml
  3. Edit the template for ceph-storage0.yaml and scroll to the network configuration section, which looks like the following:

    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:
  4. In the network_config section, define the control plane / provisioning interface. For example:

                  network_config:
                  - type: interface
                    name: nic1
                    use_dhcp: false
                    dns_servers:
                      get_param: DnsServers
                    addresses:
                    - ip_netmask:
                        list_join:
                        - /
                        - - get_param: ControlPlaneIp
                          - get_param: ControlPlane0SubnetCidr
                    routes:
                    - ip_netmask: 169.254.169.254/32
                      next_hop:
                        get_param: Leaf0EC2MetadataIp
                    - ip_netmask: 192.168.10.0/24
                      next_hop:
                        get_param: ControlPlane0DefaultRoute

    Note that the parameters used in this case are specific to Leaf0: ControlPlane0SubnetCidr, Leaf0EC2MetadataIp, and ControlPlane0DefaultRoute. Also note the use of the CIDR for Leaf0 on the provisioning network (192.168.10.0/24), which is used as a route.

  5. Define a new interface for the external bridge:

                  - type: ovs_bridge
                    name: br-ex
                    use_dhcp: false
                    members:
                    - type: interface
                      name: nic2
                      primary: true

    The members section also contains the VLAN configuration for the isolated networks.

  6. Add each VLAN to the members section. For example, to add the Storage network:

                    members:
                    - type: interface
                      name: nic2
                      primary: true
                    - type: vlan
                      vlan_id:
                        get_param: Storage0NetworkVlanID
                      addresses:
                      - ip_netmask:
                          get_param: Storage0IpSubnet
                      routes:
                      - ip_netmask:
                          get_param: StorageSupernet
                        next_hop:
                          get_param: Storage0InterfaceDefaultRoute

    Note that each interface structure uses parameters specific to Leaf0 (Storage0NetworkVlanID, Storage0IpSubnet, Storage0InterfaceDefaultRoute) as well as the supernet route (StorageSupernet).

    Add a VLAN structure for the following Ceph Storage networks: Storage and StorageMgmt.

  7. Save this file.
  8. Copy this file for use with Leaf1 and Leaf2.

    $ cp ceph-storage0.yaml ceph-storage1.yaml
    $ cp ceph-storage0.yaml ceph-storage2.yaml
  9. Edit ceph-storage1.yaml and scroll to the network_config section. Replace the Leaf0 parameters with the Leaf1 parameters. This includes parameters for the following networks: Control Plane, Storage, and StorageMgmt. Save this file when complete.
  10. Edit ceph-storage2.yaml and scroll to the network_config section. Replace the Leaf0 parameters with the Leaf2 parameters. This includes parameters for the following networks: Control Plane, Storage, and StorageMgmt. Save this file when complete.

3.6. Creating a network environment file

This procedure creates a basic network environment file for use later.

Procedure

  1. Create a network-environment.yaml file in your stack user’s templates directory.
  2. Add the following sections to the environment file:

    resource_registry:
    
    parameter_defaults:

    Note the following:

    • The resource_registry section maps networking resources to their respective NIC templates.
    • The parameter_defaults section defines additional networking parameters relevant to your configuration.

The next few sections add details to your network environment file to configure certain aspects of the spine-leaf architecture. After you complete the configuration, you include this file with your openstack overcloud deploy command.

3.7. Mapping network resources to NIC templates

This procedure maps the relevant resources for network configurations to their respective NIC templates.

Procedure

  1. Edit your network-environment.yaml file.
  2. Add the resource mappings to your resource_registry. The resource names take the following format:

    OS::TripleO::[ROLE]::Net::SoftwareConfig: [NIC TEMPLATE]

    For this guide’s scenario, the resource_registry includes the following resource mappings:

    resource_registry:
      OS::TripleO::Controller0::Net::SoftwareConfig: ./spine-leaf-nics/controller0.yaml
      OS::TripleO::Compute0::Net::SoftwareConfig: ./spine-leaf-nics/compute0.yaml
      OS::TripleO::Compute1::Net::SoftwareConfig: ./spine-leaf-nics/compute1.yaml
      OS::TripleO::Compute2::Net::SoftwareConfig: ./spine-leaf-nics/compute2.yaml
      OS::TripleO::CephStorage0::Net::SoftwareConfig: ./spine-leaf-nics/ceph-storage0.yaml
      OS::TripleO::CephStorage1::Net::SoftwareConfig: ./spine-leaf-nics/ceph-storage1.yaml
      OS::TripleO::CephStorage2::Net::SoftwareConfig: ./spine-leaf-nics/ceph-storage2.yaml
  3. Save the network-environment.yaml file.
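A misspelled path in the resource_registry fails only at deployment time, so it can help to confirm up front that every mapped NIC template exists. The check_nic_templates helper below is a sketch; it assumes the relative paths in the environment file resolve from the directory where you run the deployment command:

```shell
# Print a warning for every NIC template referenced in the
# resource_registry of an environment file that is missing on disk.
check_nic_templates() {
    env_file="$1"
    awk '/Net::SoftwareConfig:/ {print $NF}' "$env_file" |
    while read -r tmpl; do
        [ -f "$tmpl" ] || echo "missing template: $tmpl"
    done
}
```

For example, run `check_nic_templates network-environment.yaml` from your templates directory; no output means every mapping resolves.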

3.8. Spine-leaf routing

Each role requires routes on each isolated network, pointing to the other subnets used for the same function. For example, when a Compute1 node contacts a Controller node on the InternalApi VIP, the traffic targets the InternalApi1 interface through the InternalApi1 gateway. Likewise, the return traffic from the Controller node to the InternalApi1 network goes through the InternalApi network gateway.

The supernet routes apply to all isolated networks on each role to avoid sending traffic through the default gateway, which by default is the Control Plane network on non-controllers, and the External network on the controllers.

You need to configure these routes on the isolated networks because Red Hat Enterprise Linux by default implements strict reverse path filtering on inbound traffic. If an API is listening on the Internal API interface and a request comes in to that API, it only accepts the request if the return path route is on the Internal API interface. If the server is listening on the Internal API network but the return path to the client is through the Control Plane, then the server drops the request due to the reverse path filter.

The following diagram shows an attempt to route traffic through the control plane, which does not succeed. The return route from the router to the Controller node does not match the interface where the VIP is listening, so the packet is dropped. 192.168.24.0/24 is directly connected to the Controller node, so it is considered local to the Control Plane network.

Figure 3.1. Routed traffic through Control Plane


For comparison, the following diagram shows traffic routed through the Internal API networks:

Figure 3.2. Routed traffic through Internal API


3.9. Assigning routes for composable networks

This procedure defines the routing for the leaf networks.

Procedure

  1. Edit your network-environment.yaml file.
  2. Add the supernet route parameters to the parameter_defaults section. Each isolated network should have a supernet route applied. For example:

    parameter_defaults:
      StorageSupernet: 172.16.0.0/16
      StorageMgmtSupernet: 172.17.0.0/16
      InternalApiSupernet: 172.18.0.0/16
      TenantSupernet: 172.19.0.0/16
    Note

    The network interface templates should contain the supernet parameters for each network. For example:

    - type: vlan
      vlan_id:
        get_param: Storage0NetworkVlanID
      addresses:
      - ip_netmask:
          get_param: Storage0IpSubnet
      routes:
      - ip_netmask:
          get_param: StorageSupernet
        next_hop:
          get_param: Storage0InterfaceDefaultRoute
  3. Add the following ExtraConfig settings to the parameter_defaults section to address routing for specific components on Compute and Ceph Storage nodes.

    parameter_defaults:
      ...
      Compute0ExtraConfig:
        nova::vncproxy::host: "%{hiera('internal_api0')}"
        neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant0')}"
      Compute1ExtraConfig:
        nova::vncproxy::host: "%{hiera('internal_api1')}"
        neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant1')}"
      Compute2ExtraConfig:
        nova::vncproxy::host: "%{hiera('internal_api2')}"
        neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant2')}"
      CephAnsibleExtraConfig:
        public_network: '172.16.0.0/24,172.16.1.0/24,172.16.2.0/24'
        cluster_network: '172.17.0.0/24,172.17.1.0/24,172.17.2.0/24'
    • For the Compute ExtraConfig parameters:

      • The nova::vncproxy::host setting defines the IP address to use for the VNC proxy.
      • The neutron::agents::ml2::ovs::local_ip setting defines the IP address to use for the ML2 agent.
    • For CephAnsibleExtraConfig:

      • The public_network setting lists all the storage networks (one per leaf).
      • The cluster_network setting lists the storage management networks (one per leaf).
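Because each supernet route must cover every per-leaf subnet, you can verify the containment with Python's standard ipaddress module. The following sketch uses the Storage, StorageMgmt, and InternalApi subnets from this chapter; the Tenant subnets are assumed to follow the same per-leaf pattern:

```python
import ipaddress

# Supernets from parameter_defaults, mapped to the per-leaf subnets
# that they must contain.
supernets = {
    '172.16.0.0/16': ['172.16.0.0/24', '172.16.1.0/24', '172.16.2.0/24'],  # Storage
    '172.17.0.0/16': ['172.17.0.0/24', '172.17.1.0/24', '172.17.2.0/24'],  # StorageMgmt
    '172.18.0.0/16': ['172.18.0.0/24', '172.18.1.0/24', '172.18.2.0/24'],  # InternalApi
    '172.19.0.0/16': ['172.19.0.0/24', '172.19.1.0/24', '172.19.2.0/24'],  # Tenant (assumed)
}

for supernet, subnets in supernets.items():
    net = ipaddress.ip_network(supernet)
    for subnet in subnets:
        assert ipaddress.ip_network(subnet).subnet_of(net), (supernet, subnet)
print('all leaf subnets fall inside their supernets')
```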

3.10. Setting control plane parameters

You usually define networking details for isolated spine-leaf networks using a network data file. The exception is the control plane network, which the undercloud creates. However, the overcloud requires access to the control plane for each leaf, which requires some additional parameters that you define in your network-environment.yaml file. For example, the following snippet is from an example NIC template for the Controller role on Leaf0:

- type: interface
  name: nic1
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlane0SubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: Leaf0EC2MetadataIp
  - ip_netmask: 192.168.10.0/24
    next_hop:
      get_param: ControlPlane0DefaultRoute

In this instance, you define the IP, subnet, metadata IP, and default route for the respective Control Plane network on Leaf 0.

Procedure

  1. Edit your network-environment.yaml file.
  2. In the parameter_defaults section:

    1. Add the mapping to the main control plane subnet:

      parameter_defaults:
        ...
        ControlPlaneSubnet: leaf0
    2. Add the control plane subnet mapping for each spine-leaf network:

      parameter_defaults:
        ...
        Controller0ControlPlaneSubnet: leaf0
        Compute0ControlPlaneSubnet: leaf0
        Compute1ControlPlaneSubnet: leaf1
        Compute2ControlPlaneSubnet: leaf2
        CephStorage0ControlPlaneSubnet: leaf0
        CephStorage1ControlPlaneSubnet: leaf1
        CephStorage2ControlPlaneSubnet: leaf2
    3. Add the control plane routes for each leaf:

      parameter_defaults:
        ...
        ControlPlane0DefaultRoute: 192.168.10.1
        ControlPlane0SubnetCidr: '24'
        ControlPlane1DefaultRoute: 192.168.11.1
        ControlPlane1SubnetCidr: '24'
        ControlPlane2DefaultRoute: 192.168.12.1
        ControlPlane2SubnetCidr: '24'

      The default route parameters are typically the IP address set for the gateway of each provisioning subnet. Refer to your undercloud.conf file for this information.

    4. Add the parameters for the EC2 metadata IPs:

      parameter_defaults:
        ...
        Leaf0EC2MetadataIp: 192.168.10.1
        Leaf1EC2MetadataIp: 192.168.11.1
        Leaf2EC2MetadataIp: 192.168.12.1

      These parameters act as routes through the control plane for the EC2 metadata service (169.254.169.254/32). You should typically set them to the respective gateway for each leaf on the provisioning network.

  3. Save the network-environment.yaml file.

3.11. Creating a roles data file

This section demonstrates how to define each composable role for each leaf and attach the composable networks to each respective role.

Procedure

  1. Create a custom roles directory in your stack user’s local directory:

    $ mkdir ~/roles
  2. Copy the default Controller, Compute, and Ceph Storage roles from the director’s core template collection to the roles directory. Rename the files for Leaf 0:

    $ cp /usr/share/openstack-tripleo-heat-templates/roles/Controller.yaml ~/roles/Controller0.yaml
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml ~/roles/Compute0.yaml
    $ cp /usr/share/openstack-tripleo-heat-templates/roles/CephStorage.yaml ~/roles/CephStorage0.yaml
  3. Edit the Controller0.yaml file:

    $ vi ~/roles/Controller0.yaml
  4. Edit the name, networks, and HostnameFormatDefault parameters in this file so that they align with the Leaf 0 specific parameters. For example:

    - name: Controller0
      ...
      networks:
        - External
        - InternalApi0
        - Storage0
        - StorageMgmt0
        - Tenant0
      ...
      HostnameFormatDefault: '%stackname%-controller0-%index%'

    Save this file.

  5. Edit the Compute0.yaml file:

    $ vi ~/roles/Compute0.yaml
  6. Edit the name, networks, and HostnameFormatDefault parameters in this file so that they align with the Leaf 0 specific parameters. For example:

    - name: Compute0
      ...
      networks:
        - InternalApi0
        - Tenant0
        - Storage0
      HostnameFormatDefault: '%stackname%-compute0-%index%'

    Save this file.

  7. Edit the CephStorage0.yaml file:

    $ vi ~/roles/CephStorage0.yaml
  8. Edit the name and networks parameters in this file so that they align with the Leaf 0 specific parameters. In addition, add the HostnameFormatDefault parameter and define the Leaf 0 hostname for the Ceph Storage nodes. For example:

    - name: CephStorage0
      ...
      networks:
        - Storage0
        - StorageMgmt0
      HostnameFormatDefault: '%stackname%-cephstorage0-%index%'

    Save this file.

  9. Copy the Leaf 0 Compute and Ceph Storage files as a basis for your Leaf 1 and Leaf 2 files:

    $ cp ~/roles/Compute0.yaml ~/roles/Compute1.yaml
    $ cp ~/roles/Compute0.yaml ~/roles/Compute2.yaml
    $ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage1.yaml
    $ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage2.yaml
  10. Edit the name, networks, and HostnameFormatDefault parameters in the Leaf 1 and Leaf 2 files so that they align with the respective Leaf network parameters. For example, the parameters in the Leaf 1 Compute file have the following values:

    - name: Compute1
      ...
      networks:
        - InternalApi1
        - Tenant1
        - Storage1
      HostnameFormatDefault: '%stackname%-compute1-%index%'

    The Leaf 1 Ceph Storage parameters have the following values:

    - name: CephStorage1
      ...
      networks:
        - Storage1
        - StorageMgmt1
      HostnameFormatDefault: '%stackname%-cephstorage1-%index%'
  11. When your roles are ready, generate the full roles data file using the following command:

    $ openstack overcloud roles generate --roles-path ~/roles -o roles_data_spine_leaf.yaml Controller0 Compute0 Compute1 Compute2 CephStorage0 CephStorage1 CephStorage2

    This creates a full roles_data_spine_leaf.yaml file that includes all the custom roles for each respective leaf network.
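To confirm that the generated file contains one role per leaf, you can list the role names. The list_roles function below is an illustrative helper, not a director command:

```shell
# Print the name of every role defined in a roles data file.
list_roles() {
    awk '/^- name:/ {print $3}' "$1"
}
```

Running `list_roles roles_data_spine_leaf.yaml` for this guide’s scenario should print Controller0, Compute0 through Compute2, and CephStorage0 through CephStorage2.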

See Appendix C, Example roles_data file for a full example of this file.

3.12. Deploying a spine-leaf enabled overcloud

All files are now ready for deployment. This section reviews each file and provides the deployment command:

Procedure

  1. Review the /home/stack/templates/network_data_spine_leaf.yaml file and ensure it contains each network for each leaf.

    Note

    There is currently no validation performed for the network subnet and allocation_pools values. Be certain that you define these consistently and that they do not conflict with existing networks.

  2. Review the NIC templates contained in ~/templates/spine-leaf-nics/ and ensure the interfaces for each role on each leaf are correctly defined.
  3. Review the network-environment.yaml environment file and ensure it contains all custom parameters that fall outside control of the network data file. This includes routes, control plane parameters, and a resource_registry section that references the custom NIC templates for each role.
  4. Review the /home/stack/templates/roles_data_spine_leaf.yaml values and ensure you have defined a role for each leaf.
  5. Check the /home/stack/templates/nodes_data.yaml file and ensure that all roles have an assigned flavor and a node count. Also check that all nodes for each leaf are correctly tagged.
  6. Run the openstack overcloud deploy command to apply the spine-leaf configuration. For example:

    $ openstack overcloud deploy --templates \
    -n /home/stack/templates/network_data_spine_leaf.yaml \
    -r /home/stack/templates/roles_data_spine_leaf.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/templates/network-environment.yaml \
    -e /home/stack/templates/nodes_data.yaml \
    -e [OTHER ENVIRONMENT FILES]

    Add any additional environment files. For example, an environment file with your container image locations or Ceph cluster configuration.

  7. Wait until the spine-leaf enabled overcloud deployment completes.