Chapter 3. Configuring the overcloud
Now that you have configured the undercloud, you can configure the remaining overcloud leaf networks. You accomplish this with a series of configuration files. Afterwards, you deploy the overcloud and the resulting deployment has multiple sets of networks with routing available.
3.1. Creating a network data file
To define the leaf networks, you create a network data file, which contains a YAML-formatted list of each composable network and its attributes. The default network data is located on the undercloud at `/usr/share/openstack-tripleo-heat-templates/network_data.yaml`.
Procedure
- Create a new `network_data_spine_leaf.yaml` file in your `stack` user's local directory.
- In the `network_data_spine_leaf.yaml` file, create a YAML list that defines each network and leaf network as a composable network item. For example, the Internal API network and its leaf networks use the following syntax:

```yaml
# Internal API
- name: InternalApi0
  name_lower: internal_api0
  vip: true
  ip_subnet: '172.18.0.0/24'
  allocation_pools: [{'start': '172.18.0.4', 'end': '172.18.0.250'}]
- name: InternalApi1
  name_lower: internal_api1
  vip: true
  ip_subnet: '172.18.1.0/24'
  allocation_pools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}]
- name: InternalApi2
  name_lower: internal_api2
  vip: true
  ip_subnet: '172.18.2.0/24'
  allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
```
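The remaining composable networks in this scenario follow the same pattern. For example, a sketch of the Storage network and its leaf networks, assuming the per-leaf 172.16.x.0/24 subnets that this guide uses later for the Ceph public network (the allocation pool ranges are illustrative, mirroring the Internal API example):

```yaml
# Storage
- name: Storage0
  name_lower: storage0
  vip: true
  ip_subnet: '172.16.0.0/24'
  allocation_pools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
- name: Storage1
  name_lower: storage1
  vip: true
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
- name: Storage2
  name_lower: storage2
  vip: true
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
```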
You do not define the Control Plane networks in the network data file since the undercloud has already created these networks. However, you need to manually set the parameters so that the overcloud can configure its NICs accordingly.
See Appendix A, Example network_data file for a full example with all composable networks.
Each role has its own NIC configuration. Before configuring the spine-leaf architecture, you need to create a base set of NIC templates that suit your current NIC configuration.
3.2. Creating a custom NIC configuration
Each role requires its own NIC configuration. Create a copy of the base set of NIC templates and modify them to suit your current NIC configuration.
Procedure
- Create a new directory to store your NIC templates. For example:

```
$ mkdir ~/templates/spine-leaf-nics/
$ cd ~/templates/spine-leaf-nics/
```
- Create a base template called `base.yaml`. Use the boilerplate content from Appendix B, Custom NIC template. You use this template as the basis for the NIC templates for each individual role.
Resources
- See "Creating Custom Interface Templates" in the Advanced Overcloud Customization guide for more information on customizing your NIC templates.
3.3. Creating a custom Controller NIC configuration
This procedure creates a YAML structure for Controller nodes on Leaf0 only.
Procedure
- Change to your custom NIC directory:

```
$ cd ~/templates/spine-leaf-nics/
```
- Copy the base template (`base.yaml`) for Leaf0. For example:

```
$ cp base.yaml controller0.yaml
```
- Edit the template for `controller0.yaml` and scroll to the network configuration section, which looks like the following:

```yaml
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
```

- In the `network_config` section, define the control plane / provisioning interface. For example:

```yaml
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlane0SubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: Leaf0EC2MetadataIp
  - ip_netmask: 192.168.10.0/24
    next_hop:
      get_param: ControlPlane0DefaultRoute
```

Note that the parameters used in this case are specific to Leaf0: `ControlPlane0SubnetCidr`, `Leaf0EC2MetadataIp`, and `ControlPlane0DefaultRoute`. Also note the use of the CIDR for Leaf0 on the provisioning network (192.168.10.0/24), which is used as a route.
- Define a new interface for the external bridge:
```yaml
- type: ovs_bridge
  name: br-ex
  use_dhcp: false
  addresses:
  - ip_netmask:
      get_param: ExternalIpSubnet
  routes:
  - default: true
    next_hop:
      get_param: ExternalInterfaceDefaultRoute
  members:
  - type: interface
    name: nic2
    primary: true
```

The `members` section also contains the VLAN configuration for the isolated networks.
- Add each VLAN to the `members` section. For example, to add the Storage network:

```yaml
members:
- type: interface
  name: nic2
  primary: true
- type: vlan
  vlan_id:
    get_param: Storage0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: Storage0IpSubnet
  routes:
  - ip_netmask:
      get_param: StorageSupernet
    next_hop:
      get_param: Storage0InterfaceDefaultRoute
```

Note that each interface structure uses parameters specific to Leaf0 (`Storage0NetworkVlanID`, `Storage0IpSubnet`, `Storage0InterfaceDefaultRoute`) as well as the supernet route (`StorageSupernet`).
- Add a VLAN structure for each of the following Controller networks: Storage, StorageMgmt, InternalApi, and Tenant. Each VLAN follows the same pattern; a sketch of the InternalApi VLAN appears after this procedure.
- Save this file.
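For reference, a minimal sketch of the InternalApi VLAN for Leaf0, following the same pattern as the Storage example (the parameter names are assumptions that follow the Leaf0 naming convention used throughout this chapter):

```yaml
- type: vlan
  vlan_id:
    get_param: InternalApi0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApi0IpSubnet
  routes:
  - ip_netmask:
      get_param: InternalApiSupernet
    next_hop:
      get_param: InternalApi0InterfaceDefaultRoute
```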
3.4. Creating custom Compute NIC configurations
This procedure creates a YAML structure for Compute nodes on Leaf0, Leaf1, and Leaf2.
Procedure
- Change to your custom NIC directory:

```
$ cd ~/templates/spine-leaf-nics/
```
- Copy the base template (`base.yaml`) for Leaf0. For example:

```
$ cp base.yaml compute0.yaml
```
- Edit the template for `compute0.yaml` and scroll to the network configuration section, which looks like the following:

```yaml
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
```

- In the `network_config` section, define the control plane / provisioning interface. For example:

```yaml
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlane0SubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: Leaf0EC2MetadataIp
  - ip_netmask: 192.168.10.0/24
    next_hop:
      get_param: ControlPlane0DefaultRoute
```

Note that the parameters used in this case are specific to Leaf0: `ControlPlane0SubnetCidr`, `Leaf0EC2MetadataIp`, and `ControlPlane0DefaultRoute`. Also note the use of the CIDR for Leaf0 on the provisioning network (192.168.10.0/24), which is used as a route.
- Define a new interface for the external bridge:
```yaml
- type: ovs_bridge
  name: br-ex
  use_dhcp: false
  members:
  - type: interface
    name: nic2
    primary: true
```

The `members` section also contains the VLAN configuration for the isolated networks.
- Add each VLAN to the `members` section. For example, to add the Storage network:

```yaml
members:
- type: interface
  name: nic2
  primary: true
- type: vlan
  vlan_id:
    get_param: Storage0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: Storage0IpSubnet
  routes:
  - ip_netmask:
      get_param: StorageSupernet
    next_hop:
      get_param: Storage0InterfaceDefaultRoute
```

Note that each interface structure uses parameters specific to Leaf0 (`Storage0NetworkVlanID`, `Storage0IpSubnet`, `Storage0InterfaceDefaultRoute`) as well as the supernet route (`StorageSupernet`).
- Add a VLAN structure for each of the following Compute networks: Storage, InternalApi, and Tenant.
- Save this file.
- Copy this file for use with Leaf1 and Leaf2:

```
$ cp compute0.yaml compute1.yaml
$ cp compute0.yaml compute2.yaml
```
- Edit `compute1.yaml` and scroll to the `network_config` section. Replace the Leaf0 parameters with the Leaf1 parameters. This includes parameters for the following networks: Control Plane, Storage, InternalApi, and Tenant. A sketch of the resulting control plane interface follows this step. Save this file when complete.
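For reference, a minimal sketch of the resulting Leaf1 control plane interface in `compute1.yaml`, assuming the 192.168.11.0/24 provisioning subnet that this guide uses for Leaf1:

```yaml
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlane1SubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: Leaf1EC2MetadataIp
  - ip_netmask: 192.168.11.0/24
    next_hop:
      get_param: ControlPlane1DefaultRoute
```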
- Edit `compute2.yaml` and scroll to the `network_config` section. Replace the Leaf0 parameters with the Leaf2 parameters. This includes parameters for the following networks: Control Plane, Storage, InternalApi, and Tenant. Save this file when complete.
3.5. Creating custom Ceph Storage NIC configurations
This procedure creates a YAML structure for Ceph Storage nodes on Leaf0, Leaf1, and Leaf2.
Procedure
- Change to your custom NIC directory:

```
$ cd ~/templates/spine-leaf-nics/
```
- Copy the base template (`base.yaml`) for Leaf0. For example:

```
$ cp base.yaml ceph-storage0.yaml
```
- Edit the template for `ceph-storage0.yaml` and scroll to the network configuration section, which looks like the following:

```yaml
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
```

- In the `network_config` section, define the control plane / provisioning interface. For example:

```yaml
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlane0SubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: Leaf0EC2MetadataIp
  - ip_netmask: 192.168.10.0/24
    next_hop:
      get_param: ControlPlane0DefaultRoute
```

Note that the parameters used in this case are specific to Leaf0: `ControlPlane0SubnetCidr`, `Leaf0EC2MetadataIp`, and `ControlPlane0DefaultRoute`. Also note the use of the CIDR for Leaf0 on the provisioning network (192.168.10.0/24), which is used as a route.
- Define a new interface for the external bridge:
```yaml
- type: ovs_bridge
  name: br-ex
  use_dhcp: false
  members:
  - type: interface
    name: nic2
    primary: true
```

The `members` section also contains the VLAN configuration for the isolated networks.
- Add each VLAN to the `members` section. For example, to add the Storage network:

```yaml
members:
- type: interface
  name: nic2
  primary: true
- type: vlan
  vlan_id:
    get_param: Storage0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: Storage0IpSubnet
  routes:
  - ip_netmask:
      get_param: StorageSupernet
    next_hop:
      get_param: Storage0InterfaceDefaultRoute
```

Note that each interface structure uses parameters specific to Leaf0 (`Storage0NetworkVlanID`, `Storage0IpSubnet`, `Storage0InterfaceDefaultRoute`) as well as the supernet route (`StorageSupernet`).
- Add a VLAN structure for each of the following Ceph Storage networks: Storage and StorageMgmt.
- Save this file.
- Copy this file for use with Leaf1 and Leaf2:

```
$ cp ceph-storage0.yaml ceph-storage1.yaml
$ cp ceph-storage0.yaml ceph-storage2.yaml
```
- Edit `ceph-storage1.yaml` and scroll to the `network_config` section. Replace the Leaf0 parameters with the Leaf1 parameters. This includes parameters for the following networks: Control Plane, Storage, and StorageMgmt. Save this file when complete.
- Edit `ceph-storage2.yaml` and scroll to the `network_config` section. Replace the Leaf0 parameters with the Leaf2 parameters. This includes parameters for the following networks: Control Plane, Storage, and StorageMgmt. Save this file when complete.
3.6. Creating a network environment file
This procedure creates a basic network environment file for use later.
Procedure
- Create a `network-environment.yaml` file in your `stack` user's `templates` directory.
- Add the following sections to the environment file:

```yaml
resource_registry:

parameter_defaults:
```
Note the following:
- The `resource_registry` maps networking resources to their respective NIC templates.
- The `parameter_defaults` section defines additional networking parameters relevant to your configuration.
The next few sections add details to your network environment file to configure certain aspects of the spine-leaf architecture. Once complete, you include this file with your `openstack overcloud deploy` command.
3.7. Mapping network resources to NIC templates
This procedure maps the relevant resources for network configurations to their respective NIC templates.
Procedure
- Edit your `network-environment.yaml` file.
- Add the resource mappings to your `resource_registry`. The resource names take the following format:

```
OS::TripleO::[ROLE]::Net::SoftwareConfig: [NIC TEMPLATE]
```
For this guide's scenario, the `resource_registry` includes the following resource mappings:

```yaml
resource_registry:
  OS::TripleO::Controller0::Net::SoftwareConfig: ./spine-leaf-nics/controller0.yaml
  OS::TripleO::Compute0::Net::SoftwareConfig: ./spine-leaf-nics/compute0.yaml
  OS::TripleO::Compute1::Net::SoftwareConfig: ./spine-leaf-nics/compute1.yaml
  OS::TripleO::Compute2::Net::SoftwareConfig: ./spine-leaf-nics/compute2.yaml
  OS::TripleO::CephStorage0::Net::SoftwareConfig: ./spine-leaf-nics/ceph-storage0.yaml
  OS::TripleO::CephStorage1::Net::SoftwareConfig: ./spine-leaf-nics/ceph-storage1.yaml
  OS::TripleO::CephStorage2::Net::SoftwareConfig: ./spine-leaf-nics/ceph-storage2.yaml
```
- Save the `network-environment.yaml` file.
3.8. Spine-leaf routing
Each role requires routes on each isolated network that point to the other subnets used for the same function. For example, when a Compute1 node contacts a Controller node on the InternalApi VIP, the traffic should target the InternalApi1 interface through the InternalApi1 gateway. As a result, the return traffic from the Controller node to the InternalApi1 network should go through the InternalApi network gateway.
The supernet routes apply to all isolated networks on each role to avoid sending traffic through the default gateway, which by default is the Control Plane network on non-Controller nodes and the External network on Controller nodes.
You need to configure these routes on the isolated networks because Red Hat Enterprise Linux implements strict reverse path filtering on inbound traffic by default. If an API is listening on the Internal API interface and a request comes in to that API, it only accepts the request if the return path route is on the Internal API interface. If the server is listening on the Internal API network but the return path to the client is through the Control Plane, the server drops the request due to the reverse path filter.
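For example, you can check the reverse path filtering mode on a deployed node (illustrative; a value of `1` means strict filtering, in which the kernel drops packets whose return route does not point back out of the receiving interface):

```
$ sysctl net.ipv4.conf.all.rp_filter
net.ipv4.conf.all.rp_filter = 1
```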
The following diagram shows an attempt to route traffic through the control plane, which does not succeed. The return route from the router to the controller node does not match the interface where the VIP is listening, so the packet is dropped. 192.168.24.0/24 is directly connected to the controller, so it is considered local to the Control Plane network.
Figure 3.1. Routed traffic through Control Plane

For comparison, this diagram shows routing running through the Internal API networks:
Figure 3.2. Routed traffic through Internal API

3.9. Assigning routes for composable networks
This procedure defines the routing for the leaf networks.
Procedure
- Edit your `network-environment.yaml` file.
- Add the supernet route parameters to the `parameter_defaults` section. Each isolated network should have a supernet route applied. For example:

```yaml
parameter_defaults:
  StorageSupernet: 172.16.0.0/16
  StorageMgmtSupernet: 172.17.0.0/16
  InternalApiSupernet: 172.18.0.0/16
  TenantSupernet: 172.19.0.0/16
```
Note: The network interface templates should contain the supernet parameters for each network. For example:

```yaml
- type: vlan
  vlan_id:
    get_param: Storage0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: Storage0IpSubnet
  routes:
  - ip_netmask:
      get_param: StorageSupernet
    next_hop:
      get_param: Storage0InterfaceDefaultRoute
```

- Add the following `ExtraConfig` settings to the `parameter_defaults` section to address routing for specific components on Compute and Ceph Storage nodes. Note that the Compute roles in this scenario are Compute0, Compute1, and Compute2, so each override references the matching leaf network:

```yaml
parameter_defaults:
  ...
  Compute0ExtraConfig:
    nova::vncproxy::host: "%{hiera('internal_api0')}"
    neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant0')}"
  Compute1ExtraConfig:
    nova::vncproxy::host: "%{hiera('internal_api1')}"
    neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant1')}"
  Compute2ExtraConfig:
    nova::vncproxy::host: "%{hiera('internal_api2')}"
    neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant2')}"
  CephAnsibleExtraConfig:
    public_network: '172.16.0.0/24,172.16.1.0/24,172.16.2.0/24'
    cluster_network: '172.17.0.0/24,172.17.1.0/24,172.17.2.0/24'
```

For the Compute `ExtraConfig` parameters:
- `nova::vncproxy::host` defines the IP address to use for the VNC proxy.
- `neutron::agents::ml2::ovs::local_ip` defines the IP address to use for the ML2 OVS agent.

For `CephAnsibleExtraConfig`:
- The `public_network` setting lists all the storage networks (one per leaf).
- The `cluster_network` setting lists the storage management networks (one per leaf).
3.10. Setting control plane parameters
You usually define networking details for isolated spine-leaf networks using a network data file. The exception is the control plane network, which the undercloud created. However, the overcloud requires access to the control plane for each leaf. This requires some additional parameters, which you define in your `network-environment.yaml` file. For example, the following snippet is from an example NIC template for the Controller role on Leaf0:

```yaml
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlane0SubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: Leaf0EC2MetadataIp
  - ip_netmask: 192.168.10.0/24
    next_hop:
      get_param: ControlPlane0DefaultRoute
```

In this instance, you need to define the IP, subnet, metadata IP, and default route for the respective Control Plane network on Leaf 0.
Procedure
- Edit your `network-environment.yaml` file.
- In the `parameter_defaults` section, add the mapping to the main control plane subnet:

```yaml
parameter_defaults:
  ...
  ControlPlaneSubnet: leaf0
```
- Add the control plane subnet mapping for each spine-leaf network:

```yaml
parameter_defaults:
  ...
  Controller0ControlPlaneSubnet: leaf0
  Compute0ControlPlaneSubnet: leaf0
  Compute1ControlPlaneSubnet: leaf1
  Compute2ControlPlaneSubnet: leaf2
  CephStorage0ControlPlaneSubnet: leaf0
  CephStorage1ControlPlaneSubnet: leaf1
  CephStorage2ControlPlaneSubnet: leaf2
```
- Add the control plane routes for each leaf:

```yaml
parameter_defaults:
  ...
  ControlPlane0DefaultRoute: 192.168.10.1
  ControlPlane0SubnetCidr: '24'
  ControlPlane1DefaultRoute: 192.168.11.1
  ControlPlane1SubnetCidr: '24'
  ControlPlane2DefaultRoute: 192.168.12.1
  ControlPlane2SubnetCidr: '24'
```

The default route parameters are typically the IP address set for the `gateway` of each provisioning subnet. Refer to your `undercloud.conf` file for this information.
- Add the parameters for the EC2 metadata IPs:
```yaml
parameter_defaults:
  ...
  Leaf0EC2MetadataIp: 192.168.10.1
  Leaf1EC2MetadataIp: 192.168.11.1
  Leaf2EC2MetadataIp: 192.168.12.1
```
These act as routes through the control plane for the EC2 metadata service (169.254.169.254/32). You should typically set them to the respective `gateway` for each leaf on the provisioning network.
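For reference, a hypothetical `undercloud.conf` excerpt showing the kind of routed subnet definition these values come from (the section name and address ranges are examples based on the Leaf1 values above; your own file may differ):

```
[leaf1]
cidr = 192.168.11.0/24
dhcp_start = 192.168.11.10
dhcp_end = 192.168.11.90
inspection_iprange = 192.168.11.100,192.168.11.190
gateway = 192.168.11.1
masquerade = False
```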
- Save the `network-environment.yaml` file.
3.11. Creating a roles data file
This section demonstrates how to define each composable role for each leaf and attach the composable networks to each respective role.
Procedure
- Create a custom `roles` directory in your `stack` user's local directory:

```
$ mkdir ~/roles
```
- Copy the default Controller, Compute, and Ceph Storage roles from the director's core template collection to the `roles` directory, and rename the files for Leaf 0:

```
$ cp /usr/share/openstack-tripleo-heat-templates/roles/Controller.yaml ~/roles/Controller0.yaml
$ cp /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml ~/roles/Compute0.yaml
$ cp /usr/share/openstack-tripleo-heat-templates/roles/CephStorage.yaml ~/roles/CephStorage0.yaml
```
- Edit the `Controller0.yaml` file:

```
$ vi ~/roles/Controller0.yaml
```

- Edit the `name`, `networks`, and `HostnameFormatDefault` parameters in this file so that they align with the Leaf 0 specific parameters. For example:

```yaml
- name: Controller0
  ...
  networks:
  - External
  - InternalApi0
  - Storage0
  - StorageMgmt0
  - Tenant0
  ...
  HostnameFormatDefault: '%stackname%-controller0-%index%'
```

Save this file.
- Edit the `Compute0.yaml` file:

```
$ vi ~/roles/Compute0.yaml
```

- Edit the `name`, `networks`, and `HostnameFormatDefault` parameters in this file so that they align with the Leaf 0 specific parameters. For example:

```yaml
- name: Compute0
  ...
  networks:
  - InternalApi0
  - Tenant0
  - Storage0
  HostnameFormatDefault: '%stackname%-compute0-%index%'
```

Save this file.
- Edit the `CephStorage0.yaml` file:

```
$ vi ~/roles/CephStorage0.yaml
```

- Edit the `name` and `networks` parameters in this file so that they align with the Leaf 0 specific parameters. In addition, add the `HostnameFormatDefault` parameter and define the Leaf 0 hostname for the Ceph Storage nodes. For example:

```yaml
- name: CephStorage0
  ...
  networks:
  - Storage0
  - StorageMgmt0
  HostnameFormatDefault: '%stackname%-cephstorage0-%index%'
```

Save this file.
- Copy the Leaf 0 Compute and Ceph Storage files as a basis for your Leaf 1 and Leaf 2 files:

```
$ cp ~/roles/Compute0.yaml ~/roles/Compute1.yaml
$ cp ~/roles/Compute0.yaml ~/roles/Compute2.yaml
$ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage1.yaml
$ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage2.yaml
```
- Edit the `name`, `networks`, and `HostnameFormatDefault` parameters in the Leaf 1 and Leaf 2 files so that they align with the respective leaf network parameters. For example, the parameters in the Leaf 1 Compute file have the following values:

```yaml
- name: Compute1
  ...
  networks:
  - InternalApi1
  - Tenant1
  - Storage1
  HostnameFormatDefault: '%stackname%-compute1-%index%'
```

The Leaf 1 Ceph Storage parameters have the following values:

```yaml
- name: CephStorage1
  ...
  networks:
  - Storage1
  - StorageMgmt1
  HostnameFormatDefault: '%stackname%-cephstorage1-%index%'
```

- When your roles are ready, generate the full roles data file using the following command:
```
$ openstack overcloud roles generate --roles-path ~/roles -o roles_data_spine_leaf.yaml Controller0 Compute0 Compute1 Compute2 CephStorage0 CephStorage1 CephStorage2
```
This creates a full `roles_data_spine_leaf.yaml` file that includes all the custom roles for each respective leaf network.
See Appendix C, Example roles_data file for a full example of this file.
3.12. Deploying a spine-leaf enabled overcloud
All the files are now ready for deployment. This section reviews each file and the deployment command.
Procedure
- Review the `/home/stack/templates/network_data_spine_leaf.yaml` file and ensure it contains each network for each leaf.

Note: There is currently no validation performed for the network subnet and `allocation_pools` values. Be certain you have defined these consistently and that they do not conflict with existing networks.
- Review the NIC templates contained in `~/templates/spine-leaf-nics/` and ensure the interfaces for each role on each leaf are correctly defined.
- Review the `network-environment.yaml` environment file and ensure it contains all custom parameters that fall outside the control of the network data file. This includes routes, control plane parameters, and a `resource_registry` section that references the custom NIC templates for each role.
- Review the `/home/stack/templates/roles_data_spine_leaf.yaml` values and ensure you have defined a role for each leaf.
- Check the `/home/stack/templates/nodes_data.yaml` file and ensure all roles have an assigned flavor and a node count. Check also that all nodes for each leaf are correctly tagged. A hypothetical sketch of this file follows this step.
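For reference, a hypothetical `nodes_data.yaml` sketch. The flavor names are examples and must match the flavors you created when tagging your nodes; the parameter names follow the standard TripleO `[ROLE]Count` and `Overcloud[ROLE]Flavor` conventions for custom roles:

```yaml
parameter_defaults:
  Controller0Count: 3
  OvercloudController0Flavor: control0
  Compute0Count: 1
  OvercloudCompute0Flavor: compute0
  Compute1Count: 1
  OvercloudCompute1Flavor: compute1
  Compute2Count: 1
  OvercloudCompute2Flavor: compute2
  CephStorage0Count: 1
  OvercloudCephStorage0Flavor: ceph0
  CephStorage1Count: 1
  OvercloudCephStorage1Flavor: ceph1
  CephStorage2Count: 1
  OvercloudCephStorage2Flavor: ceph2
```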
- Run the `openstack overcloud deploy` command to apply the spine-leaf configuration. For example:

```
openstack overcloud deploy --templates \
    -n /home/stack/templates/network_data_spine_leaf.yaml \
    -r /home/stack/templates/roles_data_spine_leaf.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/templates/network-environment.yaml \
    -e /home/stack/templates/nodes_data.yaml \
    -e [OTHER ENVIRONMENT FILES]
```
Add any additional environment files. For example, an environment file with your container image locations or Ceph cluster configuration.
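For example, a hypothetical extended command. The `overcloud_images.yaml` file name is an example of a container image environment file; the ceph-ansible environment file path shown is the standard location in the core template collection:

```
openstack overcloud deploy --templates \
    -n /home/stack/templates/network_data_spine_leaf.yaml \
    -r /home/stack/templates/roles_data_spine_leaf.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    -e /home/stack/templates/network-environment.yaml \
    -e /home/stack/templates/nodes_data.yaml \
    -e /home/stack/templates/overcloud_images.yaml
```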
- Wait until the spine-leaf enabled overcloud deploys.
