Chapter 14. Deploying nodes with spine-leaf configuration by using director Operator
Deploy nodes with a spine-leaf networking architecture to replicate an extensive network topology within your environment. Current restrictions allow only one provisioning network for Metal3.
14.1. Creating or updating the OpenStackNetConfig custom resource to define all subnets
Define your OpenStackNetConfig custom resource and specify the subnets for the overcloud networks. Director Operator then renders the configuration and creates or updates the network topology.
Prerequisites
- Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- You have installed the oc command line tool on your workstation.
Procedure
Create a configuration file called openstacknetconfig.yaml:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp7s0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 192.168.25.250
        allocationStart: 192.168.25.100
        cidr: 192.168.25.0/24
        gateway: 192.168.25.1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.0.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.0.1
      vlan: 20
      attachConfiguration: br-osp
    - name: internal_api_leaf1
      ipv4:
        allocationEnd: 172.17.1.250
        allocationStart: 172.17.1.10
        cidr: 172.17.1.0/24
        routes:
        - destination: 172.17.0.0/24
          nexthop: 172.17.1.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.1.1
      vlan: 21
      attachConfiguration: br-osp
    - name: internal_api_leaf2
      ipv4:
        allocationEnd: 172.17.2.250
        allocationStart: 172.17.2.10
        cidr: 172.17.2.0/24
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.2.1
        - destination: 172.17.0.0/24
          nexthop: 172.17.2.1
      vlan: 22
      attachConfiguration: br-osp
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1350
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
        routes:
        - destination: 172.18.1.0/24
          nexthop: 172.18.0.1
        - destination: 172.18.2.0/24
          nexthop: 172.18.0.1
      vlan: 30
      attachConfiguration: br-osp
    - name: storage_leaf1
      ipv4:
        allocationEnd: 172.18.1.250
        allocationStart: 172.18.1.10
        cidr: 172.18.1.0/24
        routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.1.1
        - destination: 172.18.2.0/24
          nexthop: 172.18.1.1
      vlan: 31
      attachConfiguration: br-osp
    - name: storage_leaf2
      ipv4:
        allocationEnd: 172.18.2.250
        allocationStart: 172.18.2.10
        cidr: 172.18.2.0/24
        routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.2.1
        - destination: 172.18.1.0/24
          nexthop: 172.18.2.1
      vlan: 32
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1350
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
        routes:
        - destination: 172.19.1.0/24
          nexthop: 172.19.0.1
        - destination: 172.19.2.0/24
          nexthop: 172.19.0.1
      vlan: 40
      attachConfiguration: br-osp
    - name: storage_mgmt_leaf1
      ipv4:
        allocationEnd: 172.19.1.250
        allocationStart: 172.19.1.10
        cidr: 172.19.1.0/24
        routes:
        - destination: 172.19.0.0/24
          nexthop: 172.19.1.1
        - destination: 172.19.2.0/24
          nexthop: 172.19.1.1
      vlan: 41
      attachConfiguration: br-osp
    - name: storage_mgmt_leaf2
      ipv4:
        allocationEnd: 172.19.2.250
        allocationStart: 172.19.2.10
        cidr: 172.19.2.0/24
        routes:
        - destination: 172.19.0.0/24
          nexthop: 172.19.2.1
        - destination: 172.19.1.0/24
          nexthop: 172.19.2.1
      vlan: 42
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1350
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
        routes:
        - destination: 172.20.1.0/24
          nexthop: 172.20.0.1
        - destination: 172.20.2.0/24
          nexthop: 172.20.0.1
      vlan: 50
      attachConfiguration: br-osp
    - name: tenant_leaf1
      ipv4:
        allocationEnd: 172.20.1.250
        allocationStart: 172.20.1.10
        cidr: 172.20.1.0/24
        routes:
        - destination: 172.20.0.0/24
          nexthop: 172.20.1.1
        - destination: 172.20.2.0/24
          nexthop: 172.20.1.1
      vlan: 51
      attachConfiguration: br-osp
    - name: tenant_leaf2
      ipv4:
        allocationEnd: 172.20.2.250
        allocationStart: 172.20.2.10
        cidr: 172.20.2.0/24
        routes:
        - destination: 172.20.0.0/24
          nexthop: 172.20.2.1
        - destination: 172.20.1.0/24
          nexthop: 172.20.2.1
      vlan: 52
      attachConfiguration: br-osp
Create the OpenStackNetConfig resource to create the networks:
$ oc create -f openstacknetconfig.yaml -n openstack
Verification
View the resources and child resources for OpenStackNetConfig:
$ oc get openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetattachment -n openstack
$ oc get openstacknet -n openstack
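If you prefer a scriptable check, you can also inspect the resource status directly. The following is a generic oc sketch; the exact fields reported under .status can vary between director Operator versions:
$ oc describe openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetconfig/openstacknetconfig -n openstack -o jsonpath='{.status}{"\n"}'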
14.2. Adding roles for leaf networks to your deployment
To add roles for the leaf networks to your deployment, update the roles_data.yaml configuration file and create the ConfigMap. You must use roles_data.yaml as the filename.
Prerequisites
- Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- You have installed the oc command line tool on your workstation.
Procedure
Update the roles_data.yaml file:

...
###############################################################################
# Role: ComputeLeaf1                                                          #
###############################################################################
- name: ComputeLeaf1
  description: |
    Basic ComputeLeaf1 Node role
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_leaf1
    Tenant:
      subnet: tenant_leaf1
    Storage:
      subnet: storage_leaf1
  HostnameFormatDefault: '%stackname%-novacompute-leaf1-%index%'
...
###############################################################################
# Role: ComputeLeaf2                                                          #
###############################################################################
- name: ComputeLeaf2
  description: |
    Basic ComputeLeaf2 Node role
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_leaf2
    Tenant:
      subnet: tenant_leaf2
    Storage:
      subnet: storage_leaf2
  HostnameFormatDefault: '%stackname%-novacompute-leaf2-%index%'
...
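If you do not have an existing roles_data.yaml file to edit, you can generate the default role definitions first and then append the leaf roles shown above. A minimal sketch, assuming a host with the python3-tripleoclient package installed (you can also run this later from the openstackclient pod):
$ openstack overcloud roles generate -o roles_data.yaml Controller Compute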
In the ~/custom_environment_files directory, archive the templates into a tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
Create the tripleo-tarball-config ConfigMap:
$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
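Later procedures in this chapter ask you to re-create this ConfigMap after you change the templates. If the ConfigMap already exists, oc create fails, so you can use the standard dry-run and apply idiom instead; this is a generic OpenShift sketch, not a director Operator requirement:
$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack \
    --dry-run=client -o yaml | oc apply -f -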
14.3. Creating NIC templates for the new roles
In Red Hat OpenStack Platform (RHOSP) 16.2, the tripleo NIC templates include the InterfaceRoutes parameter by default. In a standard deployment, the routes parameter that is rendered in the environments/network-environment.yaml configuration file is set on the host_routes property of the Networking service (neutron) network and added to the InterfaceRoutes parameter.
Because the Networking service (neutron) is not present in a director Operator deployment, you must add the routes for a specific network directly to the NIC templates that you create for the new roles, and concatenate the lists.
14.3.1. Creating default network routes
Create the default network routes by adding the network routes to the NIC template and then concatenating the lists.
Procedure
- Open the NIC template.
Add the network routes to the template, and then concatenate the lists:
parameters:
  ...
  {{ $net.Name }}Routes:
    default: []
    description: >
      Routes for the storage network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
  ...
            - type: interface
              ...
              routes:
                list_concat_unique:
                  - get_param: {{ $net.Name }}Routes
                  - get_param: {{ $net.Name }}InterfaceRoutes
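To illustrate the effect of list_concat_unique, assume hypothetical values: the {{ $net.Name }}Routes parameter resolves to the routes from the subnet host_routes attribute, and {{ $net.Name }}InterfaceRoutes is set explicitly to one extra route. The rendered interface configuration then contains the union of both lists, with duplicates removed:

# Hypothetical input values for one interface on the storage network:
# StorageRoutes:          [{'destination': '172.18.0.0/24', 'nexthop': '172.18.1.1'}]
# StorageInterfaceRoutes: [{'destination': '172.18.2.0/24', 'nexthop': '172.18.1.1'}]
# Resulting routes list on the interface after list_concat_unique:
routes:
- destination: 172.18.0.0/24
  nexthop: 172.18.1.1
- destination: 172.18.2.0/24
  nexthop: 172.18.1.1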
14.3.2. Subnet routes
The route information for each subnet is automatically rendered to the tripleo environment file environments/network-environment.yaml, which the Ansible playbooks use. In the NIC templates, use the <network_name>Routes_<subnet_name> parameter to set the correct routing on the host, for example, StorageRoutes_storage_leaf1.
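For example, with the storage_leaf1 subnet defined in the OpenStackNetConfig example earlier in this chapter, you can expect the rendered environments/network-environment.yaml file to contain a parameter similar to the following. This is an illustrative sketch based on that example, not verbatim operator output:

parameter_defaults:
  StorageRoutes_storage_leaf1:
    - destination: 172.18.0.0/24
      nexthop: 172.18.1.1
    - destination: 172.18.2.0/24
      nexthop: 172.18.1.1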
14.3.3. Modifying NIC templates for spine-leaf networking
To configure spine-leaf networking, modify the NIC templates for each role and re-create the ConfigMap.
Prerequisites
- Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- You have installed the oc command line tool on your workstation.
Procedure
Create NIC templates for each Compute role:
...
  StorageRoutes_storage_leaf1:
    default: []
    description: >
      Routes for the storage network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
...
  InternalApiRoutes_internal_api_leaf1:
    default: []
    description: >
      Routes for the internal_api network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
...
  TenantRoutes_tenant_leaf1:
    default: []
    description: >
      Routes for the tenant network traffic.
      JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
      Unless the default is changed, the parameter is automatically resolved
      from the subnet host_routes attribute.
    type: json
...
              get_param: StorageIpSubnet
              routes:
                list_concat_unique:
                  - get_param: StorageRoutes_storage_leaf1
            - type: vlan
              ...
              get_param: InternalApiIpSubnet
              routes:
                list_concat_unique:
                  - get_param: InternalApiRoutes_internal_api_leaf1
              ...
              get_param: TenantIpSubnet
              routes:
                list_concat_unique:
                  - get_param: TenantRoutes_tenant_leaf1
            - type: ovs_bridge
...
In the ~/custom_environment_files directory, archive the templates into a tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
Create the tripleo-tarball-config ConfigMap:
$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
14.3.4. Creating or updating an environment file to register the NIC templates
To create or update your environment file, add the NIC templates for the new nodes to the resource registry and re-create the ConfigMap.
Prerequisites
- Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- You have installed the oc command line tool on your workstation.
- The tripleo-tarball-config ConfigMap is updated with the required roles_data.yaml file and the NIC templates for the new roles.
Procedure
Add the NIC templates for the new nodes to an environment file in the resource_registry section:
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: net-config-two-nic-vlan-compute.yaml
  OS::TripleO::ComputeLeaf1::Net::SoftwareConfig: net-config-two-nic-vlan-compute_leaf1.yaml
  OS::TripleO::ComputeLeaf2::Net::SoftwareConfig: net-config-two-nic-vlan-compute_leaf2.yaml
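Save this environment file together with the other custom templates so that it is included in the tarball in the next step. As an illustrative sketch, the directory might contain files such as the following; the environment file name spine-leaf-nic-environment.yaml is hypothetical:
$ ls ~/custom_environment_files
net-config-two-nic-vlan-compute.yaml
net-config-two-nic-vlan-compute_leaf1.yaml
net-config-two-nic-vlan-compute_leaf2.yaml
roles_data.yaml
spine-leaf-nic-environment.yaml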
In the ~/custom_environment_files directory, archive the templates into a tarball:
$ tar -cvzf custom-config.tar.gz *.yaml
Create the tripleo-tarball-config ConfigMap:
$ oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
14.4. Deploying the overcloud with multiple routed networks
To deploy the overcloud with multiple routed networks, create the control plane and the Compute nodes for spine-leaf networking, and then render and apply the Ansible playbooks.
14.4.1. Creating the control plane
To create the control plane, specify the resources for the Controller nodes. Director Operator then creates the openstackclient pod for remote shell access.
Prerequisites
- Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- You have installed the oc command line tool on your workstation.
- You have used the OpenStackNetConfig resource to create a control plane network and any additional network resources.
Procedure
Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. The following example shows a specification for a control plane with a single Controller node:

apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  gitSecret: git-secret
  openStackClientImageURL: registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2
  openStackClientNetworks:
    - ctlplane
    - external
    - internal_api
    - internal_api_leaf1 # optionally the openstackclient can also be connected to subnets
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword
  domainName: ostest.test.metalkube.org
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 1
      networks:
        - ctlplane
        - internal_api
        - external
        - tenant
        - storage
        - storage_mgmt
      cores: 6
      memory: 20
      rootDisk:
        diskSize: 500
        baseImageVolumeName: openstack-base-img
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
  enableFencing: False
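The example uses a single Controller node, which is suitable for a test environment. For a highly available control plane you typically run three Controller virtual machines; a minimal variation of the example above changes only the role count:

  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 3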
Create the control plane:
$ oc create -f openstack-controller.yaml -n openstack
Wait until OpenShift Container Platform creates the resources related to the OpenStackControlPlane resource. Director Operator also creates an openstackclient pod that provides remote shell access for running Red Hat OpenStack Platform (RHOSP) commands.
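To follow progress while you wait, you can watch the resources in the openstack namespace with the generic oc watch flag. This is a convenience sketch, not a required step:
$ oc get pods -n openstack -w
$ oc get openstackcontrolplane/overcloud -n openstack -w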
Verification
View the resource for the control plane:
$ oc get openstackcontrolplane/overcloud -n openstack
View the OpenStackVMSet resources to verify the creation of the control plane virtual machine set:
$ oc get openstackvmsets -n openstack
View the virtual machine resources to verify the creation of the control plane virtual machines in OpenShift Virtualization:
$ oc get virtualmachines
Test access to the openstackclient pod remote shell:
$ oc rsh -n openstack openstackclient
14.4.2. Creating the Compute nodes for the leafs
To create Compute nodes for each leaf from bare-metal machines, include the resource specification in an OpenStackBaremetalSet custom resource.
Prerequisites
- Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly.
- You have installed the oc command line tool on your workstation.
- You have used the OpenStackNetConfig resource to create a control plane network and any additional network resources.
Procedure
Create a file named openstack-computeleaf1.yaml on your workstation. Include the resource specification for the Compute nodes. The following example shows a specification for one Compute leaf node:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: computeleaf1
  namespace: openstack
spec:
  # How many nodes to provision
  count: 1
  # The image to install on the provisioned nodes
  baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
  # The secret containing the SSH pub key to place on the provisioned nodes
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # The interface on the nodes that will be assigned an IP from the mgmtCidr
  ctlplaneInterface: enp7s0
  # Networks to associate with this host
  networks:
    - ctlplane
    - internal_api_leaf1
    - external
    - tenant_leaf1
    - storage_leaf1
  roleName: ComputeLeaf1
  passwordSecret: userpassword
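Create one OpenStackBaremetalSet resource for each leaf. As a sketch, an equivalent resource for the second leaf, in a hypothetical file named openstack-computeleaf2.yaml, references the leaf2 subnets and the ComputeLeaf2 role defined earlier in this chapter:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: computeleaf2
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  ctlplaneInterface: enp7s0
  networks:
    - ctlplane
    - internal_api_leaf2
    - external
    - tenant_leaf2
    - storage_leaf2
  roleName: ComputeLeaf2
  passwordSecret: userpassword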
Create the Compute nodes:
$ oc create -f openstack-computeleaf1.yaml -n openstack
Verification
View the resource for the Compute node:
$ oc get openstackbaremetalset/computeleaf1 -n openstack
View the baremetal machines managed by OpenShift to verify the creation of the Compute node:
$ oc get baremetalhosts -n openshift-machine-api
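If you want to check the provisioning progress of the underlying bare-metal hosts, you can query the BareMetalHost status fields. The field paths below follow the Metal3 BareMetalHost API and are shown as a convenience sketch:
$ oc get baremetalhosts -n openshift-machine-api \
    -o custom-columns=NAME:.metadata.name,STATE:.status.provisioning.state,CONSUMER:.spec.consumerRef.name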
14.5. Rendering and applying the playbooks
You can now configure your overcloud. For more information, see Configuring overcloud software with the director Operator.