Chapter 2. Configuring the undercloud
This chapter describes how to configure the undercloud to accommodate routed spine-leaf with composable networks.
2.1. Configuring the spine-leaf provisioning networks
To configure the provisioning networks for your spine-leaf infrastructure, edit the undercloud.conf file and set the relevant parameters as defined in the following procedure.
Procedure
-
Log in to the undercloud as the stack user. If you do not already have an undercloud.conf file, copy the sample template file:

[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
-
Edit your undercloud.conf file. In the [DEFAULT] section:

Set enable_routed_networks to true:

enable_routed_networks = true

Define your list of subnets using the subnets parameter. Define one subnet for each layer 2 segment in the routed spine and leaf:

subnets = leaf0,leaf1,leaf2

Specify the subnet associated with the physical layer 2 segment local to the undercloud using the local_subnet parameter:

local_subnet = leaf0

Create a new section for each subnet defined with the subnets parameter:

[leaf0]
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.190
gateway = 192.168.10.1
masquerade = False

[leaf1]
cidr = 192.168.11.0/24
dhcp_start = 192.168.11.10
dhcp_end = 192.168.11.90
inspection_iprange = 192.168.11.100,192.168.11.190
gateway = 192.168.11.1
masquerade = False

[leaf2]
cidr = 192.168.12.0/24
dhcp_start = 192.168.12.10
dhcp_end = 192.168.12.90
inspection_iprange = 192.168.12.100,192.168.12.190
gateway = 192.168.12.1
masquerade = False
-
Save the undercloud.conf file. Run the undercloud installation command:

[stack@director ~]$ openstack undercloud install
This creates three subnets on the provisioning network / control plane. The overcloud uses each network to provision systems within each respective leaf.
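After the installation completes, you can verify that the three provisioning subnets exist on the undercloud. This is a quick check, assuming the leaf0, leaf1, and leaf2 subnet names used above and the standard stackrc credentials file; exact output columns depend on your client version:

$ source ~/stackrc
$ openstack subnet list -c Name -c Subnet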
To ensure proper relay of DHCP requests to the undercloud, you might need to configure a DHCP relay, as described in the next section.
2.2. Configuring a DHCP relay
The undercloud uses two DHCP servers on the provisioning network:
- one for introspection.
- one for provisioning.
When you configure a DHCP relay, ensure that it forwards DHCP requests to both DHCP servers on the undercloud.
You can use UDP broadcast with devices that support it to relay DHCP requests to the L2 network segment where the undercloud provisioning network is connected. Alternatively, you can use UDP unicast, which relays DHCP requests to specific IP addresses.
Configuration of DHCP relay on specific device types is beyond the scope of this document. As a reference, this document provides a DHCP relay configuration example that uses the implementation in ISC DHCP software. See the dhcrelay(8) manual page for details on how to use this implementation.
Broadcast DHCP relay
This method relays DHCP requests using UDP broadcast traffic onto the L2 network segment where the DHCP server or servers reside. All devices on the network segment receive the broadcast traffic. When using UDP broadcast, both DHCP servers on the undercloud receive the relayed DHCP request. Depending on the implementation, this is typically configured by specifying either the interface or the IP network address:
- Interface
- Specifying an interface connected to the L2 network segment where the DHCP requests are relayed.
- IP network address
- Specifying the network address of the IP network where the DHCP requests are relayed.
Unicast DHCP relay
This method relays DHCP requests using UDP unicast traffic to specific DHCP servers. When using UDP unicast, you must configure the device providing DHCP relay to relay DHCP requests to both the IP address assigned to the interface used for introspection on the undercloud and the IP address of the network namespace created by the OpenStack Networking (neutron) service to host the DHCP service for the ctlplane network.
The interface used for introspection is the one defined as inspection_interface in undercloud.conf.
It is common to use the br-ctlplane interface for introspection. The IP address defined as local_ip in undercloud.conf is on the br-ctlplane interface.
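To confirm the introspection address to relay to, you can check the addresses assigned to the bridge on the undercloud; this quick check assumes the default br-ctlplane interface name:

$ ip -4 addr show dev br-ctlplane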
The IP address allocated to the Neutron DHCP namespace is the first address available in the IP range configured for the local_subnet in undercloud.conf. The first address in the IP range is the one defined as dhcp_start in the configuration. For example: 172.20.0.10 would be the IP address when the following configuration is used:
[DEFAULT]
local_subnet = leaf0
subnets = leaf0,leaf1,leaf2

[leaf0]
cidr = 172.20.0.0/26
dhcp_start = 172.20.0.10
dhcp_end = 172.20.0.19
inspection_iprange = 172.20.0.20,172.20.0.29
gateway = 172.20.0.62
masquerade = False
The IP address for the DHCP namespace is automatically allocated. In most cases, it is the first address in the IP range. Make sure to verify this by running the following commands on the undercloud:
$ openstack port list --device-owner network:dhcp -c "Fixed IP Addresses"
+----------------------------------------------------------------------------+
| Fixed IP Addresses                                                          |
+----------------------------------------------------------------------------+
| ip_address='172.20.0.10', subnet_id='7526fbe3-f52a-4b39-a828-ec59f4ed12b2' |
+----------------------------------------------------------------------------+

$ openstack subnet show 7526fbe3-f52a-4b39-a828-ec59f4ed12b2 -c name
+-------+-------+
| Field | Value |
+-------+-------+
| name  | leaf0 |
+-------+-------+
Example DHCP relay configuration
In the following example, the dhcrelay command from ISC DHCP software uses this configuration:
-
Interfaces to relay incoming DHCP requests: eth1, eth2, and eth3.
-
Interface the undercloud DHCP servers on the network segment are connected to: eth0.
-
The DHCP server used for introspection is listening on IP address 172.20.0.1.
-
The DHCP server used for provisioning is listening on IP address 172.20.0.10.
This results in the following dhcrelay command:
$ sudo dhcrelay -d --no-pid 172.20.0.10 172.20.0.1 \
  -iu eth0 -id eth1 -id eth2 -id eth3
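The command above runs dhcrelay in the foreground (-d). To keep the relay in place across reboots, one option is to wrap it in a systemd unit. The following is a minimal sketch, not a definitive configuration; it assumes the dhcrelay binary is installed at /usr/sbin/dhcrelay and reuses the interfaces and server addresses from the example above:

[Unit]
Description=DHCP relay for the undercloud provisioning network
After=network-online.target
Wants=network-online.target

[Service]
# Forward requests received on eth1-eth3 to both undercloud DHCP servers via eth0
ExecStart=/usr/sbin/dhcrelay -d --no-pid 172.20.0.10 172.20.0.1 -iu eth0 -id eth1 -id eth2 -id eth3
Restart=always

[Install]
WantedBy=multi-user.target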
Now that you have configured the provisioning network, you can configure the remaining overcloud leaf networks. You accomplish this with a series of configuration files.
2.3. Creating flavors and tagging nodes for leaf networks
Each role in each leaf network requires a flavor and role assignment so that you can tag nodes into their respective leaf. This procedure shows how to create each flavor and assign it to a role.
Procedure
Source the stackrc file:

$ source ~/stackrc
Create flavors for each custom role:
$ ROLES="control0 compute0 compute1 compute2 ceph-storage0 ceph-storage1 ceph-storage2"
$ for ROLE in $ROLES; do openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 $ROLE ; done
$ for ROLE in $ROLES; do openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="$ROLE" $ROLE ; done
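To confirm that a flavor and its profile property were created as expected, you can inspect one of them. This check assumes the compute0 flavor created above; the exact output fields depend on your client version:

$ openstack flavor show compute0 -c name -c properties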
Tag nodes to their respective leaf networks. For example, run the following command to tag a node with UUID 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 to the compute role on Leaf2:

$ openstack baremetal node set --property capabilities='profile:compute2,boot_option:local' 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13
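After tagging, you can review how profiles map to nodes. This assumes the tripleo client plugin that provides the overcloud profiles command is installed on the undercloud:

$ openstack overcloud profiles list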
Create an environment file (~/templates/node-data.yaml) that contains the mapping of flavors to roles:

parameter_defaults:
  OvercloudController0Flavor: control0
  OvercloudController0Count: 3
  OvercloudCompute0Flavor: compute0
  OvercloudCompute0Count: 3
  OvercloudCompute1Flavor: compute1
  OvercloudCompute1Count: 3
  OvercloudCompute2Flavor: compute2
  OvercloudCompute2Count: 3
  OvercloudCephStorage0Flavor: ceph-storage0
  OvercloudCephStorage0Count: 3
  OvercloudCephStorage1Flavor: ceph-storage1
  OvercloudCephStorage1Count: 3
  OvercloudCephStorage2Flavor: ceph-storage2
  OvercloudCephStorage2Count: 3
You can also set the number of nodes to deploy in the overcloud using each respective *Count parameter.
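Include this environment file when you deploy the overcloud. The sketch below assumes the ~/templates/node-data.yaml path used above; replace <other environment files> with the environment files your deployment already uses:

$ openstack overcloud deploy --templates \
  -e ~/templates/node-data.yaml \
  -e <other environment files>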
2.4. Mapping bare metal node ports to control plane network segments
To enable deployment onto an L3 routed network, each bare metal port must have its physical_network field configured. Each bare metal port is associated with a bare metal node in the OpenStack Bare Metal (ironic) service. The physical network names are the ones used in the subnets option in the undercloud configuration.
The physical network name of the subnet specified as local_subnet in undercloud.conf is special. It is always named ctlplane.
Procedure
Source the stackrc file:

$ source ~/stackrc
Check the bare metal nodes:
$ openstack baremetal node list
Ensure the bare metal nodes are in either the enroll or manageable state. If a bare metal node is not in one of these states, the command that sets the physical_network property on the bare metal port fails. To set all nodes to the manageable state, run the following command:

$ for node in $(openstack baremetal node list -f value -c Name); do openstack baremetal node manage $node --wait; done
Check which bare metal ports are associated with which bare metal node. For example:
$ openstack baremetal port list --node <node-uuid>
Set the physical-network parameter for the ports. In the example below, three subnets are defined in the configuration: leaf0, leaf1, and leaf2. The local_subnet is leaf0. Because the physical network for the local_subnet is always ctlplane, the bare metal port connected to leaf0 uses ctlplane. The remaining ports use the other leaf names:

$ openstack baremetal port set --physical-network ctlplane <port-uuid>
$ openstack baremetal port set --physical-network leaf1 <port-uuid>
$ openstack baremetal port set --physical-network leaf2 <port-uuid>
$ openstack baremetal port set --physical-network leaf2 <port-uuid>
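To confirm that a port now carries the expected value, you can show its physical_network field; <port-uuid> is a placeholder for one of the port UUIDs listed earlier:

$ openstack baremetal port show <port-uuid> -f value -c physical_network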
Make sure the nodes are in the available state before you deploy the overcloud:
$ openstack overcloud node provide --all-manageable
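To verify the provisioning state of the nodes, list them again. The column name shown assumes a recent OpenStack client:

$ openstack baremetal node list -c Name -c "Provisioning State"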
