Chapter 3. Configuring and installing the hub undercloud

3.1. Configuring the spine leaf provisioning networks

To configure the provisioning networks for your spine leaf infrastructure, edit the undercloud.conf file and set the parameters described in the following procedure.

Procedure

  1. Log in to the undercloud as the stack user.
  2. If you do not already have an undercloud.conf file, copy the sample template file:

    [stack@director ~]$ cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf
  3. Edit the undercloud.conf file.
  4. Set the following values in the [DEFAULT] section:

    1. Set local_ip to the undercloud IP on leaf0:

      local_ip = 192.168.10.1/24
    2. Set undercloud_public_vip to the externally facing IP address of the undercloud:

      undercloud_public_vip = 10.1.1.1
    3. Set undercloud_admin_vip to the administration IP address of the undercloud. This IP address is usually on leaf0:

      undercloud_admin_vip = 192.168.10.2
    4. Set local_interface to the interface to bridge for the local network:

      local_interface = eth1
    5. Set enable_routed_networks to true:

      enable_routed_networks = true
    6. Define your list of subnets using the subnets parameter. Define one subnet for each L2 segment in the routed spine and leaf:

      subnets = leaf0,leaf1,leaf2
    7. Specify the subnet associated with the physical L2 segment local to the undercloud using the local_subnet parameter:

      local_subnet = leaf0
    8. Set undercloud_nameservers to the DNS servers that the undercloud uses for name resolution:

      undercloud_nameservers = 10.11.5.19,10.11.5.20
      Tip

      You can find the current IP addresses of the DNS servers that the undercloud uses by looking in /etc/resolv.conf.

  5. Create a new section for each subnet that you define in the subnets parameter:

    [leaf0]
    cidr = 192.168.10.0/24
    dhcp_start = 192.168.10.10
    dhcp_end = 192.168.10.90
    inspection_iprange = 192.168.10.100,192.168.10.190
    gateway = 192.168.10.1
    masquerade = False
    
    [leaf1]
    cidr = 192.168.11.0/24
    dhcp_start = 192.168.11.10
    dhcp_end = 192.168.11.90
    inspection_iprange = 192.168.11.100,192.168.11.190
    gateway = 192.168.11.1
    masquerade = False
    
    [leaf2]
    cidr = 192.168.12.0/24
    dhcp_start = 192.168.12.10
    dhcp_end = 192.168.12.90
    inspection_iprange = 192.168.12.100,192.168.12.190
    gateway = 192.168.12.1
    masquerade = False
  6. Save the undercloud.conf file.
  7. Run the undercloud installation command:

    [stack@director ~]$ openstack undercloud install

This configuration creates three subnets on the provisioning network, or control plane. The undercloud uses each subnet to provision systems within the corresponding leaf.
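
After the undercloud installation completes, you can verify that a subnet exists for each leaf. The following commands are a minimal check, assuming the default ctlplane network name; the exact output depends on your environment:

$ source ~/stackrc
$ openstack subnet list --network ctlplane -c Name -c Subnet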

To ensure proper relay of DHCP requests to the undercloud, you might need to configure a DHCP relay.

3.2. Configuring a DHCP relay

The undercloud uses two DHCP servers on the provisioning network:

  • An introspection DHCP server.
  • A provisioning DHCP server.

When you configure a DHCP relay, ensure that you forward DHCP requests to both DHCP servers on the undercloud.

You can use UDP broadcast with devices that support it to relay DHCP requests to the L2 network segment where the undercloud provisioning network is connected. Alternatively, you can use UDP unicast, which relays DHCP requests to specific IP addresses.

Note

Configuration of DHCP relay on specific device types is beyond the scope of this document. As a reference, this document provides an example DHCP relay configuration that uses the implementation in ISC DHCP software. For more information, see the dhcrelay(8) manual page.

Broadcast DHCP relay

This method relays DHCP requests using UDP broadcast traffic onto the L2 network segment where the DHCP server or servers reside. All devices on the network segment receive the broadcast traffic. When using UDP broadcast, both DHCP servers on the undercloud receive the relayed DHCP request. Depending on the implementation, you can configure this by specifying either the interface or IP network address:

Interface
Specify an interface that is connected to the L2 network segment where the DHCP requests are relayed.
IP network address
Specify the network address of the IP network where the DHCP requests are relayed.

Unicast DHCP relay

This method relays DHCP requests using UDP unicast traffic to specific DHCP servers. When you use UDP unicast, you must configure the device that provides the DHCP relay to send DHCP requests to two IP addresses on the undercloud: the IP address that is assigned to the interface used for introspection, and the IP address of the network namespace that the OpenStack Networking (neutron) service creates to host the DHCP service for the ctlplane network.

The interface used for introspection is the one defined as inspection_interface in the undercloud.conf file. If you have not set this parameter, the default interface for the undercloud is br-ctlplane.

Note

It is common to use the br-ctlplane interface for introspection. The IP address that you define as the local_ip in the undercloud.conf file is on the br-ctlplane interface.
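
For example, you can confirm the IP address that is configured on this interface with a standard ip command. This is a minimal check only, and it assumes the default br-ctlplane bridge name:

$ ip addr show br-ctlplane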

The IP address allocated to the Neutron DHCP namespace is the first address available in the IP range that you configure for the local_subnet in the undercloud.conf file. The first address in the IP range is the one that you define as dhcp_start in the configuration. For example, 192.168.10.10 is the IP address if you use the following configuration:

[DEFAULT]
local_subnet = leaf0
subnets = leaf0,leaf1,leaf2

[leaf0]
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.190
gateway = 192.168.10.1
masquerade = False
Warning

The IP address for the DHCP namespace is automatically allocated. In most cases, this address is the first address in the IP range. To verify that this is the case, run the following commands on the undercloud:

$ openstack port list --device-owner network:dhcp -c "Fixed IP Addresses"
+----------------------------------------------------------------------------+
| Fixed IP Addresses                                                         |
+----------------------------------------------------------------------------+
| ip_address='192.168.10.10', subnet_id='7526fbe3-f52a-4b39-a828-ec59f4ed12b2' |
+----------------------------------------------------------------------------+
$ openstack subnet show 7526fbe3-f52a-4b39-a828-ec59f4ed12b2 -c name
+-------+--------+
| Field | Value  |
+-------+--------+
| name  | leaf0  |
+-------+--------+

Example dhcrelay configuration

In the following example, the dhcrelay command from the dhcp package uses this configuration:

  • Interfaces that relay incoming DHCP requests: eth1, eth2, and eth3.
  • Interface that connects to the network segment where the undercloud DHCP servers reside: eth0.
  • The DHCP server used for introspection listens on IP address 192.168.10.1.
  • The DHCP server used for provisioning listens on IP address 192.168.10.10.

This results in the following dhcrelay command:

$ sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 \
  -i eth0 -i eth1 -i eth2 -i eth3

Example Cisco IOS routing switch configuration

This example uses the following Cisco IOS configuration to perform these tasks:

  • Configure a VLAN to use for the provisioning network.
  • Add the IP address of the leaf.
  • Forward UDP and BOOTP requests to the introspection DHCP server that listens on IP address 192.168.10.1.
  • Forward UDP and BOOTP requests to the provisioning DHCP server that listens on IP address 192.168.10.10.

interface vlan 2
ip address 192.168.24.254 255.255.255.0
ip helper-address 192.168.10.1
ip helper-address 192.168.10.10
!

If you require the ability to automatically discover bare metal nodes without creating an instackenv.json file, you can use auto-discovery to register overcloud nodes. See Automatically discovering bare metal nodes for more information.

3.3. Prerequisites for using separate heat stacks

Your environment must meet the following prerequisites before you create a deployment using separate heat stacks:

  • A working Red Hat OpenStack Platform 16 undercloud.
  • For Ceph Storage users: access to Red Hat Ceph Storage 4.
  • For the central location: three nodes that are capable of serving as central Controller nodes. All three Controller nodes must be in the same heat stack. You cannot split Controller nodes, or any of the control plane services, across separate heat stacks.
  • Ceph storage is a requirement at the central location if you plan to deploy Ceph storage at the edge.
  • For each additional DCN site: three HCI compute nodes.
  • All nodes must be pre-provisioned or able to PXE boot from the central deployment network. You can use a DHCP relay to enable this connectivity for DCNs.
  • All nodes must have been introspected by the Bare Metal service (ironic).

3.4. Limitations of the example separate heat stacks deployment

This document provides an example deployment that uses separate heat stacks on Red Hat OpenStack Platform. This example environment has the following limitations:

  • Spine/Leaf networking - The example in this guide does not demonstrate the routing configuration that distributed compute node (DCN) deployments require.
  • Ironic DHCP Relay - This guide does not include instructions for configuring the Bare Metal service (ironic) with a DHCP relay.

3.5. Designing your separate heat stacks deployment

To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:

  • Controller nodes: A separate heat stack, named central in this example, deploys the Controller nodes. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
  • DCN sites: You can have separate, uniquely named heat stacks, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to the remote site.
Note

You must create a separate availability zone (AZ) for each stack.

Note

If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks so that ceph-ansible correctly configures Ceph to use those networks. Define the Storage and StorageMgmt networks as override values and enclose the values in single quotes. In the following example, the storage network (referred to as the public_network) spans two subnets; the subnets are separated by a comma and the complete value is enclosed in single quotes:

CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'

3.6. Reusing network resources in multiple stacks

You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields.

Note

Do not use the ManageNetworks setting if you are using the external_resource_* fields.

If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0.
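
For example, a renamed Internal API network for a compute stack might be defined in network_data.yaml as follows. This is an illustrative sketch only; the subnet and allocation pool values are placeholders that you must replace with your own addressing plan:

- name: InternalApiCompute0
  name_lower: internal_api_compute_0
  vip: false
  ip_subnet: '172.18.2.0/24'
  allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
  gateway_ip: '172.18.2.1'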

3.7. Using ManageNetworks to reuse network resources

With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file and the setting is applied globally to all network resources. The network_data.yaml file defines the network resources that the stack uses:

- name: StorageBackup
  vip: true
  name_lower: storage_backup
  ip_subnet: '172.21.1.0/24'
  allocation_pools: [{'start': '172.21.1.4', 'end': '172.21.1.250'}]
  gateway_ip: '172.21.1.1'

When you set ManageNetworks to false, the nodes use the existing networks that were already created in the central stack.

Use the following sequence so that the new stack does not manage the existing network resources.

Procedure

  1. Deploy the central stack with ManageNetworks: true, or leave the setting unset.
  2. Deploy the additional stack with ManageNetworks: false.

When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml. This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them.
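
For example, the additional stack can include an environment file similar to the following minimal sketch, which assumes that ManageNetworks is set through parameter_defaults:

parameter_defaults:
  ManageNetworks: false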

3.8. Using UUIDs to reuse network resources

If you need more control over which networks are reused between stacks, you can use the external_resource_* field for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as being externally managed, and heat does not perform any create, update, or delete operations on them.

Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack:

external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID

This example reuses the internal_api network from the control plane stack in a separate stack.

Procedure

  1. Identify the UUIDs of the related network resources:

    $ openstack network show internal_api -c id -f value
    $ openstack subnet show internal_api_subnet -c id -f value
    $ openstack port show internal_api_virtual_ip -c id -f value
  2. Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack:

    - name: InternalApi
      external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af
      external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27
      external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7
      name_lower: internal_api
      vip: true
      ip_subnet: '172.16.2.0/24'
      allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
      ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
      ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
      mtu: 1400

3.9. Managing separate heat stacks

The procedures in this guide show how to organize the environment files for three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.

Procedure

  1. Define the central heat stack:

    $ mkdir central
    $ touch central/overrides.yaml
  2. Extract data from the central heat stack into a common directory for all DCN sites:

    $ mkdir dcn-common
    $ touch dcn-common/overrides.yaml
    $ touch dcn-common/central-export.yaml

    The central-export.yaml file is created later by the openstack overcloud export command. It is in the dcn-common directory because all DCN deployments in this guide must use this file.

  3. Define the dcn0 site:

    $ mkdir dcn0
    $ touch dcn0/overrides.yaml

To deploy more DCN sites, create additional, sequentially numbered dcn directories.
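
For example, the dcn1 site follows the same pattern:

$ mkdir dcn1
$ touch dcn1/overrides.yaml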

Note

The touch command is used here only to illustrate the file organization. Each file must contain the appropriate content for the deployment to succeed.

3.10. Retrieving the container images

Use the following procedure, and its example file contents, to retrieve the container images that you need for deployments with separate heat stacks. You must ensure that the container images for optional or edge-specific services are included by running the openstack tripleo container image prepare command with the environment files for the edge site.

For more information, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/transitioning_to_containerized_services/obtaining-container-images#preparing-container-images.

Procedure

  1. Add your Registry Service Account credentials to containers.yaml:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: rhceph-4-rhel8
          ceph_tag: latest
          name_prefix: openstack-
          namespace: registry.redhat.io/rhosp16-rhel8
          tag: latest
      ContainerImageRegistryCredentials:
        # https://access.redhat.com/RegistryAuthentication
        registry.redhat.io:
          registry-service-account-username: registry-service-account-password
  2. Generate the environment file as images-env.yaml:

    openstack tripleo container image prepare \
    -e containers.yaml \
    --output-env-file images-env.yaml

    The resulting images-env.yaml file is included as part of the overcloud deployment procedure for the stack for which it is generated.
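
    For example, a deployment command for the corresponding stack might pass the generated file with the -e option, alongside the other environment files for that stack. This is a sketch only; complete deployment commands are shown in later sections:

    openstack overcloud deploy \
    --stack central \
    --templates /usr/share/openstack-tripleo-heat-templates/ \
    -e images-env.yaml \
    -e ~/central/overrides.yaml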

3.11. Deploying the central controllers without edge storage

You can deploy a distributed compute node cluster without Block storage at edge sites if you use Swift as a backend for glance at the central location. A site deployed without block storage cannot later be updated to include block storage, because of the differing role and networking profiles for each architecture.

Important

You cannot use Ceph as a backend for glance at the central location if you do not also use it at the edge. The following procedure uses lvm as the backend for Cinder, which is not supported for production. You must deploy a certified block storage solution as a backend for Cinder.

Deploy the central controller cluster in a similar way to a typical overcloud deployment. This cluster does not require any Compute nodes, so you can set the Compute count to 0 to override the default of 1. The central controller has particular storage and Oslo configuration requirements. Use the following procedure to address these requirements.

Procedure

The following procedure outlines the steps for the initial deployment of the central location.

Note

The following steps detail the deployment commands and environment files associated with an example DCN deployment without glance multistore. These steps do not include unrelated, but necessary, aspects of configuration, such as networking.

  1. In the home directory, create directories for each stack that you plan to deploy.

    mkdir /home/stack/central
    mkdir /home/stack/dcn0
    mkdir /home/stack/dcn1
  2. Create a file called central/overrides.yaml with the following settings:

    parameter_defaults:
      NtpServer:
        - clock.redhat.com
        - clock2.redhat.com
      ControllerCount: 3
      ComputeCount: 0
      OvercloudControlFlavor: baremetal
      OvercloudComputeFlavor: baremetal
      ControllerSchedulerHints:
        'capabilities:node': '0-controller-%index%'
      GlanceBackend: swift
    • ControllerCount: 3 specifies that three Controller nodes are deployed. These nodes use swift for glance and lvm for cinder, and they host the control plane services for the edge compute nodes.
    • ComputeCount: 0 is an optional parameter to prevent Compute nodes from being deployed with the central Controller nodes.
    • GlanceBackend: swift uses Object Storage (swift) as the Image Service (glance) back end.

      The resulting configuration interacts with the distributed compute nodes (DCNs) in the following ways:

    • The Image service on the DCN creates a cached copy of the image it receives from the central Object Storage back end. The Image service uses HTTP to copy the image from Object Storage to the local disk cache.

      Note

      The central Controller node must be able to connect to the distributed compute node (DCN) site. The central Controller node can use a routed layer 3 connection.

  3. Generate roles for the central location using roles appropriate for your environment:

    openstack overcloud roles generate Controller \
    -o ~/central/control_plane_roles.yaml
  4. Generate an environment file ~/central/central-images-env.yaml:

    openstack tripleo container image prepare \
    -e containers.yaml \
    --output-env-file ~/central/central-images-env.yaml
  5. Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match:

    cat > /home/stack/central/site-name.yaml << EOF
    parameter_defaults:
        NovaComputeAvailabilityZone: central
        ControllerExtraConfig:
            nova::availability_zone::default_schedule_zone: central
        NovaCrossAZAttach: false
        CinderStorageAvailabilityZone: central
    EOF
  6. Deploy the central Controller nodes. For example, you can use a deploy.sh file with the following contents:

    #!/bin/bash
    
    source ~/stackrc
    time openstack overcloud deploy \
    --stack central \
    --templates /usr/share/openstack-tripleo-heat-templates/ \
    -e ~/central/central-images-env.yaml \
    -e ~/central/overrides.yaml \
    -e ~/central/site-name.yaml
Note

You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.
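
For example, the deploy command for the central stack might add networking arguments similar to the following. This is a sketch only, and it assumes a custom network_data.yaml file and a network environment file that you create for your spine and leaf topology:

time openstack overcloud deploy \
--stack central \
--templates /usr/share/openstack-tripleo-heat-templates/ \
-n ~/central/network_data.yaml \
-e ~/central/network-environment.yaml \
-e ~/central/central-images-env.yaml \
-e ~/central/overrides.yaml \
-e ~/central/site-name.yaml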

3.12. Deploying edge nodes without storage

You can deploy edge compute nodes that will use the central location as the control plane. This procedure shows how to add a new DCN stack to your deployment and reuse the configuration from the existing heat stack to create new environment files. The first heat stack deploys an overcloud within a centralized datacenter. Create additional heat stacks to deploy Compute nodes to a remote location.

3.12.1. Configuring the distributed compute node environment files

Procedure

  1. Generate the configuration files that the DCN sites require:

    openstack overcloud export \
    --config-download-dir /var/lib/mistral/central \
    --stack central --output-file ~/dcn-common/central-export.yaml
Important

This procedure creates a new central-export.yaml environment file and uses the passwords in the plan-environment.yaml file from the overcloud. The central-export.yaml file contains sensitive security data. To improve security, you can remove the file when you no longer require it.

3.12.2. Deploying the Compute nodes to the DCN site

This procedure uses the DistributedCompute role to deploy Compute nodes to an availability zone (AZ) named dcn0. This role is used specifically for distributed compute nodes.

Procedure

  1. Review the overrides for the distributed compute node (DCN) site in the dcn0/overrides.yaml file:

    parameter_defaults:
      DistributedComputeCount: 3
      DistributedComputeFlavor: baremetal
      DistributedComputeSchedulerHints:
        'capabilities:node': '0-compute-%index%'
      NovaCrossAZAttach: false
  2. Generate roles for the edge location using roles appropriate for your environment:

    openstack overcloud roles generate DistributedCompute \
    -o ~/dcn0/roles_data.yaml
  3. Create a new file called site-name.yaml in the ~/dcn0 directory with the following contents:

    resource_registry:
      OS::TripleO::Services::NovaAZConfig: /usr/share/openstack-tripleo-heat-templates/deployment/nova/nova-az-config.yaml
    parameter_defaults:
      NovaComputeAvailabilityZone: dcn0
      RootStackName: dcn0
  4. Retrieve the container images for the DCN Site:

    openstack tripleo container image prepare \
    --environment-directory dcn0 \
    -r ~/dcn0/roles_data.yaml \
    -e ~/dcn-common/central-export.yaml \
    -e ~/containers-prepare-parameter.yaml \
    --output-env-file ~/dcn0/dcn0-images-env.yaml
  5. Run the deploy.sh deployment script for dcn0:

    #!/bin/bash
    STACK=dcn0
    source ~/stackrc
    if [[ ! -e ~/dcn0/roles_data.yaml ]]; then
        openstack overcloud roles generate DistributedCompute -o ~/dcn0/roles_data.yaml
    fi
    time openstack overcloud deploy \
         --stack $STACK \
         --templates /usr/share/openstack-tripleo-heat-templates/ \
         -r roles_data.yaml \
         -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-volume-active-active.yaml \
         -e ~/dcn-common/central-export.yaml \
         -e ~/dcn0/dcn0-images-env.yaml \
         -e site-name.yaml \
         -e overrides.yaml
Note

You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.