Chapter 2. Configuring the Overcloud before Creation

This chapter provides the configuration required before running the openstack overcloud deploy command. This includes preparing nodes for provisioning, configuring an IPv6 address on the Undercloud, and creating a network environment file that defines the IPv6 parameters for the Overcloud.

2.1. Initializing the Stack User

Log into the director host as the stack user and run the following command to initialize your director configuration:

$ source ~/stackrc

This sets up environment variables containing authentication details to access the director’s CLI tools.
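To check that the variables are loaded, you can list the OpenStack-related ones; this is only a quick sanity check and the exact set of variables varies by release:

$ env | grep OS_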

2.2. Configuring an IPv6 Address on the Undercloud

The Undercloud requires access to the Overcloud’s Public API, which is on the External network. To accomplish this, the Undercloud host requires an IPv6 address on the interface accessing the External network.

Note

The Provisioning network still requires IPv4 connectivity for every node. The Undercloud and the Overcloud nodes use this network for PXE boot, introspection, and deployment. In addition, the nodes use this network to access DNS and NTP services over IPv4.

Native VLAN or Dedicated Interface

If the Undercloud uses a native VLAN or a dedicated interface attached to the External network, use the ip command to add an IPv6 address to the interface. In this example, the dedicated interface is eth0:

$ sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0

Trunked VLAN Interface

If the Undercloud uses a trunked VLAN on the same interface as the control plane bridge (br-ctlplane) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. For example, our scenario uses 100 for the External network’s VLAN ID:

$ sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal
$ sudo ip l set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100
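To confirm that the new port is attached to the control plane bridge, you can inspect the bridge configuration, for example:

$ sudo ovs-vsctl show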

Confirming the IPv6 Address

Confirm the addition of the IPv6 address with the ip command:

$ ip addr

The IPv6 address appears on the chosen interface.
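For example, to show only the IPv6 addresses on the interface from the earlier examples (substitute vlan100 if you created the VLAN interface):

$ ip -6 addr show dev eth0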

Setting a Persistent IPv6 Address

To make the IPv6 address persistent across reboots, modify or create the appropriate interface file in /etc/sysconfig/network-scripts/ (in our example, either ifcfg-eth0 or ifcfg-vlan100). Include the following lines:

IPV6INIT=yes
IPV6ADDR=2001:db8::1/64
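As an illustration only, a minimal ifcfg-eth0 for the dedicated-interface case might look like the following; adjust the device name and address for your environment, and note that a VLAN interface attached to the control plane bridge requires additional Open vSwitch-specific options:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64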

For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal.

2.3. Setting up your Environment

This section uses a cut-down version of the process from Configuring Basic Overcloud Requirements with the CLI Tools in the Red Hat OpenStack Platform 10 Director Installation and Usage guide.

Use the following workflow to set up your environment:

  • Create a node definition template and register blank nodes in the director.
  • Inspect hardware of all nodes.
  • Manually tag nodes into roles.
  • Create flavors and tag them into roles.

2.3.1. Registering Nodes

A node definition template (instackenv.json) is a JSON format file and contains the hardware and power management details for registering nodes. For example:

{
    "nodes":[
        {
            "mac":[
                "bb:bb:bb:bb:bb:bb"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "cc:cc:cc:cc:cc:cc"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        },
        {
            "mac":[
                "dd:dd:dd:dd:dd:dd"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.207"
        },
        {
            "mac":[
                "ee:ee:ee:ee:ee:ee"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.208"
        },
        {
            "mac":[
                "ff:ff:ff:ff:ff:ff"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.209"
        },
        {
            "mac":[
                "gg:gg:gg:gg:gg:gg"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.210"
        }
    ]
}

Note

The Provisioning network uses IPv4 addresses. The IPMI addresses must also be IPv4 addresses, and they must either be directly attached or reachable through routing over the Provisioning network.

After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json), then import it into the director. Use the following command to accomplish this:

$ openstack baremetal import --json ~/instackenv.json

This imports the template and registers each node from the template into the director.
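If the import fails with a parsing error, check that the file is valid JSON, for example with Python's built-in JSON module:

$ python -m json.tool ~/instackenv.json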

Assign the kernel and ramdisk images to all nodes:

$ openstack baremetal configure boot

The nodes are now registered and configured in the director.
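To verify the registration, list the nodes and their power states, for example:

$ ironic node-list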

2.3.2. Inspecting the Hardware of Nodes

After registering the nodes, inspect the hardware attributes of each node with the following command:

$ openstack baremetal introspection bulk start

Important

Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
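Depending on the version of the introspection client, you might also be able to check the progress of the bulk introspection, for example:

$ openstack baremetal introspection bulk status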

2.3.3. Manually Tagging the Nodes

After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role.

Retrieve a list of your nodes to identify their UUIDs:

$ ironic node-list

To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands:

$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'

Adding the profile:compute and profile:control options tags the nodes into their respective profiles.
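To confirm that a capability was applied, you can display a node's properties, for example (substitute the node UUID):

$ ironic node-show 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0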

Note

As an alternative to manual tagging, use automatic profile tagging to tag larger numbers of nodes based on benchmarking data.

2.4. Configuring the Network

This section examines the network configuration for the Overcloud. This includes isolating services onto specific networks and configuring the Overcloud with the required IPv6 options.

2.4.1. Configuring Interfaces

The Overcloud requires a set of network interface templates. These templates are standard Heat templates in YAML format. The director contains a set of example templates to get you started:

  • /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
  • /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.

Copy one of these template collections to the stack user’s templates directory. For example:

$ sudo cp -r /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans ~/templates/nic-configs

For more information on network interface configuration, see the Red Hat OpenStack Platform 10 Director Installation and Usage guide.

Note

Both interface template collections contain two files for configuring the controller nodes: controller.yaml and controller-v6.yaml. Use the controller-v6.yaml for configuring IPv6.

2.4.2. Configuring the IPv6 Isolated Network

The Overcloud requires a network environment file to configure IPv6. This file is a Heat environment file that describes the Overcloud’s network environment and points to the network interface configuration templates. For this scenario, create an environment file (/home/stack/network-environment.yaml) and include the following sections.

resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller-v6.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

This section registers each of our interface templates as a resource. When creating the Overcloud, the main Heat template collection calls the appropriate OS::TripleO::*::Net::SoftwareConfig resource when configuring the network for each node type. Without these resources, the main Heat template collection uses a default set of network configurations from the main resource registry.

parameter_defaults:
  DnsServers: ["8.8.8.8","8.8.4.4"]
  ControlPlaneSubnetCidr: "24"
  EC2MetadataIp: 192.0.2.1
  ControlPlaneDefaultRoute: 192.0.2.1
  ExternalInterfaceDefaultRoute: 2001:db8::1

  ExternalNetworkVlanID: 100
  ExternalNetCidr: '2001:db8:0:1::/64'
  ExternalAllocationPools: [{'start': '2001:db8:0:1::10', 'end': '2001:db8:0:1:ffff:ffff:ffff:fffe'}]

  InternalApiNetworkVlanID: 201
  InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64'
  InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]

  TenantNetworkVlanID: 202
  TenantNetCidr: 172.17.0.0/24
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]

  StorageNetworkVlanID: 203
  StorageNetCidr: 'fd00:fd00:fd00:4000::/64'
  StorageAllocationPools: [{'start': 'fd00:fd00:fd00:4000:0000:0000:0000:0000', 'end': 'fd00:fd00:fd00:4000:ffff:ffff:ffff:ffff'}]

  StorageMgmtNetworkVlanID: 204
  StorageMgmtNetCidr: 'fd00:fd00:fd00:5000::/64'
  StorageMgmtAllocationPools: [{'start': 'fd00:fd00:fd00:5000:0000:0000:0000:0000', 'end': 'fd00:fd00:fd00:5000:ffff:ffff:ffff:ffff'}]

The parameter_defaults section contains the customization for the environment. Note the following parameters for our IPv6 setup:

ExternalInterfaceDefaultRoute
The IP address for the External interface default route. In this scenario, you use the Undercloud as a default route and specify the IP address created in Section 2.2, “Configuring an IPv6 Address on the Undercloud”.
ExternalNetworkVlanID
The VLAN ID of the External network. If using a VLAN to access the External network on the Undercloud (see Section 2.2, “Configuring an IPv6 Address on the Undercloud”), use the same VLAN ID for this value.
ExternalNetCidr, InternalApiNetCidr, TenantNetCidr, StorageNetCidr, StorageMgmtNetCidr
The IPv6 CIDR prefix for each respective network.
ExternalAllocationPools, InternalApiAllocationPools, TenantAllocationPools, StorageAllocationPools, StorageMgmtAllocationPools
The range of IPv6 addresses to allocate to nodes. This is useful for ensuring no IPv6 conflicts occur.

2.4.3. Using a Hybrid IPv6/IPv4 Configuration

It is possible to configure the Overcloud to use a combination of IPv4 and IPv6 networking for various services. This requires modifying the networks and ports in the Overcloud. For example, this scenario deploys the Storage and Storage Management networks on IPv4 while the other networks use IPv6.

Copy the network isolation initialization file (network-isolation-v6.yaml):

$ sudo cp /usr/share/openstack-tripleo-heat-templates/environments/network-isolation-v6.yaml ~/templates/nic-configs/network-isolation-v6.yaml

Edit the file to map the Storage and Storage Management resources to use the IPv4 versions of templates. For example:

resource_registry:
  OS::TripleO::Network::External: ../network/external_v6.yaml
  OS::TripleO::Network::InternalApi: ../network/internal_api_v6.yaml
  OS::TripleO::Network::StorageMgmt: ../network/storage_mgmt.yaml   # Changed for IPv4
  OS::TripleO::Network::Storage: ../network/storage.yaml            # Changed for IPv4
  OS::TripleO::Network::Tenant: ../network/tenant_v6.yaml

# Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external_v6.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api_v6.yaml
  OS::TripleO::Network::Ports::StorageVipPort: ../network/ports/storage.yaml            # Changed for IPv4
  OS::TripleO::Network::Ports::StorageMgmtVipPort: ../network/ports/storage_mgmt.yaml   # Changed for IPv4
  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip_v6.yaml

# Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external_v6.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api_v6.yaml
  OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml            # Changed for IPv4
  OS::TripleO::Controller::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml   # Changed for IPv4
  OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant_v6.yaml

...

Include the custom network-isolation-v6.yaml instead of the original when running openstack overcloud deploy.
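As an illustration only (the full deployment command is covered in the next chapter), the custom file and the network environment file are passed with the -e option:

$ openstack overcloud deploy --templates \
  -e ~/templates/nic-configs/network-isolation-v6.yaml \
  -e /home/stack/network-environment.yaml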

In addition, modify the parameter_defaults section in /home/stack/network-environment.yaml to define a mix of IPv6 and IPv4 allocations for their respective networks. For example:

parameter_defaults:
  DnsServers: ["8.8.8.8","8.8.4.4"]
  ControlPlaneSubnetCidr: "24"
  EC2MetadataIp: 192.0.2.1
  ControlPlaneDefaultRoute: 192.0.2.1
  ExternalInterfaceDefaultRoute: 2001:db8::1

  ExternalNetworkVlanID: 100
  ExternalNetCidr: '2001:db8:0:1::/64'
  ExternalAllocationPools: [{'start': '2001:db8:0:1::10', 'end': '2001:db8:0:1:ffff:ffff:ffff:fffe'}]

  InternalApiNetworkVlanID: 201
  InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64'
  InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]

  TenantNetworkVlanID: 202
  TenantNetCidr: 172.17.0.0/24
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]

  StorageNetworkVlanID: 203
  StorageNetCidr: 172.18.0.0/24
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]

  StorageMgmtNetworkVlanID: 204
  StorageMgmtNetCidr: 172.19.0.0/24
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]

Important

If using VXLAN networking in your Overcloud, set the Tenant network to use IPv4. IPv6 VXLAN is not supported for Tenant networks.

2.5. Completing Overcloud Configuration

This completes the necessary steps to configure an IPv6-based Overcloud. The next chapter uses the openstack overcloud deploy command to create the Overcloud using the configuration from this chapter.