Chapter 2. Configuring the overcloud for IPv6

This chapter describes the configuration that you must complete before you run the openstack overcloud deploy command. This includes preparing nodes for provisioning, configuring an IPv6 address on the undercloud, and creating a network environment file that defines the IPv6 parameters for the overcloud.

2.1. Configuring an IPv6 address on the undercloud

The undercloud requires access to the overcloud Public API, which is on the External network. To accomplish this, the undercloud host requires an IPv6 address on the interface that connects to the External network.

Prerequisites

  • A successful undercloud installation. For more information, see Installing director on the undercloud.
  • Your network supports IPv6-native VLANs as well as IPv4-native VLANs.
  • An IPv6 address available to the undercloud.

Native VLAN or dedicated interface

If the undercloud uses a native VLAN or a dedicated interface attached to the External network, use the ip command to add an IPv6 address to the interface. In this example, the dedicated interface is eth0:

$ sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0

Trunked VLAN interface

If the undercloud uses a trunked VLAN on the same interface as the control plane bridge (br-ctlplane) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. In this example, the External network VLAN ID is 100:

$ sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal
$ sudo ip link set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100
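
Optionally, verify that the new port is attached to the control plane bridge. The ovs-vsctl show command lists each bridge with its ports, and vlan100 appears under br-ctlplane:

$ sudo ovs-vsctl show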

Confirming the IPv6 address

Confirm the addition of the IPv6 address with the ip command:

$ ip addr

The IPv6 address appears on the chosen interface.
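
To limit the output to a single interface, use the -6 option and specify the device. For example, with the vlan100 interface from the trunked VLAN example:

$ ip -6 addr show dev vlan100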

Setting a persistent IPv6 address

To make the IPv6 address permanent, modify or create the appropriate interface file in /etc/sysconfig/network-scripts/. In this example, include the following lines in either ifcfg-eth0 or ifcfg-vlan100:

IPV6INIT=yes
IPV6ADDR=2001:db8::1/64
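
For the dedicated interface example, a more complete ifcfg-eth0 sketch follows. The ONBOOT and BOOTPROTO values are illustrative assumptions; adjust them to match your environment:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64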

For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal.

2.2. Registering and inspecting nodes for IPv6 deployment

A node definition template (instackenv.json) is a JSON-format file that contains the hardware and power management details for registering nodes. For example:

{
    "nodes":[
        {
            "mac":[
                "bb:bb:bb:bb:bb:bb"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "cc:cc:cc:cc:cc:cc"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        },
        {
            "mac":[
                "dd:dd:dd:dd:dd:dd"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.207"
        },
        {
            "mac":[
                "ee:ee:ee:ee:ee:ee"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.208"
        },
        {
            "mac":[
                "ff:ff:ff:ff:ff:ff"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.209"
        },
        {
            "mac":[
                "gg:gg:gg:gg:gg:gg"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.210"
        }
    ]
}
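
Before you import the file, you can confirm that it parses as valid JSON, which catches errors such as missing commas between node entries. A quick check with the json.tool module from the Python standard library, assuming python3 is available on the undercloud host:

$ python3 -m json.tool ~/instackenv.json > /dev/null && echo "instackenv.json is valid"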

Procedure

  1. After you create the node definition template, save the file to the home directory of the stack user (/home/stack/instackenv.json), then import it into director:

    $ openstack overcloud node import ~/instackenv.json

    This command imports the template and registers each node from the template into director.

  2. Assign the kernel and ramdisk images to all nodes:

    $ openstack overcloud node configure

    The nodes are now registered and configured in director.

Verification steps

  • After registering the nodes, inspect the hardware attribute of each node:

    $ openstack overcloud node introspect --all-manageable

    Important

    The nodes must be in the manageable state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
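
    If a node is not in the manageable state, you can move it there before you run introspection. The UUID in the second command is a placeholder; substitute the UUID of your node:

    $ openstack baremetal node list
    $ openstack baremetal node manage <node_uuid>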

2.3. Tagging nodes for IPv6 deployment

After you register and inspect the hardware of your nodes, tag each node into a specific profile. These profile tags map your nodes to flavors, and in turn the flavors are assigned to a deployment role.

Procedure

  1. Retrieve a list of your nodes to identify their UUIDs:

    $ ironic node-list
  2. Add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands:

    $ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
    $ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
    $ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
    $ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
    $ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
    $ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'

    The addition of the profile:control and profile:compute options tags the nodes into the respective profiles.

    Note

    As an alternative to manual tagging, use automatic profile tagging to tag larger numbers of nodes based on benchmarking data.
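
    To verify the assignments, list the node-to-profile mapping. The openstack overcloud profiles list command shows the current profile for each node:

    $ openstack overcloud profiles list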

2.4. Configuring IPv6 networking

This section describes the network configuration for the overcloud. This includes isolating OpenStack services to use specific networks and configuring the overcloud with IPv6 options.

2.4.1. Configuring composable IPv6 networking

Procedure

  1. Copy the default network_data file:

    $ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.
  2. Edit the local copy of the network_data.yaml file and modify the parameters to suit your IPv6 networking requirements. For example, the External network contains the following default network details:

    - name: External
      vip: true
      name_lower: external
      vlan: 10
      ipv6: true
      ipv6_subnet: '2001:db8:fd00:1000::/64'
      ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]
      gateway_ipv6: '2001:db8:fd00:1000::1'
    • name is the only mandatory value; however, you can also use name_lower to normalize names for readability. For example, changing InternalApi to internal_api.
    • vip: true creates a virtual IP address (VIP) on the new network. The remaining parameters set the defaults for the new network.
    • ipv6 defines whether to enable IPv6.
    • ipv6_subnet, ipv6_allocation_pools, and gateway_ipv6 set the default IPv6 subnet, IP range, and gateway for the network.
  3. Include the custom network_data file with your deployment using the -n option. Without the -n option, the deployment command uses the default network details.
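
    For example, a deployment command that picks up the custom file might begin as follows, where [ADDITIONAL OPTIONS] stands in for the environment files that are covered later in this chapter:

    $ openstack overcloud deploy --templates \
      -n /home/stack/network_data.yaml \
      [ADDITIONAL OPTIONS]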

2.4.2. IPv6 network isolation in the overcloud

The overcloud assigns services to the provisioning network by default. However, director can divide overcloud network traffic into isolated networks. You define these networks in a file, named network_data.yaml by default, and include it in the deployment command.

When services listen on networks that use IPv6 addresses, you must provide parameter defaults to indicate that the service is running on an IPv6 network. The network/service_net_map.yaml file defines the network that each service runs on, and you can override it by declaring parameter defaults for individual ServiceNetMap entries. The following services require parameter defaults to be set in an environment file:

parameter_defaults:
  # Enable IPv6 for Ceph.
  CephIPv6: True
  # Enable IPv6 for Corosync. This is required when Corosync is using an IPv6 IP in the cluster.
  CorosyncIPv6: True
  # Enable IPv6 for MongoDB. This is required when MongoDB is using an IPv6 IP.
  MongoDbIPv6: True
  # Enable various IPv6 features in Nova.
  NovaIPv6: True
  # Enable IPv6 environment for RabbitMQ.
  RabbitIPv6: True
  # Enable IPv6 environment for Memcached.
  MemcachedIPv6: True
  # Enable IPv6 environment for MySQL.
  MysqlIPv6: True
  # Enable IPv6 environment for Manila
  ManilaIPv6: True
  # Enable IPv6 environment for Redis.
  RedisIPv6: True
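
To move an individual service to a different network, override its ServiceNetMap entry in the same environment file. A minimal sketch; the MongodbNetwork key and the internal_api network name are illustrative and must match the entries in your service_net_map.yaml and network_data files:

parameter_defaults:
  ServiceNetMap:
    # Illustrative override: run the MongoDB service on the Internal API network.
    MongodbNetwork: internal_api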

The environments/network-isolation.j2.yaml file in the core heat templates is a Jinja2 file that defines all ports and VIPs for each IPv6 network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry.

2.4.3. Configuring the IPv6 isolated network

The default heat template collection contains a Jinja2-based environment file for the default networking configuration. This file is environments/network-environment.j2.yaml. When rendered with your network_data file, it results in a standard YAML file called network-environment.yaml. Some parts of this file might require overrides.

Procedure

  • Create a custom environment file (/home/stack/network-environment.yaml) with the following details:

    parameter_defaults:
      DnsServers: ["8.8.8.8","8.8.4.4"]
      ControlPlaneDefaultRoute: 192.0.2.1
      ControlPlaneSubnetCidr: "24"

    The parameter_defaults section contains the customization for certain services that remain on IPv4.

2.4.4. IPv6 network interface templates

The overcloud requires a set of network interface templates. Director contains a set of Jinja2-based Heat templates, which render based on your network_data file:

NIC directory: single-nic-vlans
Description: Single NIC (nic1) with control plane and VLANs attached to default Open vSwitch bridge.
Environment file: environments/net-single-nic-with-vlans-v6.j2.yaml

NIC directory: single-nic-linux-bridge-vlans
Description: Single NIC (nic1) with control plane and VLANs attached to default Linux bridge.
Environment file: environments/net-single-nic-linux-bridge-with-vlans-v6.yaml

NIC directory: bond-with-vlans
Description: Control plane attached to nic1. Default Open vSwitch bridge with bonded NIC configuration (nic2 and nic3) and VLANs attached.
Environment file: environments/net-bond-with-vlans-v6.yaml

NIC directory: multiple-nics
Description: Control plane attached to nic1. Assigns each sequential NIC to each network defined in the network_data file. By default, this is Storage to nic2, Storage Management to nic3, Internal API to nic4, Tenant to nic5 on the br-tenant bridge, and External to nic6 on the default Open vSwitch bridge.
Environment file: environments/net-multiple-nics-v6.yaml

2.5. Deploying an IPv6 overcloud

To deploy an overcloud that uses IPv6 networking, you must include additional arguments in the deployment command.

Procedure

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  --ntp-server pool.ntp.org \
  [ADDITIONAL OPTIONS]

The above command uses the following options:

  • --templates - Creates the overcloud from the default heat template collection.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an additional environment file to the overcloud deployment. In this case, it is an environment file that initializes network isolation configuration for IPv6.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml - Adds an additional environment file to the overcloud deployment. In this case, it is an environment file that configures the network interfaces to use a single NIC with VLANs.
  • -e /home/stack/network-environment.yaml - Adds an additional environment file to the overcloud deployment. In this case, it includes overrides related to IPv6.

    Ensure that the network_data.yaml file includes the setting ipv6: true. Previous versions of Red Hat OpenStack Platform director included two default routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane. To use both default routes, ensure that the Controller definition in the roles_data.yaml file contains both networks in the default_route_networks parameter. For example, default_route_networks: ['External', 'ControlPlane'].

  • --ntp-server pool.ntp.org - Sets the NTP server.

The overcloud creation process begins and director provisions the overcloud nodes. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and run:

$ source ~/stackrc
$ heat stack-list --show-nested
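
The heat stack-list command comes from the legacy heat client. On systems where only the unified OpenStack client is available, the equivalent call is:

$ openstack stack list --nested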

Accessing the overcloud

Director generates a script that configures environment variables to help you interact with and authenticate to your overcloud from the director host. Director saves this script, overcloudrc, in the home directory of the stack user. Run the following command to use this file:

$ source ~/overcloudrc
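
To confirm that the credentials work, you can run a simple query against the overcloud Public API. For example, list the overcloud networks with a standard OpenStack client call:

$ openstack network list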

Sourcing the overcloudrc file loads the environment variables that are necessary to interact with your overcloud from the director host CLI. To return to interacting with the director host, run the following command:

$ source ~/stackrc