Chapter 1. Configuring the overcloud to use an external load balancer

In Red Hat OpenStack Platform (RHOSP), the overcloud uses multiple Controller nodes together as a high availability cluster to ensure maximum operational performance for your OpenStack services. The cluster also provides load balancing for OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node.

By default, the overcloud uses an open source tool called HAProxy to manage load balancing. HAProxy load balances traffic to the Controller nodes that run OpenStack services. The haproxy package contains the haproxy daemon that listens to incoming traffic, and includes logging features and sample configurations.

The overcloud also uses the high availability resource manager Pacemaker to control HAProxy as a highly available service. This means that HAProxy runs on each Controller node and distributes traffic according to a set of rules that you define in the HAProxy configuration.
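
The rules that HAProxy applies are listen sections that bind a virtual IP for a service and distribute incoming connections across the Controller nodes. The following is a minimal hypothetical sketch for the Identity service public endpoint; the addresses are placeholders that match the example values used later in this chapter:

    listen keystone_public
      bind 172.16.23.250:5000
      mode tcp
      balance roundrobin
      server controller-0 172.16.23.150:5000 check fall 5 inter 2000 rise 2
      server controller-1 172.16.23.151:5000 check fall 5 inter 2000 rise 2
      server controller-2 172.16.23.152:5000 check fall 5 inter 2000 rise 2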

You can also use an external load balancer to perform this distribution. For example, your organization might use a dedicated hardware-based load balancer to handle traffic distribution to the Controller nodes. To configure an external load balancer and integrate it with the overcloud, you perform the following processes:

  1. Install and configure an external load balancer.
  2. Configure and deploy the overcloud with heat template parameters to integrate the overcloud with the external load balancer. This requires the IP addresses of the load balancer and of the potential nodes.

Before you configure your overcloud to use an external load balancer, ensure that you deploy and run high availability on the overcloud.

1.1. Preparing your environment for an external load balancer

To prepare your environment for an external load balancer, first create a node definition template and register blank nodes with director. Then, inspect the hardware of all nodes and manually tag nodes into profiles.

Use the following workflow to prepare your environment:

  • Create a node definition template and register blank nodes with Red Hat OpenStack Platform director. The node definition template, instackenv.json, is a JSON-format file that contains the hardware and power management details that director requires to register the nodes.
  • Inspect the hardware of all nodes. This ensures that all nodes are in a manageable state.
  • Manually tag nodes into profiles. These profile tags match the nodes to flavors. The flavors are then assigned to a deployment role.

Procedure

  1. Log in to the director host as the stack user and source the director credentials:

    $ source ~/stackrc
  2. Create a node definition template, instackenv.json. Copy the following example and edit it based on your environment:

    {
        "nodes":[
            {
                "mac":[
                    "bb:bb:bb:bb:bb:bb"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.205"
            },
            {
                "mac":[
                    "cc:cc:cc:cc:cc:cc"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.206"
            },
            {
                "mac":[
                    "dd:dd:dd:dd:dd:dd"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.207"
            },
            {
                "mac":[
                    "ee:ee:ee:ee:ee:ee"
                ],
                "cpu":"4",
                "memory":"6144",
                "disk":"40",
                "arch":"x86_64",
                "pm_type":"ipmi",
                "pm_user":"admin",
                "pm_password":"p@55w0rd!",
                "pm_addr":"192.0.2.208"
            }
        ]
    }
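
    Optional: Before you import the file, you can check that the JSON syntax is valid, for example with the Python json.tool module. The interpreter name depends on the Python version that is installed on your undercloud:

    $ python3 -m json.tool ~/instackenv.json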
  3. Save the file to the home directory of the stack user, /home/stack/instackenv.json, then import it into director to register the nodes:

    $ openstack overcloud node import ~/instackenv.json
  4. Assign the kernel and ramdisk images to all nodes:

    $ openstack overcloud node configure
  5. Inspect the hardware attributes of each node:

    $ openstack overcloud node introspect --all-manageable
    Important

    The nodes must be in the manageable state. Ensure that this process runs to completion. This process usually takes 15 minutes for bare metal nodes.

  6. Get the list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  7. Manually tag each node to a specific profile by adding a profile option to the properties/capabilities parameter of each node. For example, to tag three nodes to use a Controller profile and one node to use a Compute profile, run the following commands:

    $ openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities='profile:control,boot_option:local'
    $ openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities='profile:control,boot_option:local'
    $ openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities='profile:control,boot_option:local'
    $ openstack baremetal node set 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 --property capabilities='profile:compute,boot_option:local'

    The profile:compute and profile:control options tag the nodes into each respective profile.
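
    To confirm the assignments, you can list the matched profiles with the following command, assuming that your version of director provides the overcloud profiles commands:

    $ openstack overcloud profiles list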

1.2. Configuring the overcloud network for an external load balancer

To configure the network for the overcloud, isolate the network traffic for specific services, and then configure the network environment file for your local environment. This file is a heat environment file that describes the overcloud network environment, points to the network interface configuration templates, and defines the subnets, VLANs, and IP address ranges for your network.

Procedure

  1. To configure the node interfaces for each role, customize the following network interface templates:

    • To configure a single NIC with VLANs for each role, use the example templates in the following directory:

      /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans
    • To configure bonded NICs for each role, use the example templates in the following directory:

      /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans
  2. Create a network environment file, /home/stack/network-environment.yaml, and edit the content based on your environment. A minimal example excerpt follows this procedure.
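
The following is a minimal hypothetical excerpt of such a network environment file. The parameter names assume the standard TripleO network isolation parameters, and the subnet, VLAN ID, and allocation pool values are placeholders that you must replace with values for your environment:

    parameter_defaults:
      # Subnet and VLAN for the Internal API network.
      InternalApiNetCidr: 172.16.20.0/24
      InternalApiNetworkVlanID: 201
      # Range of addresses that director assigns on this network. The Controller IPs
      # and virtual IPs that you define later must come from pools such as this one.
      InternalApiAllocationPools: [{'start': '172.16.20.10', 'end': '172.16.20.200'}]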

1.3. Creating an external load balancer environment file

To deploy an overcloud with an external load balancer, create a new environment file with the required configuration. The example file in this procedure configures several virtual IPs on the external load balancer before the overcloud deployment starts: one virtual IP on each isolated network and one for the Redis service. Some of the virtual IPs can be identical if the NIC configuration of the overcloud nodes supports it.

Procedure

  • Create an environment file named external-lb.yaml based on the following example, and edit the content for your environment:

    parameter_defaults:
      ControlFixedIPs: [{'ip_address':'192.0.2.250'}]
      PublicVirtualFixedIPs: [{'ip_address':'172.16.23.250'}]
      InternalApiVirtualFixedIPs: [{'ip_address':'172.16.20.250'}]
      StorageVirtualFixedIPs: [{'ip_address':'172.16.21.250'}]
      StorageMgmtVirtualFixedIPs: [{'ip_address':'172.16.19.250'}]
      RedisVirtualFixedIPs: [{'ip_address':'172.16.20.249'}]
      # IP assignments for the overcloud Controller nodes. Ensure that these IPs are from the respective allocation pools defined in the network environment file.
      ControllerIPs:
        external:
        - 172.16.23.150
        - 172.16.23.151
        - 172.16.23.152
        internal_api:
        - 172.16.20.150
        - 172.16.20.151
        - 172.16.20.152
        storage:
        - 172.16.21.150
        - 172.16.21.151
        - 172.16.21.152
        storage_mgmt:
        - 172.16.19.150
        - 172.16.19.151
        - 172.16.19.152
        tenant:
        - 172.16.22.150
        - 172.16.22.151
        - 172.16.22.152
        # CIDRs
        external_cidr: "24"
        internal_api_cidr: "24"
        storage_cidr: "24"
        storage_mgmt_cidr: "24"
        tenant_cidr: "24"
      RedisPassword: p@55w0rd!
      ServiceNetMap:
        NeutronTenantNetwork: tenant
        CeilometerApiNetwork: internal_api
        AodhApiNetwork: internal_api
        GnocchiApiNetwork: internal_api
        MongoDbNetwork: internal_api
        CinderApiNetwork: internal_api
        CinderIscsiNetwork: storage
        GlanceApiNetwork: storage
        GlanceRegistryNetwork: internal_api
        KeystoneAdminApiNetwork: internal_api
        KeystonePublicApiNetwork: internal_api
        NeutronApiNetwork: internal_api
        HeatApiNetwork: internal_api
        NovaApiNetwork: internal_api
        NovaMetadataNetwork: internal_api
        NovaVncProxyNetwork: internal_api
        SwiftMgmtNetwork: storage_mgmt
        SwiftProxyNetwork: storage
        HorizonNetwork: internal_api
        MemcachedNetwork: internal_api
        RabbitMqNetwork: internal_api
        RedisNetwork: internal_api
        MysqlNetwork: internal_api
        CephClusterNetwork: storage_mgmt
        CephPublicNetwork: storage
        ControllerHostnameResolveNetwork: internal_api
        ComputeHostnameResolveNetwork: internal_api
        BlockStorageHostnameResolveNetwork: internal_api
        ObjectStorageHostnameResolveNetwork: internal_api
        CephStorageHostnameResolveNetwork: storage
    Note
    • The parameter_defaults section contains the VIP and IP assignments for each network. These settings must match the IP configuration for each service on the load balancer.
    • The parameter_defaults section also defines an administrative password for the Redis service (RedisPassword) and contains the ServiceNetMap parameter, which maps each OpenStack service to a specific network. The load balancing configuration requires this service remapping.
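
As an illustration of how these parameters line up: ServiceNetMap places MysqlNetwork on internal_api, so the external load balancer must serve the database on the internal_api virtual IP, 172.16.20.250, and forward traffic to the Controller internal_api addresses, 172.16.20.150 to 172.16.20.152. The following hypothetical HAProxy-style definition shows the mapping only; the exact options for each service depend on your load balancer:

    listen mysql
      bind 172.16.20.250:3306
      option tcpka
      timeout client 0
      timeout server 0
      server controller-0 172.16.20.150:3306 check inter 1s
      server controller-1 172.16.20.151:3306 backup check inter 1s
      server controller-2 172.16.20.152:3306 backup check inter 1s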

1.4. Configuring SSL for external load balancing

By default, the overcloud uses unencrypted endpoints for its services. To configure encrypted endpoints for the external load balancer, create additional environment files that enable SSL access to the endpoints, and then install a copy of your SSL certificate and key on your external load balancing server.
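
If you need a certificate only for a test environment, you can generate a self-signed certificate and key with openssl. This is a hypothetical example: the file names and the subject are placeholders, and production environments require a certificate from a trusted certificate authority:

    $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout overcloud-lb.key -out overcloud-lb.crt \
        -subj "/CN=overcloud.example.com"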

Prerequisites

  • If you are using an IP address or domain name to access the public endpoints, choose one of the following environment files to include in your overcloud deployment:

    • To access the public endpoints with a domain name service (DNS), use the file /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml.
    • To access the public endpoints with an IP address, use the file /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml.

Procedure

  1. If you use a self-signed certificate or if the certificate signer is not in the default trust store on the overcloud image, inject the certificate into the overcloud image by copying the inject-trust-anchor.yaml environment file from the heat template collection:

    $ cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/
  2. Open the file in a text editor and copy the contents of the root certificate authority file to the SSLRootCertificate parameter:

    parameter_defaults:
      SSLRootCertificate: |
        -----BEGIN CERTIFICATE-----
        MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
        ...
        sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
        -----END CERTIFICATE-----
    Important

    The certificate authority content requires the same indentation level for all new lines.
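
    To review the contents of the root certificate authority file before you copy it, you can print it with openssl. The file name ca.crt is a placeholder for your certificate authority file:

    $ openssl x509 -in ca.crt -text -noout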

  3. Change the resource URL for the OS::TripleO::NodeTLSCAData resource to an absolute URL:

    resource_registry:
      OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml
  4. Optional: If you use a DNS hostname to access the overcloud through SSL/TLS, create a new environment file ~/templates/cloudname.yaml and define the hostname of the overcloud endpoints in the following parameters:

    parameter_defaults:
      CloudName: overcloud.example.com
      DnsServers: ["10.0.0.1"]

    Replace the following values with actual values in your environment:

    • CloudName: Replace overcloud.example.com with the DNS hostname for the overcloud endpoints.
    • DnsServers: List of the DNS servers that you want to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP for the Public API.
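
    To check that the DNS entry resolves correctly, you can query one of the configured DNS servers with dig. The address that the query returns must match the IP of the Public API, which is 172.16.23.250 in the example in this chapter:

    $ dig +short overcloud.example.com @10.0.0.1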

1.5. Deploying the overcloud with an external load balancer

To deploy an overcloud that uses an external load balancer, run the openstack overcloud deploy command and include the additional environment files and configuration files for the external load balancer.

Procedure

  1. Deploy the overcloud with all the environment and configuration files for an external load balancer:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e ~/network-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml \
      -e ~/external-lb.yaml --control-scale 3 --compute-scale 1 --control-flavor control --compute-flavor compute \
      -e <SSL/TLS endpoint environment file> \
      -e <DNS hostname environment file> \
      -e <root certificate injection environment file> \
      -e <additional_options_if_needed>

    Replace the values in angle brackets <> with the file paths you defined for your environment.

    Important

    You must add the network environment files to the command in the order listed in this example.

    This command includes the following environment files:

    • network-isolation.yaml: Network isolation configuration file.
    • network-environment.yaml: Network configuration file.
    • external-loadbalancer-vip.yaml: External load balancing virtual IP addresses configuration file.
    • external-lb.yaml: External load balancer configuration file. The example command also sets the following options, which you can adjust for your environment:

      • --control-scale 3: Scale the Controller nodes to three.
      • --compute-scale 1: Scale the Compute nodes to one.
      • --control-flavor control: Use a specific flavor for the Controller nodes.
      • --compute-flavor compute: Use a specific flavor for the Compute nodes.
    • SSL/TLS environment files:

      • SSL/TLS endpoint environment file: Environment file that defines how to connect to public endpoints. Use tls-endpoints-public-dns.yaml or tls-endpoints-public-ip.yaml.
      • (Optional) DNS hostname environment file: The environment file to set the DNS hostname.
      • Root certificate injection environment file: The environment file to inject the root certificate authority.

    During the overcloud deployment process, Red Hat OpenStack Platform director provisions your nodes. This process takes some time to complete.

  2. To view the status of the overcloud deployment, enter the following commands:

    $ source ~/stackrc
    $ openstack stack list --nested
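
    After the deployment completes, you can confirm that the overcloud service endpoints point to the virtual IPs on the external load balancer. This assumes that director created the overcloudrc credentials file in the home directory of the stack user:

    $ source ~/overcloudrc
    $ openstack endpoint list -c "Service Name" -c URL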
