Chapter 16. Load Balancing-as-a-Service (LBaaS) with Octavia

The OpenStack Load-balancing service (Octavia) provides a Load Balancing-as-a-Service (LBaaS) version 2 implementation for Red Hat OpenStack Platform director-based installations. This section describes how to enable Octavia and assumes that the Octavia services are hosted on the same nodes as the Networking API server. By default, the Load-balancing services run on the Controller nodes.

Note

Red Hat does not support a migration path from Neutron-LBaaS to Octavia. However, there are some unsupported open source tools that are available. For more information, see https://github.com/nmagnezi/nlbaas2octavia-lb-replicator/tree/stable_1.

Note

LBaaSv2 with Octavia does not currently support plugins. If you use commercial OpenStack load-balancing solutions, you must continue to use the LBaaSv2 API. See Chapter 15, Configure Load Balancing-as-a-Service with the Networking LBaaSv2 API for details.

16.1. Overview of Octavia

Octavia uses a set of instances, called amphorae, that run on Compute nodes, and communicates with the amphorae over a load-balancing management network (lb-mgmt-net).

Octavia includes the following:

API Controller (octavia_api container)
Communicates with the controller worker for configuration updates and to deploy, monitor, or remove amphora instances.
Controller Worker (octavia_worker container)
Sends configuration and configuration updates to amphorae over the load-balancing management network.
Health Manager
Monitors the health of individual amphorae and handles failover events if amphorae fail unexpectedly.
Housekeeping Manager
Cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation.
Loadbalancer
The top-level API object that represents the load-balancing entity. The VIP address is allocated when the load balancer is created. When you create the load balancer, an amphora instance is booted on a Compute node.
Amphora
The instance that does the load balancing. Amphorae are typically instances running on the Compute nodes and are configured with load balancing parameters according to the listener, pool, health monitor, L7 policies, and members configuration. Amphorae send a periodic heartbeat to the Health Manager.
Listener
The listening endpoint, for example HTTP, of a load-balanced service. A listener might refer to several pools (and switch between them using layer 7 rules).
Pool
A group of members that handle client requests from the load balancer (amphora). A pool is associated with only one listener.
Member
Compute instances that serve traffic behind the load balancer (amphora) in a pool.

The following diagram describes the flow of HTTPS traffic through to a pool member:

[Figure: LBaaS topology]

16.2. Software Requirements

Octavia requires that you configure the following core OpenStack components:

  • Compute (nova)
  • Networking (enable allowed_address_pairs)
  • Image (glance)
  • Identity (keystone)
  • RabbitMQ
  • MySQL
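
The allowed_address_pairs extension is normally loaded by default with the ML2 plug-in. As a quick check (a minimal sketch; allowed-address-pairs is the upstream extension alias), confirm that the Networking service reports it:

    $ openstack extension list --network -c Alias -f value | grep allowed-address-pairs
    allowed-address-pairs

If the alias is missing, review the Networking service configuration before deploying Octavia.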

16.3. Prerequisites for the undercloud

This section assumes that your undercloud is already installed and ready to deploy an overcloud with Octavia enabled. Only container deployments are supported. Octavia runs on your Controller node.

Note

If you want to enable the Octavia service on an existing overcloud deployment, you must prepare the undercloud. Failure to do so results in the overcloud installation being reported as successful yet without Octavia running. To prepare the undercloud, see Transitioning to Containerized Services.

16.3.1. Octavia support matrix

Table 16.1. Octavia support matrix

[Table: Octavia support status for each OSP 13 release (GA, z1, z2, z3, z4, z5) across the following networking scenarios: ML2/OVS with L3 HA; ML2/OVS with DVR; ML2/OVS with L3 HA and a composable network node; ML2/OVS with DVR and a composable network node [a]; ML2/OVN; ODL with DVR]

[a] Network node with OVS, metadata, DHCP, L3, and Octavia (worker, health monitor, housekeeping).

16.3.2. Octavia limitations

Octavia does not support the following:

  • LBaaS v2 with haproxy. (The LBaaS v2 API with third-party vendors is supported.)
  • UDP networking.
  • TLS internal API for IPv6.
  • TLS-terminated listeners.
  • Active-standby load balancer topology.
  • Composable roles. [1]
  • Health monitor ping type.
  • Provider drivers. (The reference driver, amphora, is supported.)

16.4. Planning your Octavia deployment

Red Hat OpenStack Platform provides a workflow task to simplify the post-deployment steps for the Load-balancing service. The tripleo-common/workbooks/octavia_post.yaml workbook is configured from the tripleo-heat-templates/docker/services/octavia/octavia-deployment-config.yaml file.

This Octavia workflow runs a set of Ansible playbooks to provide the following post-deployment steps as the last phase of the overcloud deployment:

  • Configure certificates and keys.
  • Configure the load-balancing management network between the amphorae and the Octavia Controller worker and health manager.
Note

Do not modify the OpenStack heat templates directly. Create a custom environment file (for example, octavia-environment.yaml) to override default parameter values.
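
For example, a minimal custom environment file might look like the following; the override shown is illustrative, not required (OctaviaMgmtPortDevName is described in Section 16.5.2):

parameter_defaults:
    # Override only the defaults that you need to change.
    OctaviaMgmtPortDevName: "o-hm0"

Pass this file to the deployment command with -e, as shown in Section 16.5.3.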

Amphora Image

The director automatically downloads the default amphora image, uploads it to the overcloud Image service, and then configures Octavia to use this amphora image. The director updates this image to the latest amphora image during a stack update or upgrade.

Note

Custom amphora images are not supported.

16.4.1. Configuring Octavia certificates and keys

Octavia containers require secure communication with load balancers and with each other. You can specify your own certificates and keys. Add the appropriate parameters to a custom environment file (such as octavia-environment.yaml).

Configuring user-provided certificates and keys

Set the OctaviaGenerateCerts parameter to false to provide your own certificates and keys to Octavia.

This example shows the parameter settings for certificates and keys that you provide:

parameter_defaults:
    OctaviaCaCert: |
      -----BEGIN CERTIFICATE-----
      MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
      [snip]
      sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
      -----END CERTIFICATE-----

    OctaviaCaKey: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      [snip]
      -----END RSA PRIVATE KEY-----

    OctaviaClientCert: |
      Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 3 (0x3)
      [snip]
      -----END PRIVATE KEY-----

    OctaviaCaKeyPassphrase:
        b28c519a-5880-4e5e-89bf-c042fc75225d

    OctaviaGenerateCerts: false
[rest of file snipped]
Note

The certificates and keys are multi-line values, and all of the lines must be indented to the same level.
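
If you do not already have a certificate authority, you can create one for this purpose with standard openssl commands. The following is a minimal sketch, not a hardened CA setup; the file names, subject, and 365-day lifetime are arbitrary choices, and the resulting PEM output is what you paste into the parameters above:

    # Create an encrypted CA key (the passphrase becomes OctaviaCaKeyPassphrase)
    $ openssl genrsa -aes256 -passout pass:<passphrase> -out ca.key 4096

    # Self-sign a CA certificate (the PEM becomes OctaviaCaCert)
    $ openssl req -new -x509 -key ca.key -passin pass:<passphrase> \
        -subj '/CN=octavia-ca' -days 365 -out ca.crt

You must also issue a client certificate signed by this CA for OctaviaClientCert.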

16.5. Deploying Octavia

To deploy Octavia, determine the appropriate values for the Octavia parameters and set any overriding values in a custom environment file, for example, octavia-environment.yaml.

16.5.1. Configuring the load balancing network

You can use the default values below to configure the load balancing network:

  OctaviaControlNetwork: 'lb-mgmt-net'
  OctaviaControlSubnet: 'lb-mgmt-subnet'
  OctaviaControlSecurityGroup: 'lb-mgmt-sec-group'
  OctaviaControlSubnetCidr: '172.24.0.0/16'
  OctaviaControlSubnetGateway: '172.24.0.1'
  OctaviaControlSubnetPoolStart: '172.24.0.2'
  OctaviaControlSubnetPoolEnd: '172.24.255.254'
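
To override any of these values, set them under parameter_defaults in your custom environment file. For example, to move the management network to a different range (the 192.168.100.0/24 CIDR is an arbitrary illustration):

parameter_defaults:
    OctaviaControlSubnetCidr: '192.168.100.0/24'
    OctaviaControlSubnetGateway: '192.168.100.1'
    OctaviaControlSubnetPoolStart: '192.168.100.2'
    OctaviaControlSubnetPoolEnd: '192.168.100.254'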

16.5.2. Setting the management port

This procedure is optional. If you do not want to use the default settings, override the following parameter in a custom environment file, for example, octavia-environment.yaml:

OctaviaMgmtPortDevName: "o-hm0"

16.5.3. Deploying Octavia with director

Ensure that your environment has access to the Octavia image. For more information, see the registry methods.

To deploy Octavia in the overcloud:

    openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml
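
If you created a custom environment file, include it after the Octavia service environment so that your overrides take effect (the /home/stack path is an assumption; use the location of your file):

    openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \
    -e /home/stack/octavia-environment.yaml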
Note

The director updates the amphora image to the latest amphora image during a stack update or upgrade.

16.6. Configuring an HTTP load balancer

To configure a simple HTTP load balancer:

  1. Create the load balancer on a subnet:

    $ openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
  2. Monitor the state of the load balancer:

    $ openstack loadbalancer show lb1

    When the provisioning_status is ACTIVE and the operating_status is ONLINE, the load balancer is created and running and you can go to the next step.

    Note

    To check load balancer status from the Compute service (nova), use the openstack server list --all | grep amphora command. Creating load balancers can appear to be a slow process (status displaying as PENDING) because load balancers are virtual machines (VMs) and not containers.

  3. Create a listener:

    $ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
  4. Create the listener default pool:

    $ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
  5. Create a health monitor on the pool to test the /healthcheck path:

    $ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
  6. Add load balancer members to the pool:

    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
  7. Create a floating IP address on a public subnet:

    $ openstack floating ip create public
  8. Associate this floating IP with the load balancer VIP port:

    $ openstack floating ip set --port <LOAD_BALANCER_VIP_PORT> <FLOATING_IP>
    Tip

    To locate LOAD_BALANCER_VIP_PORT, run the openstack loadbalancer show lb1 command and use the vip_port_id value.
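
When the floating IP is associated, you can exercise the load balancer end to end. A minimal check, assuming the members serve plain HTTP on port 80, respond on the /healthcheck path configured in step 5, and 10.0.0.213 is your floating IP:

    $ curl -s http://10.0.0.213/healthcheck
    $ curl -s http://10.0.0.213/
    $ curl -s http://10.0.0.213/

Repeated requests should alternate between the two members because the pool uses the ROUND_ROBIN algorithm.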

16.7. Verifying the load balancer

To verify the load balancer:

  1. Use the openstack loadbalancer show command to verify the load balancer settings:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | admin_state_up      | True                                 |
    | created_at          | 2018-04-18T12:28:34                  |
    | description         |                                      |
    | flavor              |                                      |
    | id                  | 788fe121-3dec-4e1b-8360-4020642238b0 |
    | listeners           | 09f28053-fde8-4c78-88b9-0f191d84120e |
    | name                | lb1                                  |
    | operating_status    | ONLINE                               |
    | pools               | 627842b3-eed8-4f5f-9f4a-01a738e64d6a |
    | project_id          | dda678ca5b1241e7ad7bf7eb211a2fd7     |
    | provider            | octavia                              |
    | provisioning_status | ACTIVE                               |
    | updated_at          | 2018-04-18T14:03:09                  |
    | vip_address         | 192.168.0.11                         |
    | vip_network_id      | 9bca13be-f18d-49a5-a83d-9d487827fd16 |
    | vip_port_id         | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 |
    | vip_qos_policy_id   | None                                 |
    | vip_subnet_id       | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 |
    +---------------------+--------------------------------------+
  2. Use the amphora list command to find the UUID of the amphora associated with load balancer lb1 (for a scripted version of this step, see the sketch after this procedure):

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora list | grep <UUID of loadbalancer lb1>
  3. Use the amphora show command with the amphora UUID to view amphora information:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora show 62e41d30-1484-4e50-851c-7ab6e16b88d0
    +-----------------+--------------------------------------+
    | Field           | Value                                |
    +-----------------+--------------------------------------+
    | id              | 62e41d30-1484-4e50-851c-7ab6e16b88d0 |
    | loadbalancer_id | 53a497b3-267d-4abc-968f-94237829f78f |
    | compute_id      | 364efdb9-679c-4af4-a80c-bfcb74fc0563 |
    | lb_network_ip   | 192.168.0.13                         |
    | vrrp_ip         | 10.0.0.11                            |
    | ha_ip           | 10.0.0.10                            |
    | vrrp_port_id    | 74a5c1b4-a414-46b8-9263-6328d34994d4 |
    | ha_port_id      | 3223e987-5dd6-4ec8-9fb8-ee34e63eef3c |
    | cert_expiration | 2020-07-16T12:26:07                  |
    | cert_busy       | False                                |
    | role            | BACKUP                               |
    | status          | ALLOCATED                            |
    | vrrp_interface  | eth1                                 |
    | vrrp_id         | 1                                    |
    | vrrp_priority   | 90                                   |
    | cached_zone     | nova                                 |
    | created_at      | 2018-07-17T12:26:07                  |
    | updated_at      | 2018-07-17T12:30:36                  |
    +-----------------+--------------------------------------+
  4. Use the openstack loadbalancer listener show command to view the listener details:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer listener show listener1
    +---------------------------+------------------------------------------------------------------------+
    | Field                     | Value                                                                  |
    +---------------------------+------------------------------------------------------------------------+
    | admin_state_up            | True                                                                   |
    | connection_limit          | -1                                                                     |
    | created_at                | 2018-04-18T12:51:25                                                    |
    | default_pool_id           | 627842b3-eed8-4f5f-9f4a-01a738e64d6a                                   |
    | default_tls_container_ref | http://10.0.0.101:9311/v1/secrets/7eafeabb-b4a1-4bc4-8098-b6281736bfe2 |
    | description               |                                                                        |
    | id                        | 09f28053-fde8-4c78-88b9-0f191d84120e                                   |
    | insert_headers            | None                                                                   |
    | l7policies                |                                                                        |
    | loadbalancers             | 788fe121-3dec-4e1b-8360-4020642238b0                                   |
    | name                      | listener1                                                              |
    | operating_status          | ONLINE                                                                 |
    | project_id                | dda678ca5b1241e7ad7bf7eb211a2fd7                                       |
    | protocol                  | TERMINATED_HTTPS                                                       |
    | protocol_port             | 443                                                                    |
    | provisioning_status       | ACTIVE                                                                 |
    | sni_container_refs        | []                                                                     |
    | updated_at                | 2018-04-18T14:03:09                                                    |
    +---------------------------+------------------------------------------------------------------------+
  5. Use the openstack loadbalancer pool show command to view the pool and load-balancer members:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer pool show pool1
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | admin_state_up      | True                                 |
    | created_at          | 2018-04-18T12:53:49                  |
    | description         |                                      |
    | healthmonitor_id    |                                      |
    | id                  | 627842b3-eed8-4f5f-9f4a-01a738e64d6a |
    | lb_algorithm        | ROUND_ROBIN                          |
    | listeners           | 09f28053-fde8-4c78-88b9-0f191d84120e |
    | loadbalancers       | 788fe121-3dec-4e1b-8360-4020642238b0 |
    | members             | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 |
    |                     | 40db746d-063e-4620-96ee-943dcd351b37 |
    | name                | pool1                                |
    | operating_status    | ONLINE                               |
    | project_id          | dda678ca5b1241e7ad7bf7eb211a2fd7     |
    | protocol            | HTTP                                 |
    | provisioning_status | ACTIVE                               |
    | session_persistence | None                                 |
    | updated_at          | 2018-04-18T14:03:09                  |
    +---------------------+--------------------------------------+
  6. Use the openstack floating ip list command to verify the floating IP address:

    (overcloud) [stack@undercloud-0 ~]$ openstack floating ip list
    +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
    | ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
    +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
    | 89661971-fa65-4fa6-b639-563967a383e7 | 10.0.0.213          | 192.168.0.11     | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | fe0f3854-fcdc-4433-bc57-3e4568e4d944 | dda678ca5b1241e7ad7bf7eb211a2fd7 |
    +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  7. Verify HTTPS traffic flows across the load balancer:

    (overcloud) [stack@undercloud-0 ~]$ curl -v https://10.0.0.213 --insecure
    * About to connect() to 10.0.0.213 port 443 (#0)
    *   Trying 10.0.0.213...
    * Connected to 10.0.0.213 (10.0.0.213) port 443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * skipping SSL peer certificate verification
    * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    * Server certificate:
    * 	subject: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US
    * 	start date: Apr 18 09:21:45 2018 GMT
    * 	expire date: Apr 18 09:21:45 2019 GMT
    * 	common name: www.example.com
    * 	issuer: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US
    > GET / HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: 10.0.0.213
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Content-Length: 30
    <
    * Connection #0 to host 10.0.0.213 left intact
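
A scripted version of steps 2 and 3, assuming the lb1 name from the earlier procedure:

    $ LB_ID=$(openstack loadbalancer show lb1 -f value -c id)
    $ openstack loadbalancer amphora list | grep $LB_ID
    $ openstack loadbalancer amphora show <amphora UUID from the previous command>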

16.8. Accessing Amphora logs

Amphora is the instance that performs load balancing. You can view Amphora logging information in the systemd journal.

  1. Start the ssh-agent, and add your user’s identity key to the agent:

    [stack@undercloud-0] $ eval `ssh-agent -s`
    [stack@undercloud-0] $ ssh-add
  2. Use SSH to connect to the Amphora instance:

    [stack@undercloud-0] $ ssh -A -t heat-admin@<controller node IP address> ssh cloud-user@<IP address of Amphora in load-balancing management network>
  3. View the systemd journal:

    [cloud-user@amphora-f60af64d-570f-4461-b80a-0f1f8ab0c422 ~] $ sudo journalctl

    Refer to the journalctl man page for information about filtering journal output; example invocations appear after this procedure.

  4. When you are finished viewing the journal, and have closed your connections to the Amphora instance and the Controller node, make sure that you stop the SSH agent:

    [stack@undercloud-0] $ exit
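
As noted in step 3, journalctl supports filtering. A couple of example invocations follow; the amphora-agent unit name is an assumption based on the upstream amphora image, so adjust it to the units present on your image:

    [cloud-user@amphora ~] $ sudo journalctl -u amphora-agent --since "1 hour ago"
    [cloud-user@amphora ~] $ sudo journalctl -p err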

16.9. Updating running amphora instances

16.9.1. Overview

Periodically, you must update a running load balancing instance (amphora) with a newer image. Some events that might cause you to update your amphora instances are:

  • An update or upgrade of Red Hat OpenStack Platform.
  • A security update to your system.
  • A change to a different flavor for the underlying virtual machine.

Updating an amphora image requires failing over the load balancer and then waiting for the load balancer to return to the ACTIVE provisioning status. When the load balancer is active again, it is running the new image.

16.9.2. Prerequisites

New images for amphora are available during an OpenStack update or upgrade.

16.9.3. Update amphora instances with new images

During an OpenStack update or upgrade, director automatically downloads the default amphora image, uploads it to the overcloud Image service (glance), and then configures Octavia to use the new image. When you fail over the load balancer, you force Octavia to start amphora instances that run the new image.

  1. Make sure that you have reviewed the prerequisites before you begin updating amphora.
  2. List the IDs for all the load balancers that you want to update:

    $ openstack loadbalancer list -c id -f value
  3. Failover each load balancer:

    $ openstack loadbalancer failover <loadbalancer_id>
    Note

    When you start failing over the load balancers, monitor system utilization, and as needed, adjust the rate at which you perform failovers. A load balancer failover creates new virtual machines and ports, which might temporarily increase the load on OpenStack Networking.

  4. Monitor the state of the failed over load balancer:

    $ openstack loadbalancer show <loadbalancer_id>

    The update is complete when the load balancer status is ACTIVE.
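
To update many load balancers, you can script the failover and wait for each load balancer to return to ACTIVE before starting the next one, which limits the load on OpenStack Networking. A minimal sketch; the 10-second polling interval is an arbitrary choice:

    $ for lb in $(openstack loadbalancer list -c id -f value); do
          openstack loadbalancer failover $lb
          # Wait for the failover to finish before moving on.
          while [ "$(openstack loadbalancer show $lb -f value -c provisioning_status)" != "ACTIVE" ]; do
              sleep 10
          done
      done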



[1] The only supported configuration is: L3/DHCP, health manager, housekeeping, and the Octavia worker service running on the network node, and the Octavia API running on the Controller node.