Chapter 16. Load Balancing-as-a-Service (LBaaS) with Octavia
The OpenStack Load-balancing service (octavia) provides a Load Balancing-as-a-Service (LBaaS) version 2 implementation for Red Hat OpenStack Platform director-based installations. This section describes how to enable Octavia and assumes that the Octavia services are hosted on the same nodes as the Networking API server. By default, the Load-balancing services run on the Controller nodes.
LBaaSv2 with Octavia does not currently support plugins. If you use commercial OpenStack load-balancing solutions, you must continue to use the LBaaSv2 API. See Chapter 15, Configure Load Balancing-as-a-Service with the Networking LBaaSv2 API for details.
While Octavia as a provider of LBaaS v2 is not supported in any Red Hat OpenStack Platform version, the Octavia project as a standalone component is supported.
16.1. Overview of Octavia
Octavia uses a set of instances, known as amphorae, that run on a Compute node, and communicates with the amphorae over a load-balancing management network (lb-mgmt-net).
Octavia includes the following:
- API Controller (octavia_api container) - Communicates with the controller worker for configuration updates and to deploy, monitor, or remove amphora instances.
- Controller Worker (octavia_worker container) - Sends configuration and configuration updates to amphorae over the LB network.
- Health Manager - Monitors the health of individual amphorae and handles failover events if an amphora fails unexpectedly.
- Housekeeping Manager - Cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation.
- Loadbalancer - The top API object that represents the load-balancing entity. The VIP address is allocated when the loadbalancer is created. When you create the loadbalancer, an amphora instance is booted on a Compute node.
- Amphora - The instance that performs the load balancing. Amphorae are typically instances running on the Compute nodes and are configured with load-balancing parameters according to the listener, pool, health monitor, L7 policies, and members configuration. Amphorae send a periodic heartbeat to the Health Manager.
- Listener - The listening endpoint, for example HTTP, of a load-balanced service. A listener might refer to several pools (and switch between them using layer 7 rules).
- Pool - A group of members that handle client requests from the load balancer (amphora). A pool is associated with only one listener.
- Member - Compute instances that serve traffic behind the load balancer (amphora) in a pool.
The following diagram describes the flow of HTTPS traffic through to a pool member:

16.2. Software Requirements
Octavia requires that you configure the following core OpenStack components:
- Compute (nova)
- Networking (enable allowed_address_pairs)
- Image (glance)
- Key Manager (barbican) if TLS offloading functionality is enabled
- Identity (keystone)
- RabbitMQ
- MySQL
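To check that the allowed_address_pairs extension is enabled in the Networking service, you can query the extension list; this check is a convenience, not part of the documented procedure:

$ openstack extension list --network | grep allowed-address-pairs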
16.3. Prerequisites for the undercloud
This section assumes that your undercloud is already installed and ready to deploy an overcloud with Octavia enabled. Only container deployments are supported. Octavia runs on your Controller node.
16.4. Planning your Octavia deployment
Red Hat OpenStack Platform provides a workflow task to simplify the post-deployment steps for the Load-balancing service. The tripleo-common/workbooks/octavia_post.yaml workbook is configured from the tripleo-heat-templates/docker/services/octavia/octavia-deployment-config.yaml file.
This Octavia workflow runs a set of Ansible playbooks to provide the following post-deployment steps as the last phase of the overcloud deployment:
- Configure certificates and keys.
- Configure the load-balancing management network between the amphorae and the Octavia Controller worker and health manager.
Do not modify the OpenStack heat templates directly. Create a custom environment file (for example, octavia-environment.yaml) to override default parameter values.
Amphora Image
The director automatically downloads the default amphora image, uploads it to the overcloud Image service, and then configures Octavia to use this amphora image. The director updates this image to the latest amphora image during a stack update or upgrade.
Custom amphora images are not supported.
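After deployment, you can confirm that the amphora image is present in the overcloud Image service; the grep pattern here is an assumption, because the exact image name can vary between releases:

$ openstack image list | grep amphora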
16.4.1. Configuring Octavia certificates and keys
Octavia containers require secure communication with load balancers and with each other. You can specify your own certificates and keys. Add the appropriate parameters to a custom environment file (such as octavia-environment.yaml).
Configuring user-provided certificates and keys
You can set the OctaviaGenerateCerts parameter to false to provide your own certificate and keys to Octavia.
This example shows the parameter settings for certificate and keys you provide:
parameter_defaults:
    OctaviaCaCert: |
        -----BEGIN CERTIFICATE-----
        MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
        [snip]
        sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
        -----END CERTIFICATE-----

    OctaviaCaKey: |
        -----BEGIN RSA PRIVATE KEY-----
        Proc-Type: 4,ENCRYPTED
        [snip]
        -----END RSA PRIVATE KEY-----

    OctaviaClientCert: |
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 3 (0x3)
        [snip]
        -----END PRIVATE KEY-----

    OctaviaCaKeyPassphrase: 'b28c519a-5880-4e5e-89bf-c042fc75225d'

    OctaviaGenerateCerts: false

[rest of file snipped]
The certificates and keys are multi-line values, and all of the lines must be indented to the same level.
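If you generate your own CA for these parameters, a minimal openssl sketch might look like the following; the file names, key size, and subject values are assumptions, not part of the documented procedure:

# Generate an encrypted CA private key; openssl prompts for a passphrase,
# which you then supply in OctaviaCaKeyPassphrase.
$ openssl genrsa -aes256 -out octavia-ca.key 4096

# Issue a self-signed CA certificate from that key (valid for ten years here).
$ openssl req -x509 -new -key octavia-ca.key -days 3650 \
      -subj "/C=US/O=Example/CN=octavia-ca" -out octavia-ca.crt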
16.5. Deploying Octavia
To deploy Octavia, determine the appropriate values for the Octavia parameters and set any overriding values in a custom environment file (for example, octavia-environment.yaml).
16.5.1. Configuring the load balancing network
You can use the default values below to configure the load balancing network:
OctaviaControlNetwork:
    description: The name for the neutron network used for the amphora
                 control network
    type: string
    default: 'lb-mgmt-net'

OctaviaControlSubnet:
    description: The name for the neutron subnet used for the amphora
                 control network
    type: string
    default: 'lb-mgmt-subnet'

OctaviaControlSecurityGroup:
    description: The name for the neutron security group used to
                 control access on the amphora control network
    type: string
    default: 'lb-mgmt-sec-group'

OctaviaControlSubnetCidr:
    description: Subnet for amphora control subnet in CIDR form.
    type: string
    default: '172.24.0.0/16'

OctaviaControlSubnetGateway:
    description: IP address for control network gateway
    type: string
    default: '172.24.0.1'

OctaviaControlSubnetPoolStart:
    description: First address in amphora control subnet address
                 pool.
    type: string
    default: '172.24.0.2'

OctaviaControlSubnetPoolEnd:
    description: Last address in amphora control subnet address
                 pool.
    type: string
    default: '172.24.255.254'
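To override any of these defaults, set the parameters in your custom environment file; the addresses in this sketch are illustrative values only:

parameter_defaults:
    OctaviaControlSubnetCidr: '192.168.24.0/24'
    OctaviaControlSubnetGateway: '192.168.24.1'
    OctaviaControlSubnetPoolStart: '192.168.24.2'
    OctaviaControlSubnetPoolEnd: '192.168.24.254'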
16.5.2. Setting the management port
This procedure is optional. If you do not want to use the default settings, override the following parameter in a custom environment file (for example, octavia-environment.yaml):
OctaviaMgmtPortDevName:
    type: string
    default: "o-hm0"
    description: Name of the octavia management network interface used
                 for communication between the octavia worker/health-manager
                 and the amphora machine.
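For example, to rename the interface, a custom environment file might contain the following sketch; the alternative device name is hypothetical:

parameter_defaults:
    OctaviaMgmtPortDevName: 'o-hm1'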
16.5.3. Deploying Octavia with director
Ensure that your environment has access to the Octavia image. For more information, see the registry methods.
To deploy Octavia in the overcloud:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml
The director updates the amphora image to the latest amphora image during a stack update or upgrade.
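If you created a custom environment file, include it with an additional -e option so that your overrides take effect; the path to octavia-environment.yaml shown here is an assumption:

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  -e /home/stack/octavia-environment.yaml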
16.6. Configuring an HTTP load balancer
To configure a simple HTTP load balancer:
Create the load balancer on a subnet:
$ openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
Wait until the load balancer is created. You can optionally use the openstack loadbalancer show lb1 command to see when the load balancer provisioning status is ACTIVE and the operating status is ONLINE. You use the VIP port ID in a later step.
Create a listener:
$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
Create the listener default pool:
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor on the pool to test the “/healthcheck” path:
$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
Add load balancer members to the pool:
$ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
$ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
Create a floating IP address on a public subnet:
$ openstack floating ip create public
Associate this floating IP with the load balancer VIP port:
$ openstack floating ip set --port <load_balancer_vip_port_id> <floating_ip_address>
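To look up the two values for this command, you can query the client directly; this is a convenience sketch, not part of the documented procedure:

$ openstack loadbalancer show lb1 -f value -c vip_port_id
$ openstack floating ip list -f value -c "Floating IP Address"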
16.7. Verifying the load balancer
To verify the load balancer, complete the following steps. The sample output in this section was captured from a deployment with a TLS-terminated HTTPS listener, so some values, such as the listener protocol and port, differ from the HTTP example in Section 16.6.
Use the openstack loadbalancer show command to verify the load balancer settings:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2018-04-18T12:28:34                  |
| description         |                                      |
| flavor              |                                      |
| id                  | 788fe121-3dec-4e1b-8360-4020642238b0 |
| listeners           | 09f28053-fde8-4c78-88b9-0f191d84120e |
| name                | lb1                                  |
| operating_status    | ONLINE                               |
| pools               | 627842b3-eed8-4f5f-9f4a-01a738e64d6a |
| project_id          | dda678ca5b1241e7ad7bf7eb211a2fd7     |
| provider            | octavia                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2018-04-18T14:03:09                  |
| vip_address         | 192.168.0.11                         |
| vip_network_id      | 9bca13be-f18d-49a5-a83d-9d487827fd16 |
| vip_port_id         | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 |
+---------------------+--------------------------------------+
Use the openstack loadbalancer amphora show command to view amphora information:
$ openstack loadbalancer amphora show 62e41d30-1484-4e50-851c-7ab6e16b88d0
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| id              | 62e41d30-1484-4e50-851c-7ab6e16b88d0 |
| loadbalancer_id | 53a497b3-267d-4abc-968f-94237829f78f |
| compute_id      | 364efdb9-679c-4af4-a80c-bfcb74fc0563 |
| lb_network_ip   | 192.168.0.13                         |
| vrrp_ip         | 10.0.0.11                            |
| ha_ip           | 10.0.0.10                            |
| vrrp_port_id    | 74a5c1b4-a414-46b8-9263-6328d34994d4 |
| ha_port_id      | 3223e987-5dd6-4ec8-9fb8-ee34e63eef3c |
| cert_expiration | 2020-07-16T12:26:07                  |
| cert_busy       | False                                |
| role            | BACKUP                               |
| status          | ALLOCATED                            |
| vrrp_interface  | eth1                                 |
| vrrp_id         | 1                                    |
| vrrp_priority   | 90                                   |
| cached_zone     | nova                                 |
| created_at      | 2018-07-17T12:26:07                  |
| updated_at      | 2018-07-17T12:30:36                  |
+-----------------+--------------------------------------+
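If you do not know the amphora ID, you can obtain it with the listing command from the standard client:

$ openstack loadbalancer amphora list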
Use the openstack loadbalancer listener show command to view the listener details:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer listener show listener1
+---------------------------+------------------------------------------------------------------------+
| Field                     | Value                                                                  |
+---------------------------+------------------------------------------------------------------------+
| admin_state_up            | True                                                                   |
| connection_limit          | -1                                                                     |
| created_at                | 2018-04-18T12:51:25                                                    |
| default_pool_id           | 627842b3-eed8-4f5f-9f4a-01a738e64d6a                                   |
| default_tls_container_ref | http://10.0.0.101:9311/v1/secrets/7eafeabb-b4a1-4bc4-8098-b6281736bfe2 |
| description               |                                                                        |
| id                        | 09f28053-fde8-4c78-88b9-0f191d84120e                                   |
| insert_headers            | None                                                                   |
| l7policies                |                                                                        |
| loadbalancers             | 788fe121-3dec-4e1b-8360-4020642238b0                                   |
| name                      | listener1                                                              |
| operating_status          | ONLINE                                                                 |
| project_id                | dda678ca5b1241e7ad7bf7eb211a2fd7                                       |
| protocol                  | TERMINATED_HTTPS                                                       |
| protocol_port             | 443                                                                    |
| provisioning_status       | ACTIVE                                                                 |
| sni_container_refs        | []                                                                     |
| updated_at                | 2018-04-18T14:03:09                                                    |
+---------------------------+------------------------------------------------------------------------+
Use the openstack loadbalancer pool show command to view the pool and load-balancer members:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer pool show pool1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2018-04-18T12:53:49                  |
| description         |                                      |
| healthmonitor_id    |                                      |
| id                  | 627842b3-eed8-4f5f-9f4a-01a738e64d6a |
| lb_algorithm        | ROUND_ROBIN                          |
| listeners           | 09f28053-fde8-4c78-88b9-0f191d84120e |
| loadbalancers       | 788fe121-3dec-4e1b-8360-4020642238b0 |
| members             | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 |
|                     | 40db746d-063e-4620-96ee-943dcd351b37 |
| name                | pool1                                |
| operating_status    | ONLINE                               |
| project_id          | dda678ca5b1241e7ad7bf7eb211a2fd7     |
| protocol            | HTTP                                 |
| provisioning_status | ACTIVE                               |
| session_persistence | None                                 |
| updated_at          | 2018-04-18T14:03:09                  |
+---------------------+--------------------------------------+
Use the openstack floating ip list command to verify the floating IP address:
(overcloud) [stack@undercloud-0 ~]$ openstack floating ip list
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| 89661971-fa65-4fa6-b639-563967a383e7 | 10.0.0.213          | 192.168.0.11     | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | fe0f3854-fcdc-4433-bc57-3e4568e4d944 | dda678ca5b1241e7ad7bf7eb211a2fd7 |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
Verify HTTPS traffic flows across the load balancer:
(overcloud) [stack@undercloud-0 ~]$ curl -v https://10.0.0.213 --insecure
* About to connect() to 10.0.0.213 port 443 (#0)
*   Trying 10.0.0.213...
* Connected to 10.0.0.213 (10.0.0.213) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*   subject: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US
*   start date: Apr 18 09:21:45 2018 GMT
*   expire date: Apr 18 09:21:45 2019 GMT
*   common name: www.example.com
*   issuer: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.213
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 30
<
* Connection #0 to host 10.0.0.213 left intact
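To confirm that requests are distributed across both pool members, you can repeat the request in a loop; this sketch assumes that each member returns content identifying itself:

$ for i in $(seq 4); do curl -s https://10.0.0.213 --insecure; done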
