Chapter 3. Rally

Rally[4] is a benchmarking tool created to answer the underlying question, "How does OpenStack work at scale?" Rally answers this question by automating the processes of OpenStack deployment, cloud verification, benchmarking, and profiling. While Rally offers an assortment of actions to test and validate an OpenStack cloud, this reference environment focuses specifically on using Rally as a benchmarking tool to run specific scenarios against an existing RHEL-OSP cloud and to generate HTML reports based upon the captured results.
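At a high level, a Rally benchmarking run in this reference environment follows three steps: register the existing cloud as a Rally deployment, start a task against a user defined scenario file, and render the captured results as an HTML report. The sketch below summarizes that flow; the scenario file name is a placeholder, and each command is covered in detail later in this chapter.

    # rally deployment create --fromenv --name=<name>
    # rally task start <scenario>.json
    # rally task report <task-uuid> --out output.html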

Note

At the time of this writing, Rally is not supported with RHEL-OSP 7; however, it is a key component for benchmarking the RHEL-OSP 7 environment.

[4] https://wiki.openstack.org/wiki/Rally

3.1. Before you Begin

Ensure the following prerequisites are met:

  • Successful deployment of RHEL-OSP 7 as described in Deploying Red Hat Enterprise Linux OpenStack Platform 7 with RHEL-OSP director 7.1
  • Deployed virtual machine where the Rally binaries and benchmarking results are to reside
  • The Rally virtual machine requires access to the Internal API, Tenant, and Provisioning networks
Note

Please refer to Creating Guests with Virt Manager for more information on creating a virtual machine.

3.2. Enabling the Required Channels for the Rally VM

The following repositories are required for a successful installation of the Rally environment.

Table 3.1. Required Channels - Rally VM

+------------------------------------------------------------------------+----------------------------------+
| Channel                                                                | Repository Name                  |
+------------------------------------------------------------------------+----------------------------------+
| Extra Packages for Enterprise Linux 7 - x86_64                         | epel                             |
| Red Hat Enterprise Linux 7 Server (RPMS)                               | rhel-7-server-rpms               |
| Red Hat Enterprise Linux OpenStack Platform 7.0 (RPMS)                 | rhel-7-server-openstack-7.0-rpms |
| Red Hat Enterprise Linux High Availability (for RHEL 7 Server) (RPMs)  | rhel-ha-for-rhel-7-server-rpms   |
+------------------------------------------------------------------------+----------------------------------+

The following steps are required to subscribe to the appropriate channels listed above.

  1. Register the Rally VM with the Content Delivery Network (CDN), using your Customer Portal user name and password credentials.

    $ sudo subscription-manager register
  2. List the available entitlement pools and locate the pool ID that provides the Red Hat Enterprise Linux OpenStack Platform entitlements.

    $ sudo subscription-manager list --available --all
  3. Use the pool ID located in the previous step to attach the Red Hat Enterprise Linux OpenStack Platform 7 entitlements.

    $ sudo subscription-manager attach --pool=<ID>
  4. Disable all default repositories.

    $ sudo subscription-manager repos --disable=* 
  5. Enable only the required repositories.

    $ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-openstack-7.0-rpms --enable=rhel-ha-for-rhel-7-server-rpms
  6. Install the epel-release package to enable the EPEL repository.

    $ sudo yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
    Note

    The epel repository is only required within the Rally VM.
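With the subscriptions attached and the repositories enabled, a quick check that only the intended repositories are active helps avoid pulling packages from an unexpected source. A minimal verification is sketched below; the list should contain only the three Red Hat repositories enabled above and, once installed, epel.

    $ sudo yum repolist enabled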

3.3. Rally Installation

With the Rally VM set up with the appropriate repositories, the next step is to install Rally. The steps below walk through the installation process.

Within the Rally virtual machine, as the root user:

  1. Create a working directory to clone the rally git repository.

    # mkdir /path/to/myrally
  2. If not already installed, install git using the yum command.

    # yum install git
  3. Access the working directory.

    # cd /path/to/myrally
  4. Clone the Rally repository using git.

    # git clone https://github.com/openstack/rally.git
  5. Run the install_rally.sh script and follow the prompts to install the required dependencies.

    # /path/to/myrally/rally/install_rally.sh
    [ ... Output Abbreviated ...]
    Installing rally-manage script to /usr/bin
    Installing rally script to /usr/bin
    ======================================================================
    Information about your Rally installation:
     * Method: system
     * Database at: /var/lib/rally/database
     * Configuration file at: /etc/rally
    ======================================================================
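Before moving on to configuration, the installation can be sanity checked by invoking the Rally CLI. The sketch below assumes no deployments have been registered yet, so an empty list is the expected result.

    # rally deployment list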

3.4. Rally Configuration

With a successful Rally installation, the next step is to configure the Rally environment by providing Rally access to the overcloud environment. The configuration process is described step by step below.

  1. Export the RHEL-OSP 7 environment variables with the proper credentials. The credentials for this reference environment are as follows:

    # export OS_USERNAME=admin
    # export OS_TENANT_NAME=admin
    # export OS_PASSWORD=
    # export OS_AUTH_URL=http://20.0.0.29:35357/v2.0/
    Note

    The OS_PASSWORD is omitted.

  2. Add the existing RHEL-OSP 7 deployment to the Rally database using --fromenv.

    # rally deployment create --fromenv --name=<name>
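Once the deployment is created, Rally can confirm that it is able to reach the overcloud with the supplied credentials: rally deployment list displays the registered deployments, while rally deployment check verifies that Rally can authenticate against the cloud and lists the services it is able to reach. A minimal verification is sketched below.

    # rally deployment list
    # rally deployment check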

3.4.1. Rally Extending Tenant Network (Optional)

The following steps are optional and only required if the Rally benchmark scenario being run requires direct access to the guest. Direct access refers to ssh capability using the Tenant network instead of floating IPs. When running the NovaServers.boot_server scenario, extending the Tenant network is not required as this specific scenario does not ssh into the guests but simply launches the guest instances.

In order to enable direct access to the launched guests, follow the instructions below. The following steps are all performed on the benchmarking server, which hosts the Rally VM.

  1. If the current RHEL-OSP 7 deployment does not have any floating IPs available, extend the Tenant network to the benchmarking server. Run the following to install the Neutron Open vSwitch agent package within the benchmarking server.

    # yum install openstack-neutron-openvswitch
  2. Install the openstack-selinux package if it does not already reside within the benchmarking server.

    # yum install openstack-selinux
  3. Copy the Neutron configuration files from a RHEL-OSP 7 compute node that has been deployed by the RHEL-OSP 7 undercloud node to the benchmarking server.

    # scp <Compute_Node_IP>:/etc/neutron/* /etc/neutron/
  4. Ensure the copied /etc/neutron directory is owned by the root user and the neutron group.

    # chown -R root:neutron /etc/neutron
  5. Assign an available IP address to the interface that would have been associated with the Tenant network had the benchmarking server been part of the RHEL-OSP 7 cluster. Within this reference environment, nic3 is the network interface used, as noted in Table 1.2, “Network Isolation”. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file and change local_ip to an IP address that resides on the Tenant network. This reference environment’s Tenant network resides in the 172.16.4.x subnet.

    local_ip = 172.16.4.250

  6. The default ovs_neutron_plugin.ini looks for a bridge labeled br-ex. If it does not already exist, create a bridge labeled br-ex that resides within the benchmarking server.

    # brctl addbr br-ex
  7. Restart the openvswitch and neutron-openvswitch-agent services.

    # systemctl restart openvswitch
    # systemctl restart neutron-openvswitch-agent
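After restarting the services, it is worth confirming that the Open vSwitch agent came up cleanly on the benchmarking server before continuing. A minimal verification is sketched below; systemctl status reports whether the agent is active, and ovs-vsctl show lists the Open vSwitch bridges and ports present on the host.

    # systemctl status neutron-openvswitch-agent
    # ovs-vsctl show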

The following steps should be performed on the undercloud server.

  1. As the stack user, source the overcloudrc file to set environment variables for the overcloud.

    # su - stack
    # source overcloudrc
  2. List the Neutron networks.

    # neutron net-list
    +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
    | id                                   | name                                               | subnets                                               |
    +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
    | 0fd1b597-7ed0-45cf-b9e2-a5dfbee80377 | demo_net                                           | f5a4fbf8-1d86-4d64-ad9e-4d012a5fd1b7 172.16.5.0/24    |
    | cee2c56b-2bb1-476d-9905-567af6e86978 | HA network tenant cc5e33d027b847d480957f5e30d04620 | 6dcf6e82-0f13-4418-bca9-9791b11da05a 169.254.192.0/18 |
    | 330c5120-9e36-4049-a036-6997733af443 | ext-net                                            | d9200dac-a99b-41ad-88be-f79bb2f06676 10.19.136.0/21   |
    +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
  3. Capture the demo_net UUID and export a variable labeled netid with that value.

    # export netid=0fd1b597-7ed0-45cf-b9e2-a5dfbee80377
  4. Verify the netid is the value expected.

    # echo $netid
    0fd1b597-7ed0-45cf-b9e2-a5dfbee80377
  5. Export a variable labeled hostid with the hostname value.

    # export hostid=iaas-vms.cloud.lab.eng.bos.redhat.com
    Note

    The benchmarking server’s hostname is iaas-vms.cloud.lab.eng.bos.redhat.com for this reference environment. This server hosts the Rally VM.

  6. Verify the hostid value.

    # echo $hostid
    iaas-vms.cloud.lab.eng.bos.redhat.com
  7. Create a Neutron port labeled rally-port that is bound to the host specified by $hostid and resides within the network associated with $netid.

    # neutron port-create --name rally-port --binding:host_id=$hostid $netid
    Created a new port:
    +-----------------------+-----------------------------------------------------------------------------------+
    | Field                 | Value                                                                             |
    +-----------------------+-----------------------------------------------------------------------------------+
    | admin_state_up        | True                                                                              |
    | allowed_address_pairs |                                                                                   |
    | binding:host_id       | iaas-vms.cloud.lab.eng.bos.redhat.com                                             |
    | binding:profile       | {}                                                                                |
    | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                    |
    | binding:vif_type      | ovs                                                                               |
    | binding:vnic_type     | normal                                                                            |
    | device_id             |                                                                                   |
    | device_owner          |                                                                                   |
    | extra_dhcp_opts       |                                                                                   |
    | fixed_ips             | {"subnet_id": "f5a4fbf8-1d86-4d64-ad9e-4d012a5fd1b7", "ip_address": "172.16.5.9"} |
    | id                    | db6cfb2f-ad04-402c-b5d9-216b8a716841                                              |
    | mac_address           | fa:16:3e:f2:72:9b                                                                 |
    | name                  | rally-port                                                                        |
    | network_id            | 0fd1b597-7ed0-45cf-b9e2-a5dfbee80377                                              |
    | security_groups       | 38807133-c370-47ae-9b04-16a882de1212                                              |
    | status                | DOWN                                                                              |
    | tenant_id             | 67b93212a2a34ccfb2014fdc34f4275e                                                  |
    +-----------------------+-----------------------------------------------------------------------------------+
  8. Within the benchmarking server, modify the Rally VM XML with the following:

    # virsh edit rally
    ...
        <interface type='bridge'>
          <mac address='fa:16:3e:f2:72:9b'/>
          <source bridge='br-int'/>
          <virtualport type='openvswitch'>
            <parameters interfaceid='neutron-port-id'/>
          </virtualport>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
        </interface>
    ...
    Note

    Ensure the neutron-port-id value is replaced with the id, and the mac address parameter with the mac_address value, shown in the previous step when the rally-port was created.

  9. Once the XML file changes have been applied to the Rally guest, shut the guest down and start it back up (a sketch of this sequence follows this procedure).

    Note

    Simply doing a reboot will not apply the changes.

  10. Once all the above steps are completed, log in to the Rally VM and run a sample scenario to verify the environment.

    # rally task start /path/to/rally/samples/tasks/scenarios/keystone/create-user.json
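As referenced in step 9, the XML changes only take effect after a full shutdown and start of the Rally guest from the benchmarking server. A sketch of that sequence is shown below, assuming the guest is named rally as in the virsh edit command above; wait until virsh list --all reports the guest as shut off before starting it again.

    # virsh shutdown rally
    # virsh list --all
    # virsh start rally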

3.5. Benchmarking with Rally

The following section describes the different test case scenarios used to benchmark the existing RHEL-OSP environment. The results captured by these tests are analyzed in Chapter 4, Analyzing RHEL-OSP 7 Benchmark Results with Rally.

Rally runs different types of scenarios based on the information provided by a user defined .json file. While Rally includes many scenarios, this reference environment covers the following scenarios, which focus on end-user usability of the RHEL-OSP cloud.

  • Keystone.create-user (setup validation)
  • Authenticate.validate_nova
  • NovaServers.boot_and_list_server

The user defined .json files associated with these Rally scenarios are provided starting with Appendix D: Rally Validate Nova JSON File.

In order to properly create the user defined .json files, understanding how to assign parameter values is critical. The following example breaks down an existing .json file that runs the NovaServers.boot_and_list_server scenario.

{% set flavor_name = flavor_name or "m1.small" %}
{
    "NovaServers.boot_and_list_server": [
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "nics": [{
                    "net-id": "0fd1b597-7ed0-45cf-b9e2-a5dfbee80377"
                }],
                "image": {
                    "name": "rhel-server7"
                },
                "detailed": true
            },
            "runner": {
                "concurrency": 1,
                "times": 1,
                "type": "constant"
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "quotas": {
                    "neutron": {
                        "network": -1,
                        "port": -1
                    },
                    "nova": {
                        "instances": -1,
                        "cores": -1,
                        "ram": -1
                    }
                }
            }
        }
    ]
}

A .json file consists of the following:

  • A curly bracket {, followed by the name of the Rally scenario, e.g. "NovaServers.boot_and_list_server", followed by a colon : and a bracket [. The syntax is critical when creating a .json file, otherwise the Rally task fails. Each assigned value requires a trailing comma , unless it is the final argument in a section.
  • The next piece of the syntax is the arguments, args.

    • args consists of parameters that are assigned user defined values. The most notable parameters include:

      • nics - The UUID of the shared network to use in order to boot and delete instances.
      • flavor - The size of the guest instances to be created, e.g. m1.small.
      • image - The name of the image file used for creating guest instances.
  • The runner section controls how the workload is generated. Its most notable parameters include:

      • times - The number of iterations to perform.
      • concurrency - The number of iterations allowed to run in parallel at any given time.
  • The context section describes the environment the scenario runs in. Its most notable parameters include:

      • tenants - The number of tenants to be created.
      • users_per_tenant - The number of users to be created within each tenant.
      • quotas - Specification of quotas for the Neutron (network, port) and Nova (instances, cores, ram) resources. Setting a value of -1 allows use of all the resources available within the RHEL-OSP 7 cloud.
  • The closing syntax of a .json file is the ending bracket ] and curly bracket }.

Once the .json file is defined, the next step in benchmarking with Rally is to run the scenario. Using the provided .json file, a user can ensure that their RHEL-OSP 7 environment can properly create guest instances.

# rally task start simple-boot-server.json
[ ... Output Abbreviated ... ]
+------------------------------------------------------------------------------------------+
|                                   Response Times (sec)                                   |
+------------------+--------+--------+--------+--------+--------+--------+---------+-------+
| action           | min    | median | 90%ile | 95%ile | max    | avg    | success | count |
+------------------+--------+--------+--------+--------+--------+--------+---------+-------+
| nova.boot_server | 53.182 | 53.182 | 53.182 | 53.182 | 53.182 | 53.182 | 100.0%  | 1     |
| total            | 53.182 | 53.182 | 53.182 | 53.182 | 53.182 | 53.182 | 100.0%  | 1     |
+------------------+--------+--------+--------+--------+--------+--------+---------+-------+
Load duration: 53.2088990211
Full duration: 78.8793258667

HINTS:
* To plot HTML graphics with this data, run:
        rally task report cd34b115-51e2-4e02-83ee-d5780ecdded4 --out output.html

* To generate a JUnit report, run:
        rally task report cd34b115-51e2-4e02-83ee-d5780ecdded4 --junit --out output.xml

* To get raw JSON output of task results, run:
        rally task results cd34b115-51e2-4e02-83ee-d5780ecdded4
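The hints above reference the UUID of the task that just completed. If that UUID is no longer on screen, it can be recovered at any time with rally task list, and the HTML report can then be regenerated for any completed task, as sketched below.

    # rally task list
    # rally task report <task-uuid> --out output.html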

In Chapter 4, Analyzing RHEL-OSP 7 Benchmark Results with Rally, the focal point is on running different Rally scenarios and analyzing those results.

For more information regarding Rally, please visit http://rally.readthedocs.org/en/latest/overview.html