Chapter 3. Capacity Planning

3.1. Capacity and Utilization Collection

The Red Hat CloudForms server can collect and analyze capacity and utilization data from your virtual infrastructure. Use this data to understand the limitations of your current environment and to plan for growth.

For some capacity and utilization data, Red Hat CloudForms calculates and shows trend lines in the charts. Trend lines are created using linear regression over the capacity and utilization data collected during the interval you specify for the chart. The more data you have, the better the predictive value of the trend line.
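
Red Hat CloudForms calculates these trend lines internally. Purely as an illustration of the least-squares idea behind them, the following shell sketch fits a line to a hypothetical samples.txt file containing one "timestamp value" pair per line (for example, data exported from a utilization report); the file name and format are assumptions for this example:

    $ awk '{ n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2 }
           END {
             b = (n*sxy - sx*sy) / (n*sxx - sx*sx)   # slope of the trend line
             a = (sy - b*sx) / n                     # intercept
             printf "trend: value = %g + %g * time\n", a, b
           }' samples.txt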

There are three server roles associated with the collection and metric creation of capacity and utilization.

  • The Capacity & Utilization Coordinator role checks whether it is time to collect data, somewhat like a scheduler. If it is, a job is queued for the capacity and utilization data collector. The coordinator role is required to complete capacity and utilization data collection. If more than one server in a specific zone has this role, only one is active at a time.
  • The Capacity & Utilization Data Collector performs the actual collection of capacity and utilization data. This role has a dedicated worker, and there can be more than one server with this role in a zone.
  • The Capacity & Utilization Data Processor processes all of the data collected, allowing Red Hat CloudForms to create charts. This role has a dedicated worker, and there can be more than one server with this role in a zone.

3.2. Assigning the Capacity and Utilization Server Roles

  1. Navigate to Settings → Configuration, and select the server to configure from Settings → Zone in the left pane of the appliance.
  2. Navigate to the Server Roles list in the Server → Server Control section. From there, set the following Capacity and Utilization roles to ON:

    1. Capacity & Utilization Coordinator
    2. Capacity & Utilization Data Collector
    3. Capacity & Utilization Data Processor
  3. Click Save.

Data collection is enabled immediately. However, the first collection begins 5 minutes after the server is started, and subsequent collections run every 10 minutes. Therefore, the longest you wait for collection to begin after enabling the Capacity & Utilization Collector server role is 10 minutes. The first collection from a particular provider may take a few minutes, since Red Hat CloudForms gathers data points going back one month.
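
If you want to confirm that collection jobs are being picked up, one option is to watch the appliance log for the metrics collector workers. This is only a sketch: the log path shown is the default location on a CloudForms appliance, and the worker name pattern is an assumption that may vary between versions:

    $ grep -i 'metricscollectorworker' /var/www/miq/vmdb/log/evm.log | tail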

Note

In addition to setting the server role, you must also select which clusters and datastores to collect data for. For more information, see General Configuration. You must have super administrator rights to edit these settings.

3.3. Data Collection for Red Hat Enterprise Virtualization

Note

This procedure applies to Red Hat Enterprise Virtualization 3.x and Red Hat Virtualization 4.x.

To collect capacity and utilization data for Red Hat Enterprise Virtualization, you must add Red Hat CloudForms as a user to the RHEV-M database.

Perform this procedure on the PostgreSQL server where the history database is located. Usually, this is the RHEV-M server.

  1. Using SSH, access the RHEV-M database server as the root user:

    $ ssh root@example.postgres.server
  2. Switch to the postgres user:

    # su - postgres
  3. Access the database prompt:

    # psql ovirt_engine_history
  4. Create a new user for Red Hat CloudForms and grant read-only access to the history tables and views. The SELECT statement below only generates the GRANT statements (one per table, view, and sequence in the public schema); run its output to apply the grants, as shown in the sketch after this procedure:

    ovirt_engine_history=# CREATE ROLE cfme with LOGIN ENCRYPTED PASSWORD 'password';
    
    ovirt_engine_history=# SELECT 'GRANT SELECT ON ' || relname || ' TO cfme;' FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE nspname = 'public' AND relkind IN ('r', 'v', 'S');
  5. Exit to the RHEV-M database server prompt:

    ovirt_engine_history=# \q
    # exit
  6. Update the server’s firewall to accept TCP communication on port 5432.

    • For Red Hat Enterprise Virtualization 3.x:

      # iptables -I INPUT -p tcp -m tcp --dport 5432 -j ACCEPT
      # service iptables save
    • For Red Hat Virtualization 4.x:

      # firewall-cmd --add-port=5432/tcp --permanent
      # firewall-cmd --reload
  7. Enable external md5 authentication by appending the following line to /var/lib/pgsql/data/pg_hba.conf:

    host    all      all    0.0.0.0/0     md5
  8. Enable PostgreSQL to listen for remote connections by updating the listen_addresses line in /var/lib/pgsql/data/postgresql.conf:

    listen_addresses  =  '*'
  9. Reload the PostgreSQL configuration.

    • For Red Hat Enterprise Virtualization 3.x:

      # service postgresql reload
    • For Red Hat Virtualization 4.x:

      # systemctl reload postgresql
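
The SELECT statement in step 4 prints the GRANT statements but does not execute them. One way to apply them, shown here only as a sketch, is to re-run the query in tuples-only, unaligned mode and pipe its output back into psql (run as the postgres user; adjust the role or database name if yours differ):

    # psql ovirt_engine_history -t -A -c "SELECT 'GRANT SELECT ON ' || relname || ' TO cfme;' FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE nspname = 'public' AND relkind IN ('r', 'v', 'S');" | psql ovirt_engine_history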

3.4. Adding Database Credentials for Data Collection

After creating the new user, add the user’s credentials to the settings for the provider.

  1. From Compute → Infrastructure → Providers, select an infrastructure provider to update its settings.
  2. Click Configuration, and then Edit Selected Infrastructure Provider.
  3. In the Credentials area, click C & U Database.
  4. Type in the credentials for the new database user you created.
  5. Click Save.
  6. Restart the capacity and utilization data collector.
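
Before relying on the provider refresh, you can optionally confirm that the new database user can log in and read the history views from a remote system. This is only a sketch; the hostname is a placeholder for your RHEV-M (history database) server:

    $ psql -h rhevm.example.com -U cfme ovirt_engine_history -c '\dv'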

3.5. Data Collection for Red Hat Enterprise Linux OpenStack Platform

Before you can collect data from a Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP) provider, you must install Ceilometer and configure it to accept queries from external systems.

These instructions require a Red Hat Enterprise Linux 6.4 @base installation of RHEL-OSP and registration to a satellite that has access to both the RHEL-OSP and RHEL Server Optional channels. Perform all steps on your RHEL-OSP system.

  1. Add the required channels and update your system:

    # rhn-channel --add -c rhel-x86_64-server-6-ost-3 -c rhel-x86_64-server-optional-6
    # yum update -y
    # reboot
  2. Install Ceilometer:

    # yum install *ceilometer*
  3. Install and start the MongoDB store:

    # yum install mongodb-server
    # sed -i '/--smallfiles/!s/OPTIONS=\"/OPTIONS=\"--smallfiles /' /etc/sysconfig/mongod
    # service mongod start
  4. Create the following users and roles:

    # SERVICE_TENANT=$(keystone tenant-list | grep services | awk '{print $2}')
    # ADMIN_ROLE=$(keystone role-list | grep ' admin ' | awk '{print $2}')
    # SERVICE_PASSWORD=servicepass
    # CEILOMETER_USER=$(keystone user-create --name=ceilometer \
    --pass="$SERVICE_PASSWORD" \
    --tenant_id $SERVICE_TENANT \
    --email=ceilometer@example.com | awk '/ id / {print $4}')
    # RESELLER_ROLE=$(keystone role-create --name=ResellerAdmin | awk '/ id / {print $4}')
    # ADMIN_ROLE=$(keystone role-list | awk '/ admin / {print $2}')
    # for role in $RESELLER_ROLE $ADMIN_ROLE ; do
    keystone user-role-add --tenant_id $SERVICE_TENANT \
    --user_id $CEILOMETER_USER --role_id $role
    done
  5. Configure the authtoken in ceilometer.conf:

    # openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_host 127.0.0.1
    # openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_port 35357
    # openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_protocol http
    # openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name services
    # openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
    # openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password $SERVICE_PASSWORD
  6. Configure the user credentials in ceilometer.conf:

    # openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_auth_url http://127.0.0.1:35357/v2.0
    # openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_tenant_name services
    # openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_password $SERVICE_PASSWORD
    # openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_username ceilometer
  7. Start the Ceilometer services:

    # for svc in compute central collector api ; do
      service openstack-ceilometer-$svc start
      done
  8. Register an endpoint with the service catalog. Replace $EXTERNALIFACE with the IP address of your external interface:

    # keystone service-create --name=ceilometer \
    --type=metering --description="Ceilometer Service"
    # CEILOMETER_SERVICE=$(keystone service-list | awk '/ceilometer/ {print $2}')
    # keystone endpoint-create \
    --region RegionOne \
    --service_id $CEILOMETER_SERVICE \
    --publicurl "http://$EXTERNALIFACE:8777/" \
    --adminurl "http://$EXTERNALIFACE:8777/" \
    --internalurl "http://localhost:8777/"
  9. Enable access to Ceilometer from external systems:

    # iptables -I INPUT -p tcp -m multiport --dports 8777 -m comment --comment "001 ceilometer incoming" -j ACCEPT
    # service iptables save
  10. Confirm the status of OpenStack and the Ceilometer services:

    # openstack-status
    # for svc in compute central collector api ; do
      service openstack-ceilometer-$svc status
      done
  11. Verify that Ceilometer is working correctly by authenticating as a user that has running instances, for example, admin. Pipe the sample list for the cpu meter through wc -l to count lines, and confirm that the count changes according to the interval specified in /etc/ceilometer/pipeline.yaml. The default interval is 600 seconds.

    # . ~/keystonerc_admin
    # ceilometer sample-list -m cpu |wc -l
  12. Add the configured OpenStack provider to Red Hat CloudForms. See Adding OpenStack Providers in Managing Providers. After you add the provider, capacity and utilization data for your instances begins to populate within a few minutes.
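
As an optional check from an external system such as the CloudForms appliance, you can confirm that the metering endpoint registered in step 8 is reachable through the firewall rule added in step 9. This sketch assumes the same $EXTERNALIFACE address used above; any HTTP response indicates the port is open, while a timeout or refusal points to a firewall or endpoint problem:

    $ curl -sv http://$EXTERNALIFACE:8777/ -o /dev/null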

3.6. Data Collected

Red Hat CloudForms generates charts from the collected data which can be used to plan your hardware and virtual machine needs. Depending on the type of data, these charts may include lines for averages, maximums, minimums, and trends.

Note

For reporting of daily capacity and utilization data, incomplete days (days with less than 24 hourly data points from midnight to midnight) that are at the beginning or end of the requested interval are excluded. Days with less than 24 hourly data points would be inaccurate and including them would skew trend lines. Therefore, at least one full day of hourly data from midnight to midnight is necessary for displaying the capacity and utilization charts under the Compute → Infrastructure tab.

3.6.1. Capacity and Utilization Charts for Hosts, Clusters, and Virtual Machines

Table 3.1. Capacity and Utilization Charts for Hosts, Clusters, and Virtual Machines

Resource Type      CPU Usage   CPU States   Disk I/O   Memory Usage   Network I/O   Running VMs   Running Hosts
Host               Y           Y            Y          Y              Y             Y             NA
Cluster            Y           Y            Y          Y              Y             Y             Y
Virtual Machine    Y           Y            Y          Y              Y             NA            NA

For procedures to view capacity and utilization charts for hosts, clusters, and virtual machines, see Managing Infrastructure and Inventory.

3.6.2. Capacity and Utilization Charts for Datastores

Charts created include:

Table 3.2. Capacity and Utilization Charts for Datastores

Space by VM Type        Virtual Machines and Hosts
Used Space              Number of VMs by Type
Disk Files Space        Hosts
Snapshot Files Space    Virtual Machines
Memory Files Space
Non-VM Files
Used Disk Space

See Viewing Capacity and Utilization Charts for a Datastore in Managing Infrastructure and Inventory for more information.

3.7. Chart Features

Each chart provides its own set of special features, including zooming in on a chart and shortcut menus.

3.7.1. Zooming into a Chart

  1. Navigate to the chart you want to zoom. If you hover anywhere on the chart, two dashed lines will appear to target a coordinate of the chart.
  2. Click (Click to zoom in) in the lower left corner of the chart to zoom in.
  3. To go back to the regular view, click (Click to zoom out) on the enlarged chart.

3.7.2. Drilling into Chart Data

  1. Navigate to the chart you want to get more detail from.
  2. Hover over a data point to see the coordinates.
  3. Click on a data point to open a shortcut menu for the chart. In this example, we can use the shortcut menu to go to the hourly chart or display the virtual machines that were running at the time the data was captured.

    • If you are viewing the CPU, Disk, Memory, or Network charts, selecting from the Chart option will change all of the charts on the page to the new interval selected.
    • If you are viewing the CPU, Disk, Memory, or Network charts, selecting from the Display option will allow you to drill into the virtual machines or Hosts that were running at the time.
    • If you are viewing the VM or Hosts chart, the Display menu will allow you to view running or stopped virtual machines. The time of the data point will be displayed in addition to the virtual machines that apply. From here, click on a virtual machine to view its details.

3.8. Optimization

Red Hat CloudForms’s optimization functions allow you to view utilization trends, and identify and project bottlenecks in your environment. In addition, you can predict where you have capacity for additional virtual machines.

Note

For reporting of daily optimization data, incomplete days (days with less than 24 hourly data points from midnight to midnight) that are at the beginning or end of the requested interval are excluded. Days with less than 24 hourly data points would be inaccurate and including them would skew trend lines. Therefore, the Optimize page requires at least two full days of daily data, because all the charted values are derived from trend calculations, which require at least two data points.

3.10. Planning Where to Put a New Virtual Machine

You can use the data collected in the VMDB to plan where you can put additional virtual machines. Red Hat CloudForms allows you to use a reference virtual machine as an example to plan on which hosts and clusters you can place a new virtual machine.

  1. Navigate to Optimize → Planning.
  2. From Reference VM Selection, use the dropdowns to select the virtual machine that is most like the one that you want to add.

  3. Select the required VM Options for what you want to base the calculations on.

    From the Source list, select the type of data to use as the source for your projections:

    • Allocation calculates based on the current allocation values of each resource (CPU, memory, or disk space) for the reference virtual machine.
    • Reservation projects based on the current guaranteed value of each resource (CPU speed, CPU count, memory, or disk space), even though that amount may not be allocated to the virtual machine at a given moment.
    • Usage calculates based on the usage history of the reference virtual machine.
    • Manual Input lets you enter your own set of parameters for each resource.

  4. From Target Options / Limits, select if you want to use clusters or hosts as your targets.

    Also, select the limit for how high the projection can go for CPU, memory, and datastore space. If you are targeting hosts, you can also select a filter to limit which hosts can be targets.

  5. From Trend Options, select how far back to use trend data, and select a Time Profile and Time Zone if applicable. Note that Time Profile appears only if the logged-on user has a Time Profile available.
  6. Click Submit.

The Summary tab shows the best clusters or hosts on which to place the virtual machines. The Report tab shows the best fit and statistics on the reference virtual machine in a tabular format. From the Report tab, you can also create a PDF of the report or download the data in txt or csv format.

3.11. Bottlenecks

Red Hat CloudForms can show where bottlenecks occur in your virtual infrastructure. You can view them either on a timeline or as a report which can be downloaded for further analysis.

3.11.1. Prerequisites

  • Bottleneck reports use the same mechanism to gather data as capacity and utilization reports. To enable data collection in Red Hat CloudForms, see Assigning the Capacity and Utilization Server Roles earlier in this chapter.

  • Additionally, configure Red Hat CloudForms to collect capacity and utilization data for clusters and datastores by following this procedure:

    1. Navigate to Settings → Configuration.
    2. Select Region from the Settings tab in the left pane of the appliance.
    3. In the right pane, under the C & U Collection tab, check the boxes for Collect for All Clusters under Clusters and Collect for All Datastores under Datastores, or check the boxes for the specific clusters and datastores you want.

      Note

      Collect for All Clusters must be checked to be able to collect capacity and utilization data from cloud providers such as Red Hat OpenStack or Amazon EC2.

    4. Click Save.
  • For bottleneck reports to work as expected, data collection for capacity and utilization reports must also be enabled for the relevant backend provider. See Data Collection for Red Hat Enterprise Virtualization and Data Collection for Red Hat Enterprise Linux OpenStack Platform earlier in this chapter.

Note

For reporting of daily bottleneck data, incomplete days (days with less than 24 hourly data points from midnight to midnight) that are at the beginning or end of the requested interval are excluded. Days with less than 24 hourly data points would be inaccurate and including them would skew trend lines. Therefore, at least one full day of hourly data from midnight to midnight is necessary for displaying the bottleneck charts under the Optimize tab.

3.11.2. Viewing the Bottleneck Summary

To find out more about bottleneck capacity or utilization, view a bottleneck summary.

  1. Navigate to Optimize → Bottlenecks.
  2. Click Summary if it is not already selected.
  3. Expand the tree on the left side, until you can see the desired providers, clusters, or datastores.
  4. Click on the item.
  5. Use the Options section to change the characteristics of the data.

    • Use Event Groups to select if you want to see bottlenecks based on capacity, utilization or both.
    • Select a Time Zone.

      Data is processed, and a timeline appears. Click on an icon in the timeline to see specific information on the bottleneck.

3.11.3. Viewing a Report of the Bottlenecks Trend

  1. Navigate to Optimize → Bottlenecks.
  2. Click Report.
  3. Expand the tree on the left side, until you can see the desired providers, clusters, or datastores.
  4. Click on the item.
  5. Use the Options section to change the characteristics of the data.

    • Use Event Groups to select if you want to see bottlenecks based on capacity, utilization or both.
    • Select a Time Zone.
  6. Expand the tree on the left side until you can see the enterprise, provider, or datastore for which you want to see the trend.