
Chapter 2. Monitoring Using the Telemetry Service

For help with the ceilometer command, use:

# ceilometer help

For help with the subcommands, use:

# ceilometer help subcommand
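
For example, to display help for the meter-list subcommand, use:

# ceilometer help meter-list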

2.1. View Existing Alarms

Alarms, originally provided by the ceilometer service, are now handled by the aodh service. To list the configured Telemetry alarms, use:

# aodh alarm list

To list configured meters for a resource, use:

# ceilometer meter-list --query resource=UUID
+--------------------------+------------+-----------+-----------+----------+----------+
| Name                     | Type       | Unit      | Resource  | User ID  | Project  |
+--------------------------+------------+-----------+-----------+----------+----------+
| cpu                      | cumulative | ns        | 5056eda...| b0e500...| f23524...|
| cpu_util                 | gauge      | %         | 5056eda...| b0e500...| f23524...|
| disk.ephemeral.size      | gauge      | GB        | 5056eda...| b0e500...| f23524...|
| disk.read.bytes          | cumulative | B         | 5056eda...| b0e500...| f23524...|
| instance                 | gauge      | instance  | 5056eda...| b0e500...| f23524...|
| instance:m1.tiny         | gauge      | instance  | 5056eda...| b0e500...| f23524...|
| memory                   | gauge      | MB        | 5056eda...| b0e500...| f23524...|
| vcpus                    | gauge      | vcpu      | 5056eda...| b0e500...| f23524...|
+--------------------------+------------+-----------+-----------+----------+----------+

Where UUID is the resource ID for an existing resource (for example, an instance, image, or volume).
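
For example, to look up the UUID of an instance, you can list your servers with the unified openstack client (assuming it is installed; nova list gives similar output on older deployments):

# openstack server list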

2.2. Configure an Alarm

To configure an alarm to activate when a threshold value is crossed, use the aodh alarm create command with the following syntax:

# aodh alarm create --type TYPE --name ALARM_NAME [--description TEXT] --metric METER_NAME --threshold VALUE

Example

You can create an alarm that activates and adds a log entry when the average CPU utilization for an individual instance exceeds 80%. A query is used to isolate the specific instance's ID (94619081-abf5-4f1f-81c7-9cedaa872403) for monitoring purposes:

# aodh alarm create --type gnocchi_aggregation_by_resources_threshold --name cpu_usage_high --metric cpu_util --threshold 80 --aggregation-method sum --resource-type instance --query '{"=": {"id": "94619081-abf5-4f1f-81c7-9cedaa872403"}}' --alarm-action 'log://'

In this example, the notification action is a log message.
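
Other notification actions are possible. As a sketch, the same alarm could instead call a webhook by passing an HTTP URL as the alarm action (the endpoint shown here is hypothetical):

# aodh alarm create --type gnocchi_aggregation_by_resources_threshold --name cpu_usage_high_webhook --metric cpu_util --threshold 80 --aggregation-method sum --resource-type instance --query '{"=": {"id": "94619081-abf5-4f1f-81c7-9cedaa872403"}}' --alarm-action 'http://example.com/alarm-handler'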

To edit an existing threshold alarm, use the aodh alarm update command.

Example

To change the alarm threshold to 75%, use:

# aodh alarm update --name cpu_usage_high --threshold 75
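
To verify the change, display the alarm again, passing the alarm ID shown by aodh alarm list:

# aodh alarm show ALARM_ID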

2.3. Disable or Delete an Alarm

To disable an alarm, use:

# aodh alarm update --name ALARM_NAME --enabled=false

To delete an alarm, use:

# aodh alarm delete --name ALARM_NAME

2.4. View Samples

To list all the samples for a particular meter name, use:

# ceilometer sample-list --meter METER_NAME

To list samples only for a particular resource within a range of time stamps, use:

# ceilometer sample-list --meter METER_NAME --query 'resource_id=INSTANCE_ID;timestamp>=START_TIME;timestamp<=END_TIME'

Where START_TIME and END_TIME are in the form iso-dateThh:mm:ss.

Example

To query an instance for samples taken between 13:10:00 and 14:25:00, use:

# ceilometer sample-list --meter cpu --query 'resource_id=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69;timestamp>=2015-01-12T13:10:00;timestamp<=2015-01-12T14:25:00'
+-------------------+------+------------+---------------+------+---------------------+
| Resource ID       | Name | Type       | Volume        | Unit | Timestamp           |
+-------------------+------+------------+---------------+------+---------------------+
| 5056eda6-8a24-... | cpu  | cumulative | 3.5569e+11    | ns   | 2015-01-12T14:21:44 |
| 5056eda6-8a24-... | cpu  | cumulative | 3.0041e+11    | ns   | 2015-01-12T14:11:45 |
| 5056eda6-8a24-... | cpu  | cumulative | 2.4811e+11    | ns   | 2015-01-12T14:01:54 |
| 5056eda6-8a24-... | cpu  | cumulative | 1.3743e+11    | ns   | 2015-01-12T13:30:54 |
| 5056eda6-8a24-... | cpu  | cumulative | 84710000000.0 | ns   | 2015-01-12T13:20:54 |
| 5056eda6-8a24-... | cpu  | cumulative | 31170000000.0 | ns   | 2015-01-12T13:10:54 |
+-------------------+------+------------+---------------+------+---------------------+

2.5. Create a Sample

You can create samples to send to the Telemetry service; they need not correspond to a previously defined meter. Use the following syntax:

# ceilometer sample-create --resource-id RESOURCE_ID --meter-name METER_NAME --meter-type METER_TYPE --meter-unit METER_UNIT --sample-volume SAMPLE_VOLUME

Where METER_TYPE can be one of:

  • Cumulative — a running total
  • Delta — a change or difference over time
  • Gauge — a discrete value

Example

# ceilometer sample-create -r 5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69 -m On_Time_Mins --meter-type cumulative --meter-unit mins --sample-volume 0
+-------------------+--------------------------------------------+
| Property          | Value                                      |
+-------------------+--------------------------------------------+
| message_id        | 521f138a-9a84-11e4-8058-525400ee874f       |
| name              | On_Time_Mins                               |
| project_id        | f2352499957d4760a00cebd26c910c0f           |
| resource_id       | 5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69       |
| resource_metadata | {}                                         |
| source            | f2352499957d4760a00cebd26c910c0f:openstack |
| timestamp         | 2015-01-12T17:56:23.179729                 |
| type              | cumulative                                 |
| unit              | mins                                       |
| user_id           | b0e5000684a142bd89c4af54381d3722           |
| volume            | 0.0                                        |
+-------------------+--------------------------------------------+

Where volume, normally the value obtained as a result of the sampling action, is in this case the value being created by the command.

Note

Samples are not updated because the moment a sample is created, it is sent to the Telemetry service. Samples are essentially messages, which is why they have a message ID. To create new samples, repeat the sample-create command and update the --sample-volume value.
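
For example, to record a new measurement for the same meter, resend the command with an updated volume (the value 5 below is purely illustrative):

# ceilometer sample-create -r 5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69 -m On_Time_Mins --meter-type cumulative --meter-unit mins --sample-volume 5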

2.6. View Cloud Usage Statistics

OpenStack administrators can use the dashboard to view cloud statistics.

  1. As an admin user in the dashboard, select Admin > System > Resource Usage.
  2. Click one of the following:

    • Daily Report — View a report of daily usage per project. Select the date range and a limit for the number of projects, and click Generate Report; the daily usage report is displayed.
    • Stats — View a graph of metrics grouped by project. Select the values and time period using the drop-down menus; the displayed graph is automatically updated.

The ceilometer command line client can also be used to view cloud usage statistics.

Example

To view all the statistics for the cpu_util meter, use:

# ceilometer statistics --meter cpu_util
+--------+----------------+---------------+-----+-----+------+-------+------+--------
| Period | Period Start   |Period End     | Max | Min | Avg  | Sum   | Count| Dura...
+--------+----------------+---------------+-----+-----+------+-------+------+--------
| 0      | 2015-01-09T14: |2015-01-09T14:2| 9.44| 0.0 | 6.75 | 337.94| 50   | 2792...
+--------+----------------+---------------+-----+-----+------+-------+------+--------

Example

Statistics can be restricted to a specific resource by means of the --query option, and to a specific time range by means of timestamp constraints in the query.

# ceilometer statistics --meter cpu_util --query 'resource_id=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69;timestamp>2015-01-12T13:00:00;timestamp<=2015-01-13T14:00:00'
+--------+----------------+---------------+-----+-----+------+-------+------+--------
| Period | Period Start   |Period End     | Max | Min | Avg  | Sum   | Count| Dura...
+--------+----------------+---------------+-----+-----+------+-------+------+--------
| 0      | 2015-01-12T20:1|2015-01-12T20:1| 9.44| 5.95| 8.90 | 347.10| 39   | 2465...
+--------+----------------+---------------+-----+-----+------+-------+------+--------
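
Example

Statistics can also be grouped into fixed intervals with the --period option (in seconds). For example, to aggregate cpu_util statistics into one-hour periods, use:

# ceilometer statistics --meter cpu_util --period 3600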

2.7. Using the Time-Series-Database-as-a-Service

Time-Series-Database-as-a-Service (gnocchi) is a multi-tenant metrics and resource database. It is designed to store metrics at a very large scale while providing access to metric and resource information to operators and users.

Currently, TSDaaS uses the Identity service for authentication, and Ceph or Object Storage to store data.

TSDaaS provides gnocchi-statsd, a daemon that is compatible with the statsd protocol and can listen to metrics sent over the network. To enable statsd support in TSDaaS, configure the [statsd] section in the configuration file. You must provide a resource ID, which is used as the main generic resource to which all metrics are attached; a user ID and project ID associated with the resource and metrics; and an archive policy name used to create the metrics.
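
The following is a minimal sketch of that section in gnocchi.conf; the option names shown (resource_id, user_id, project_id, archive_policy_name, flush_delay) and the archive policy low are assumptions to be checked against the configuration file shipped with your version:

[statsd]
resource_id = RESOURCE_UUID
user_id = USER_UUID
project_id = PROJECT_UUID
archive_policy_name = low
flush_delay = 10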

All metrics are created dynamically as they are sent to gnocchi-statsd, and are attached with the provided name to the resource ID you configured. For more information on installing and configuring TSDaaS, see the Install Time-Series-Database-as-a-Service chapter in the Installation Reference Guide available at: https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/

2.7.1. Running Time-Series-Database-as-a-Service

Run Time-Series-Database-as-a-Service (TSDaaS) by running the HTTP server and metric daemon:

# gnocchi-api
# gnocchi-metricd

2.7.2. Running As A WSGI Application

You can run TSDaaS through a WSGI service such as mod_wsgi or any other WSGI server. The file gnocchi/rest/app.wsgi provided with TSDaaS allows you to enable Gnocchi as a WSGI application.

The TSDaaS API tier runs using WSGI. This means it can be run using Apache httpd and mod_wsgi, or another HTTP daemon such as uwsgi. You should configure the number of processes and threads according to the number of CPUs you have, usually around 1.5 × number of CPUs. If one server is not enough, you can spawn any number of new API servers to scale Gnocchi out, even on different machines.
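
The following is a sketch of an httpd virtual host for mod_wsgi, assuming the default gnocchi API port 8041, a host with four CPUs (4 × 1.5 ≈ 6 processes), and a dedicated gnocchi system user and group; the WSGIScriptAlias path is a placeholder and must point to the app.wsgi file shipped with your installation:

<VirtualHost *:8041>
    WSGIDaemonProcess gnocchi user=gnocchi group=gnocchi processes=6 threads=6
    WSGIProcessGroup gnocchi
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /path/to/gnocchi/rest/app.wsgi
</VirtualHost>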

2.7.3. metricd Workers

By default, the gnocchi-metricd daemon uses all available CPUs to maximize CPU utilization when computing metric aggregations. You can use the gnocchi status command to query the HTTP API and get the cluster status for metric processing. This command displays the number of metrics to process, known as the processing backlog of gnocchi-metricd. As long as this backlog is not continuously increasing, gnocchi-metricd is able to cope with the amount of metrics being sent. If the number of measures to process is continuously increasing, you need to (perhaps temporarily) increase the number of gnocchi-metricd daemons. You can run any number of metricd daemons on any number of servers.
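
For example, to check the processing backlog, use:

# gnocchi status

If the backlog keeps growing, you can raise the number of worker processes (for example, through a workers option in the [metricd] section of gnocchi.conf, if your version provides one) or start additional gnocchi-metricd daemons on other servers.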

2.7.4. Monitoring the Time-Series-Database-as-a-Service

The /v1/status endpoint of the HTTP API returns various information, such as the number of measures to process (the measures backlog), which you can easily monitor. Making sure that the HTTP server and the gnocchi-metricd daemon are running, and that neither is writing anything alarming in its logs, is a good indication of the overall health of the system.
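
For example, assuming the API listens on its default port 8041 and a valid Identity service token is stored in the TOKEN variable (the host name shown is a placeholder), the endpoint can be polled with curl:

# curl -s -H "X-Auth-Token: $TOKEN" http://controller.example.com:8041/v1/status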

2.7.5. Backing up and Restoring Time-Series-Database-as-a-Service

To be able to recover from an unfortunate event, you must back up both the index and the storage. That means creating a database dump (PostgreSQL or MySQL) and taking snapshots or copies of your data storage (Ceph, Swift, or your file system). The procedure to restore is: restore your index and storage backups, reinstall TSDaaS if necessary, and restart it.
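
As a sketch, assuming a MySQL index database named gnocchi (the database name depends on your deployment), the index backup could be taken with:

# mysqldump gnocchi > /backup/gnocchi-index.sql

For PostgreSQL, use pg_dump instead. Back up the data storage with the mechanism appropriate to the back end, for example Ceph pool snapshots, Object Storage replication, or a plain file-system copy.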