Red Hat OpenStack Platform Operational Tools
Centralized Logging and Monitoring of an OpenStack Environment
Abstract
Preface
Red Hat OpenStack Platform comes with an optional suite of tools designed to help operators maintain an OpenStack environment. The tools perform the following functions:
- Centralized logging
- Availability monitoring
- Performance monitoring
This document describes the preparation and installation of these tools.
The Red Hat OpenStack Platform Operational Tool Suite is currently a Technology Preview. For more information on Red Hat Technology Previews, see Technology Preview Features Support Scope.
Chapter 1. Architecture
1.1. Centralized Logging
The centralized logging toolchain consists of a number of components, including:
- A Log Collection Agent (Fluentd)
- A Log Relay/Transformer (Fluentd)
- A Data Store (Elasticsearch)
- An API/Presentation Layer (Kibana)
These components and their interactions are laid out in the following diagrams:
Figure 1.1. Centralized logging architecture at a high level

Figure 1.2. Single-node deployment for Red Hat OpenStack Platform

Figure 1.3. HA deployment for Red Hat OpenStack Platform

1.2. Availability Monitoring
The availability monitoring toolchain consists of a number of components, including:
- A Monitoring Agent (Sensu)
- A Monitoring Relay/Proxy (RabbitMQ)
- A Monitoring Controller/Server (Sensu)
- An API/Presentation Layer (Uchiwa)
These components and their interactions are laid out in the following diagrams:
Figure 1.4. Availability monitoring architecture at a high level

Figure 1.5. Single-node deployment for Red Hat OpenStack Platform

Figure 1.6. HA deployment for Red Hat OpenStack Platform

1.3. Performance Monitoring
The performance monitoring toolchain consists of a number of components, including:
- A Collection Agent (collectd)
- A Collection Aggregator/Relay (Graphite)
- A Data Store (whisperdb)
- An API/Presentation Layer (Grafana)
These components and their interactions are laid out in the following diagrams:
Figure 1.7. Performance monitoring architecture at a high level

Figure 1.8. Single-node deployment for Red Hat OpenStack Platform

Figure 1.9. HA deployment for Red Hat OpenStack Platform

Chapter 2. Installing the Centralized Logging Suite
2.1. Installing the Centralized Log Relay/Transformer
Locate a bare metal system that meets the following minimum specifications:
- 8 GB of memory
- Single-socket Xeon class CPU
- 500 GB of disk space
Install Red Hat Enterprise Linux 7 on the system.
Allow the system to access the Operational Tools packages:
Register the system and subscribe it:
# subscription-manager register
# subscription-manager list --consumed
If an OpenStack subscription is not attached automatically, see the documentation for manually attaching subscriptions.
Disable initially enabled repositories and enable only the ones appropriate for the Operational Tools:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-openstack-8-optools-rpms
Note: The base OpenStack repository (rhel-7-server-openstack-8-rpms) must not be enabled on this node. This repository may contain newer versions of certain Operational Tools dependencies which may be incompatible with the Operational Tools packages.
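As a quick sanity check (not part of the original procedure), you can list the enabled repositories and confirm that only the three expected ones remain:
# yum repolist enabled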
Install the Elasticsearch, Fluentd, and Kibana software by running the following command:
# yum install elasticsearch fluentd rubygem-fluent-plugin-elasticsearch kibana httpd
Enable Cross-Origin Resource Sharing (CORS) in Elasticsearch. To do so, edit /etc/elasticsearch/elasticsearch.yml and add the following lines to the end of the file:
http.cors.enabled: true
http.cors.allow-origin: "/.*/"
Note: This will allow the Elasticsearch JavaScript applications to be called from any web page on any web server. To allow CORS from your Kibana server only, use:
http.cors.allow-origin: "http://LOGGING_SERVER"
Replace LOGGING_SERVER with the IP address or the host name of your Kibana server, depending on whether you are going to access Kibana using the IP address or the host name. However, if the Elasticsearch service is only accessible from trusted hosts, it is safe to use "/.*/".
Start the Elasticsearch instance and enable it at boot:
# systemctl start elasticsearch
# systemctl enable elasticsearch
To confirm the Elasticsearch instance is working, run the following command:
# curl http://localhost:9200/
This should give a response similar to the following:
{ "status" : 200, "name" : "elasticsearch.example.com", "cluster_name" : "elasticsearch", "version" : { "number" : "1.5.2", "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512", "build_timestamp" : "2015-02-19T13:05:36Z", "build_snapshot" : false, "lucene_version" : "4.10.3" }, "tagline" : "You Know, for Search" }Configure
Fluentdto accept log data and write it toElasticsearch. Edit/etc/fluentd/fluent.confand replace its content with the following:# In v1 configuration, type and id are @ prefix parameters. # @type and @id are recommended. type and id are still available for backward compatibility <source> @type forward port 4000 bind 0.0.0.0 </source> <match **> @type elasticsearch host localhost port 9200 logstash_format true flush_interval 5 </match>
Start Fluentd and enable it at boot:
# systemctl start fluentd
# systemctl enable fluentd
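At this point you can push a test event through the pipeline. The following is a suggested check (not part of the original procedure) that uses the fluent-cat utility shipped with Fluentd; the test.log tag is arbitrary:
# echo '{"message": "centralized logging test"}' | fluent-cat --host localhost --port 4000 test.log
If everything is wired correctly, the event should appear shortly in a logstash-* index, for example:
# curl 'http://localhost:9200/_search?q=message:centralized&pretty'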
Tip: Check the journal for Fluentd and ensure it has no errors at start:
# journalctl -u fluentd -l -f
Configure Kibana to point to the Elasticsearch instance. Create /etc/httpd/conf.d/kibana3.conf and place the following content inside:

<VirtualHost *:80>
  DocumentRoot /usr/share/kibana
  <Directory /usr/share/kibana>
    Require all granted
    Options -Multiviews
  </Directory>
</VirtualHost>

If you want to restrict access to Kibana and Elasticsearch to authorized users only, for example, because these services are running on a system in an open network, secure the virtual host using HTTP Basic authentication and move Elasticsearch behind a proxy. To do so, follow these steps:

Create (or rewrite) the /etc/httpd/conf.d/kibana3.conf file with the following content:

<VirtualHost *:80>
  DocumentRoot /usr/share/kibana
  <Directory /usr/share/kibana>
    Options -Multiviews
    AuthUserFile /etc/httpd/conf/htpasswd-kibana
    AuthName "Restricted Kibana Server"
    AuthType Basic
    Require valid-user
  </Directory>

  # Proxy for Elasticsearch
  <LocationMatch "^/(_nodes|_aliases|.*/_aliases|_search|.*/_search|_mapping|.*/_mapping)$">
    ProxyPassMatch http://127.0.0.1:9200/$1
    ProxyPassReverse http://127.0.0.1:9200/$1
  </LocationMatch>

  # Proxy for kibana-int/{dashboard,temp}
  <LocationMatch "^/(kibana-int/dashboard/|kibana-int/temp)(.*)$">
    ProxyPassMatch http://127.0.0.1:9200/$1$2
    ProxyPassReverse http://127.0.0.1:9200/$1$2
  </LocationMatch>
</VirtualHost>

Note: You can use a different path for the AuthUserFile option as well as any other description for the AuthName option.

Create a user name and password pair that will be allowed to access Kibana. To do so, run the following command:

# htpasswd -c /etc/httpd/conf/htpasswd-kibana user_name

Note: If you are using a different path for the AuthUserFile option, change the command accordingly.

Replace user_name with a user name of your choice. When prompted, enter the password that will be used with this user name. You will be prompted to re-type the password.
Optionally, create more users with their own passwords by running the following command:
# htpasswd /etc/httpd/conf/htpasswd-kibana another_user_name

Configure Elasticsearch to listen on the localhost interface only. To do so, open the /etc/elasticsearch/elasticsearch.yml file in an editor and append the following option:

network.host: 127.0.0.1
You must also configure Elasticsearch to allow using HTTP Basic authentication data with CORS by appending the following option to /etc/elasticsearch/elasticsearch.yml:

http.cors.allow-credentials: true
Restart Elasticsearch for these changes to take effect:
# systemctl restart elasticsearch
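After the restart, you can confirm that the CORS settings are applied by sending a request with an Origin header and looking for Access-Control response headers. This is a suggested check, not part of the original procedure:
# curl -sI -H "Origin: http://LOGGING_SERVER" http://127.0.0.1:9200/ | grep -i access-control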
Finally, ensure that the Elasticsearch files are downloaded using the proxy, and that the HTTP Basic authentication data is sent by the browser. To do so, edit the /usr/share/kibana/config.js file. Find the following line in this file:

elasticsearch: "http://"+window.location.hostname+":9200",
and change it as follows:
elasticsearch: {server: "http://"+window.location.hostname, withCredentials: true},
Enable Kibana (inside Apache) to connect to Elasticsearch, and then start Apache and enable it at boot:
# setsebool -P httpd_can_network_connect 1
# systemctl start httpd
# systemctl enable httpd
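If you configured HTTP Basic authentication and the Elasticsearch proxy, you can verify that the proxy now requires credentials. This is a suggested check; replace user_name with the account you created:
# curl -i http://localhost/_search
This request should return an HTTP 401 response. Repeating it with credentials should return HTTP 200:
# curl -u user_name -i http://localhost/_search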
Open the firewall on the system to allow connections to Fluentd and httpd:
# firewall-cmd --zone=public --add-port=4000/tcp --permanent
# firewall-cmd --zone=public --add-service=http --permanent
# firewall-cmd --reload
Moreover, if you have not configured HTTP authentication and the Elasticsearch proxy, open the firewall to allow direct connections to Elasticsearch:
# firewall-cmd --zone=public --add-port=9200/tcp --permanent
# firewall-cmd --reload
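You can review the resulting firewall configuration at any time (a suggested check):
# firewall-cmd --zone=public --list-all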
Important: If you do not restrict access to Kibana and Elasticsearch using HTTP authentication, the information provided by Kibana and Elasticsearch is available to anyone without any authentication. To secure the data, ensure that the system or the open TCP ports (80, 4000, and 9200) are only accessible from trusted hosts.
2.2. Installing the Log Collection Agent on All Nodes
To collect the logs from all the systems in the OpenStack environment and send them to your centralized logging server, run the following commands on all the OpenStack systems.
Enable the Operational Tools repository:
# subscription-manager repos --enable=rhel-7-server-openstack-8-optools-rpms
Install fluentd and rubygem-fluent-plugin-add:
# yum install fluentd rubygem-fluent-plugin-add
Configure the Fluentd user so it has permissions to read all the OpenStack log files. Do this by running the following command:
# for user in {keystone,nova,neutron,cinder,glance}; do usermod -a -G $user fluentd; done
Note that you may get an error on some nodes about missing groups. This can be disregarded, as not all the nodes run all the services.
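You can verify the resulting group membership with the id command; the exact list of groups depends on which services run on the node (a suggested check):
# id fluentd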
Configure Fluentd. Make sure /etc/fluentd/fluent.conf looks like the following; be sure to replace LOGGING_SERVER with the host name or IP address of your centralized logging server configured above:

# In v1 configuration, type and id are @ prefix parameters.
# @type and @id are recommended. type and id are still available for backward compatibility

# Nova compute
<source>
  @type tail
  path /var/log/nova/nova-compute.log
  tag nova.compute
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match nova.compute>
  type add
  <pair>
    service nova.compute
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Nova API
<source>
  @type tail
  path /var/log/nova/nova-api.log
  tag nova.api
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match nova.api>
  type add
  <pair>
    service nova.api
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Nova Cert
<source>
  @type tail
  path /var/log/nova/nova-cert.log
  tag nova.cert
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match nova.cert>
  type add
  <pair>
    service nova.cert
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Nova Conductor
<source>
  @type tail
  path /var/log/nova/nova-conductor.log
  tag nova.conductor
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match nova.conductor>
  type add
  <pair>
    service nova.conductor
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Nova Consoleauth
<source>
  @type tail
  path /var/log/nova/nova-consoleauth.log
  tag nova.consoleauth
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match nova.consoleauth>
  type add
  <pair>
    service nova.consoleauth
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Nova Scheduler
<source>
  @type tail
  path /var/log/nova/nova-scheduler.log
  tag nova.scheduler
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match nova.scheduler>
  type add
  <pair>
    service nova.scheduler
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Neutron Openvswitch Agent
<source>
  @type tail
  path /var/log/neutron/openvswitch-agent.log
  tag neutron.openvswitch
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match neutron.openvswitch>
  type add
  <pair>
    service neutron.openvswitch
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Neutron Server
<source>
  @type tail
  path /var/log/neutron/server.log
  tag neutron.server
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match neutron.server>
  type add
  <pair>
    service neutron.server
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Neutron DHCP Agent
<source>
  @type tail
  path /var/log/neutron/dhcp-agent.log
  tag neutron.dhcp
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match neutron.dhcp>
  type add
  <pair>
    service neutron.dhcp
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Neutron L3 Agent
<source>
  @type tail
  path /var/log/neutron/l3-agent.log
  tag neutron.l3
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match neutron.l3>
  type add
  <pair>
    service neutron.l3
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Neutron Metadata Agent
<source>
  @type tail
  path /var/log/neutron/metadata-agent.log
  tag neutron.metadata
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match neutron.metadata>
  type add
  <pair>
    service neutron.metadata
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Keystone
<source>
  @type tail
  path /var/log/keystone/keystone.log
  tag keystone
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match keystone>
  type add
  <pair>
    service keystone
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Glance API
<source>
  @type tail
  path /var/log/glance/api.log
  tag glance.api
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match glance.api>
  type add
  <pair>
    service glance.api
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Glance Registry
<source>
  @type tail
  path /var/log/glance/registry.log
  tag glance.registry
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match glance.registry>
  type add
  <pair>
    service glance.registry
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Cinder API
<source>
  @type tail
  path /var/log/cinder/api.log
  tag cinder.api
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match cinder.api>
  type add
  <pair>
    service cinder.api
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Cinder Scheduler
<source>
  @type tail
  path /var/log/cinder/scheduler.log
  tag cinder.scheduler
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match cinder.scheduler>
  type add
  <pair>
    service cinder.scheduler
    hostname "#{Socket.gethostname}"
  </pair>
</match>

# Cinder Volume
<source>
  @type tail
  path /var/log/cinder/volume.log
  tag cinder.volume
  format multiline
  format_firstline /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  format /(?<time>[^ ]* [^ ]*) (?<pid>[^ ]*) (?<loglevel>[^ ]*) (?<class>[^ ]*) \[(?<context>.*)\] (?<message>.*)/
  time_format %F %T.%L
</source>

<match cinder.volume>
  type add
  <pair>
    service cinder.volume
    hostname "#{Socket.gethostname}"
  </pair>
</match>

<match greped.**>
  @type forward
  heartbeat_type tcp
  <server>
    name LOGGING_SERVER
    host LOGGING_SERVER
    port 4000
  </server>
</match>

Now that Fluentd has been configured, start the Fluentd service and enable it at boot:
# systemctl start fluentd
# systemctl enable fluentd
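To confirm that logs from this node reach the central server, you can query Elasticsearch on the logging server for a record tagged by the add plugin. This is a suggested check, not part of the original procedure; adjust the service value to one that actually runs on the node:
# curl 'http://localhost:9200/_search?q=service:nova.compute&pretty'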
You should now be able to access Kibana running at http://LOGGING_SERVER/index.html#/dashboard/file/logstash.json and see logs start to populate. If you have enabled HTTP Basic authentication in the Kibana configuration, you must enter a valid user name and password to access this page.
By default, the front page of the logging server, http://LOGGING_SERVER/, is a Kibana welcome screen providing technical requirements and additional configuration information. If you want the logs to be available here, replace the default.json file in the Kibana application directory with logstash.json, but first create a backup copy of default.json in case you need this file again in the future:
# mv /usr/share/kibana/app/dashboards/default.json /usr/share/kibana/app/dashboards/default.json.orig
# cp /usr/share/kibana/app/dashboards/logstash.json /usr/share/kibana/app/dashboards/default.json
Chapter 3. Installing the Availability Monitoring Suite
3.1. Installing the Monitoring Relay/Controller
Locate a bare metal system that meets the following minimum specifications:
- 4 GB of memory
- Single-socket Xeon class CPU
- 100 GB of disk space
Install Red Hat Enterprise Linux 7 on the system.
Allow the system to access the Operational Tools packages:
Register the system and subscribe it:
# subscription-manager register
# subscription-manager list --consumed
If an OpenStack subscription is not attached automatically, see the documentation for manually attaching subscriptions.
Disable initially enabled repositories and enable only the ones appropriate for the Operational Tools:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-openstack-8-optools-rpms
Note: The base OpenStack repository (rhel-7-server-openstack-8-rpms) must not be enabled on this node. This repository may contain newer versions of certain Operational Tools dependencies which may be incompatible with the Operational Tools packages.
Open the firewall on the system to allow connections to RabbitMQ and Uchiwa:
# firewall-cmd --zone=public --add-port=5672/tcp --permanent
# firewall-cmd --zone=public --add-port=3000/tcp --permanent
# firewall-cmd --reload
Install the components needed for the monitoring server:
# yum install sensu uchiwa redis rabbitmq-server
Configure RabbitMQ and Redis, which are the backbone services. Start both Redis and RabbitMQ and enable them at boot:
# systemctl start redis
# systemctl enable redis
# systemctl start rabbitmq-server
# systemctl enable rabbitmq-server
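You can confirm that Redis is answering with its command-line client (a suggested check):
# redis-cli ping
PONG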
Configure a new RabbitMQ virtual host for sensu, with a user name and password combination that can access the host:
# rabbitmqctl add_vhost /sensu
# rabbitmqctl add_user sensu sensu
# rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*"
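You can verify the new virtual host and the sensu user's permissions as follows (a suggested check):
# rabbitmqctl list_vhosts
# rabbitmqctl list_permissions -p /sensu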
Now that the base services are running and configured, configure the Sensu monitoring server. Create /etc/sensu/conf.d/rabbitmq.json with the following contents:

{
  "rabbitmq": {
    "port": 5672,
    "host": "localhost",
    "user": "sensu",
    "password": "sensu",
    "vhost": "/sensu"
  }
}

Next, create /etc/sensu/conf.d/redis.json with the following contents:

{
  "redis": {
    "port": 6379,
    "host": "localhost"
  }
}

Finally, create /etc/sensu/conf.d/api.json with the following contents:

{
  "api": {
    "bind": "0.0.0.0",
    "port": 4567,
    "host": "localhost"
  }
}

Start and enable all Sensu services:
# systemctl start sensu-server
# systemctl enable sensu-server
# systemctl start sensu-api
# systemctl enable sensu-api
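To verify that the Sensu API is up, you can query it locally. This is a suggested check; the client list stays empty ([]) until agents register in the next section:
# curl -s http://localhost:4567/clients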
Configure Uchiwa, which is the web interface for Sensu. To do this, edit /etc/uchiwa/uchiwa.json and replace its default contents with the following:

{
  "sensu": [
    {
      "name": "Openstack",
      "host": "localhost",
      "port": 4567
    }
  ],
  "uchiwa": {
    "host": "0.0.0.0",
    "port": 3000,
    "refresh": 5
  }
}

Start and enable the Uchiwa web interface:
# systemctl start uchiwa
# systemctl enable uchiwa
3.2. Installing the Availability Monitoring Agent on All Nodes
To monitor all the systems in the OpenStack environment, run the following commands on all of them.
Enable the Operational Tools repository:
# subscription-manager repos --enable=rhel-7-server-openstack-8-optools-rpms
Install Sensu:
# yum install sensu
Configure the Sensu agent. Edit /etc/sensu/conf.d/rabbitmq.json to have the following content; remember to replace MONITORING_SERVER with the host name or IP address of your monitoring server configured in the previous section:

{
  "rabbitmq": {
    "port": 5672,
    "host": "MONITORING_SERVER",
    "user": "sensu",
    "password": "sensu",
    "vhost": "/sensu"
  }
}

Edit /etc/sensu/conf.d/client.json to include the following content; remember to replace FQDN with the host name of the machine, and ADDRESS with the public IP address of the machine:

{
  "client": {
    "name": "FQDN",
    "address": "ADDRESS",
    "subscriptions": [ "all" ]
  }
}

Finally, start and enable the Sensu client:
# systemctl start sensu-client
# systemctl enable sensu-client
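Shortly after the client starts, it should appear in the Sensu API. A suggested check, run on the monitoring server:
# curl -s http://localhost:4567/clients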
You should now be able to access Uchiwa running at http://MONITORING_SERVER:3000.
Chapter 4. Installing the Performance Monitoring Suite
4.1. Installing the Collection Aggregator/Relay
Locate a bare metal system that meets the following minimum specifications:
- 4 GB of memory
- Single-socket Xeon class CPU
- 500 GB of disk space
Install Red Hat Enterprise Linux 7 on the system.
Allow the system to access the Operational Tools packages:
Register the system and subscribe it:
# subscription-manager register
# subscription-manager list --consumed
If an OpenStack subscription is not attached automatically, see the documentation for manually attaching subscriptions.
Disable initially enabled repositories and enable only the ones appropriate for the Operational Tools:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-openstack-8-optools-rpms
Note: The base OpenStack repository (rhel-7-server-openstack-8-rpms) must not be enabled on this node. This repository may contain newer versions of certain Operational Tools dependencies which may be incompatible with the Operational Tools packages.
Open the firewall on the system to allow connections to Graphite and Grafana:
# firewall-cmd --zone=public --add-port=2003/tcp --permanent
# firewall-cmd --zone=public --add-port=3030/tcp --permanent
# firewall-cmd --reload
Once that is done, install the Graphite and Grafana software by running the following command:
# yum install python-carbon graphite-web grafana httpd
Configure the Grafana web interface to allow access. Edit /etc/httpd/conf.d/graphite-web.conf and modify the Require line as follows:

...
<Directory "/usr/share/graphite/">
  <IfModule mod_authz_core.c>
    # Apache 2.4
    Require all granted
  </IfModule>
...

Edit /etc/grafana/grafana.ini and change http_port to 3030.

Synchronize the database behind Graphite-web. Run the following command; when prompted whether you want to create a superuser, choose no:

# sudo -u apache /usr/bin/graphite-manage syncdb --noinput

Start and enable all the Graphite and Grafana services:
# systemctl start httpd
# systemctl enable httpd
# systemctl start carbon-cache
# systemctl enable carbon-cache
# systemctl start grafana-server
# systemctl enable grafana-server
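You can quickly confirm that both web front ends respond; Graphite-web is served by httpd on port 80 and Grafana listens on port 3030 (a suggested check):
# curl -sI http://localhost/ | head -n 1
# curl -sI http://localhost:3030/ | head -n 1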
Configure Grafana to talk to your Graphite instance:
- Go to http://PERFORMANCE_MONITORING_HOST:3030/. You should be presented with the Grafana login page.
- Enter the default credentials of admin/admin to log in to the system.
- After you are logged in, click on the Grafana logo in the top left corner of the screen, then choose Data Sources. At the top of the page, click Add new, and enter the following details:
  - Name: graphite
  - Default: yes (selected)
  - Type: Graphite
  - Url: http://localhost/
  - Access: proxy
  - Basic Auth: no (unselected)
- Finally, click the Add button at the bottom.
4.2. Installing the Performance Monitoring Collection Agent on All Nodes
To monitor the performance of all the systems in the OpenStack environment, run the following commands on all of them.
Enable the Operational Tools repository:
# subscription-manager repos --enable=rhel-7-server-openstack-8-optools-rpms
Install collectd:
# yum install collectd
Configure collectd to send the data to the performance monitoring aggregator/relay. To do so, create /etc/collectd.d/10-write_graphite.conf with the following contents, where PERFORMANCE_MONITORING_HOST is the host name or IP address of the host that was configured previously as the performance monitoring aggregator/relay:

<LoadPlugin write_graphite>
  Globals false
</LoadPlugin>

<Plugin write_graphite>
  <Carbon>
    Host "PERFORMANCE_MONITORING_HOST"
    Port "2003"
    Prefix "collectd."
    EscapeCharacter "_"
    StoreRates true
    LogSendErrors true
    Protocol "tcp"
  </Carbon>
</Plugin>

If you are using SELinux, allow collectd to make TCP network connections:

# setsebool -P collectd_tcp_network_connect=1
Start and enable collectd:
# systemctl start collectd
# systemctl enable collectd
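If metrics do not show up, you can first verify basic reachability of the Carbon line receiver from the node. This is a suggested check and assumes the nmap-ncat package is installed:
# nc -vz PERFORMANCE_MONITORING_HOST 2003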
After a while, you should see metrics in the Grafana web user interface running at http://PERFORMANCE_MONITORING_HOST:3030/.
