Metrics Store Installation Guide
Installing Metrics Store for Red Hat Virtualization
Chapter 1. Metrics Store installation overview
The Metrics Store installation involves the following key steps:
- Creating the Metrics Store virtual machines
- Deploying OpenShift and the Metrics Store services (Elasticsearch, Curator, and Kibana)
- Deploying collectd and fluentd on the Red Hat Virtualization hosts
- Verifying the installation using the Kibana console
Metrics Store architecture and workflow
The Metrics Store architecture is based on the OpenShift EFK logging stack, running on OpenShift Container Platform 3.11.
The workflow involves the following steps and services, running on the hosts or the Metrics Store virtual machines:
- collectd (hosts) collects metrics from hosts, virtual machines, and databases in the Red Hat Virtualization environment.
- fluentd (hosts) gathers the metrics and log data, enriches the data with metadata, and sends the enriched data to Elasticsearch.
- Elasticsearch (Metrics Store virtual machine) stores and indexes the data.
- Kibana (Metrics Store virtual machine) provides dashboards, charts, and data analysis.
Figure 1.1. Metrics Store Architecture
Chapter 2. Installing Metrics Store
Prerequisites
Computing resources:
- For the Metrics Store virtual machine:
  - 4 CPU cores
  - 30 GB RAM
  - 500 GB SSD disk
- For the Metrics Store installer virtual machine:
  - 4 CPU cores
  - 8 GB RAM
Note: The computing resource requirements are for an all-in-one installation, with a single Metrics Store virtual machine. The all-in-one installation can collect data from up to 50 hosts, each running 20 virtual machines.
- Operating system: Red Hat Enterprise Linux 7.6 or later
- Software: Red Hat Virtualization 4.2 or later
- Network configuration: see Section 2.3, “Network configuration for Metrics Store virtual machines”
2.1. Creating the Metrics Store virtual machines
Creating the Metrics Store virtual machines involves the following steps:
- Configuring the Metrics Store installation with metrics-store-config.yml
- Creating the following Metrics Store virtual machines:
  - The Metrics Store installer, a temporary virtual machine for deploying OpenShift and services on the Metrics Store virtual machines
  - One or more Metrics Store virtual machines
- Verifying the Metrics Store virtual machines
Procedure
- Log in to the Manager machine using SSH.
- Copy metrics-store-config.yml.example to create metrics-store-config.yml:
  # cp /etc/ovirt-engine-metrics/metrics-store-config.yml.example /etc/ovirt-engine-metrics/config.yml.d/metrics-store-config.yml
- Edit the parameters in metrics-store-config.yml and save the file. The parameters are documented in the file.
- On the Manager machine, copy /etc/ovirt-engine-metrics/secure_vars.yaml.example to /etc/ovirt-engine-metrics/secure_vars.yaml:
  # cp /etc/ovirt-engine-metrics/secure_vars.yaml.example /etc/ovirt-engine-metrics/secure_vars.yaml
- Update the values of /etc/ovirt-engine-metrics/secure_vars.yaml to match the details of your specific environment.
- Encrypt the secure_vars.yaml file:
  # ansible-vault encrypt /etc/ovirt-engine-metrics/secure_vars.yaml
- Go to the ovirt-engine-metrics directory:
  # cd /usr/share/ovirt-engine-metrics
- Run the ovirt-metrics-store-installation playbook to create the virtual machines:
  # ANSIBLE_JINJA2_EXTENSIONS="jinja2.ext.do" ./configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml --ask-vault-pass
- Log in to the Administration Portal.
- Click Compute → Virtual Machines to verify the successful creation of the metrics-store-installer virtual machine and the Metrics Store virtual machines.
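As a rough illustration of the configuration step above, the entries in metrics-store-config.yml might look something like the following sketch. All parameter names and values shown here are hypothetical examples; the authoritative parameter list and its documentation are in metrics-store-config.yml.example itself.

```yaml
# Hypothetical sketch of /etc/ovirt-engine-metrics/config.yml.d/metrics-store-config.yml.
# The real parameter names are documented in metrics-store-config.yml.example.
ovirt_env_name: production        # illustrative: name used to tag data from this RHV environment
public_hosted_zone: example.com   # illustrative: DNS zone of the Metrics Store virtual machines
```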
2.2. Deploying OpenShift and Metrics Store services
Deploy OpenShift, Elasticsearch, Curator (for managing Elasticsearch indices and snapshots), and Kibana on the Metrics Store virtual machines.
Procedure
- Log in to the metrics-store-installer virtual machine.
- Run the install_okd playbook to deploy OpenShift and Metrics Store services to the Metrics Store virtual machines:
  # ANSIBLE_CONFIG="/usr/share/ansible/openshift-ansible/ansible.cfg" \
    ANSIBLE_ROLES_PATH="/usr/share/ansible/roles/:/usr/share/ansible/openshift-ansible/roles" \
    ansible-playbook -i integ.ini install_okd.yaml -e @vars.yaml -e @secure_vars.yaml --ask-vault-pass
- Verify the deployment by logging in to each Metrics Store virtual machine:
  - Log in to the openshift-logging project:
    # oc project openshift-logging
  - Check that the Elasticsearch, Curator, and Kibana pods are running:
    # oc get pods
    If Elasticsearch is not running, see Troubleshooting related to Elasticsearch in the OpenShift Container Platform 3.11 documentation.
  - Check the Kibana host name and record it so that you can access the Kibana console in Chapter 4, Verifying the Metrics Store installation:
    # oc get routes
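The Kibana host name appears in the HOST/PORT column of the `oc get routes` output. As a minimal sketch of picking it out with awk (the output below is illustrative sample data, not real cluster output; your route host will differ):

```shell
# Illustrative `oc get routes` output; in practice this comes from your cluster.
routes='NAME      HOST/PORT            PATH   SERVICES   PORT
kibana    kibana.example.com          kibana     <all>'

# Record the host name of the kibana route for later use in Chapter 4.
kibana_host=$(printf '%s\n' "$routes" | awk '$1 == "kibana" {print $2}')
echo "$kibana_host"
```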
Optional Cleanup
- Log in to the Administration Portal.
- Click Compute → Virtual Machines and delete the metrics-store-installer virtual machine.
2.3. Network configuration for Metrics Store virtual machines
Network configuration prerequisites:
- Create a wildcard DNS record (*.example.com) for the DNS zone of the Metrics Store virtual machines.
- Add the host names of the Metrics Store virtual machines to your DNS server.
To set a static MAC address for a virtual machine (optional)
- Log in to the Administration Portal.
- Click Compute → Virtual Machines, and select the virtual machine to configure.
- Select the Network Interfaces tab, select a NIC, and click Edit.
- Select Custom MAC Address, and enter the MAC address you want to assign to this NIC.
- Click OK to save the configuration.
- Reboot the virtual machine for the change to take effect.
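A custom MAC address must be six colon-separated hexadecimal octets. The steps above can be preceded by a quick local format check like the following sketch (the address shown is an illustrative example, not one you should reuse):

```shell
# Validate the format of a MAC address before assigning it to a NIC.
mac="00:1a:4a:16:01:51"   # illustrative example address

if printf '%s\n' "$mac" | grep -Eq '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'; then
  result="valid"
else
  result="invalid"
fi
echo "$result"
```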
Chapter 3. Deploying collectd and fluentd
Deploy collectd and fluentd on the Red Hat Virtualization hosts to collect logs and metrics.
You do not need to repeat this procedure if you create new hosts. The Manager configures the hosts automatically.
Procedure
- Log in to the Manager machine using SSH.
- Copy /etc/ovirt-engine-metrics/config.yml.example to create /etc/ovirt-engine-metrics/config.yml.d/config.yml:
  # cp /etc/ovirt-engine-metrics/config.yml.example /etc/ovirt-engine-metrics/config.yml.d/config.yml
- Edit the ovirt_env_name and elasticsearch_host parameters in config.yml and save the file. These parameters are mandatory and are documented in the file.
- Optionally, if you need to connect an additional Red Hat Virtualization Manager or an additional Elasticsearch installation, run the following commands to copy the engine public key to your Metrics Store virtual machine:
  # mytemp=$(mktemp -d)
  # cp /etc/pki/ovirt-engine/keys/engine_id_rsa $mytemp
  # ssh-keygen -y -f $mytemp/engine_id_rsa > $mytemp/engine_id_rsa.pub
  # ssh-copy-id -i $mytemp/engine_id_rsa.pub root@{elasticsearch_host}
  # rm -rf $mytemp
- Deploy collectd and fluentd on the hosts:
  # /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh
Chapter 4. Verifying the Metrics Store installation
Verify the Metrics Store installation using the Kibana console. You can view the collected logs and create data visualizations.
Procedure
- Log in to the Kibana console using the URL (https://kibana.example.com) that you recorded in Section 2.2, “Deploying OpenShift and Metrics Store services”. Use the default admin user, and the password you defined during the Metrics Store installation.
- Optionally, you can access the OpenShift Container Platform portal at https://example.com:8443, using the same admin user credentials.
- In the Discover tab, check that you can view the project.ovirt-logs-ovirt_env_name-uuid index.
See the Discover section in the Kibana User Guide for information about working with logs.
In the Visualize tab, you can create data visualizations for the project.ovirt-metrics-ovirt_env_name-uuid and the project.ovirt-logs-ovirt_env_name-uuid indexes.
The Metrics Store User Guide describes the available parameters. See the Visualize section of the Kibana User Guide for information about visualizing logs and metrics.
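The index names above follow a fixed pattern: a project.ovirt-logs- or project.ovirt-metrics- prefix, your ovirt_env_name value, and a UUID. A minimal sketch of how the names are assembled (the environment name and UUID below are illustrative placeholders):

```shell
# Assemble the Kibana index names from their components (illustrative values).
ovirt_env_name="production"
uuid="6d3fa2a8-1b2c-4d5e-8f90-123456789abc"

logs_index="project.ovirt-logs-${ovirt_env_name}-${uuid}"
metrics_index="project.ovirt-metrics-${ovirt_env_name}-${uuid}"
echo "$logs_index"
echo "$metrics_index"
```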
Appendix A. Adding a Read-only Kibana User
If you want to allow users without administrator privileges to view the collected logs and metrics, you can create a read-only Kibana user. The following example creates a user named user_name with view (read-only) permissions.
- Create a new user:
  # oc create user user_name
  # oc create identity allow_all:user_name
  # oc create useridentitymapping allow_all:user_name user_name
- Log in to the openshift-logging project:
  # oc project openshift-logging
- Add a user role with read-only permissions:
  # oc adm policy add-role-to-user view user_name
- Assign a password to the new user:
  # oc login --username=user_name --password=password
A new read-only Kibana user is created.