Red Hat Ansible Inside Reporting Guide

Red Hat Ansible Inside 1.1

Understand reporting in Red Hat Ansible Inside

Red Hat Customer Content Services

Abstract

Use reporting to develop insights into how Red Hat Ansible Inside is used.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Reporting in Red Hat Ansible Inside

To comply with your Red Hat Ansible Inside subscription requirements, you must send metrics files about your automation usage to Red Hat. This guide describes the data that is collected, where the metrics files are stored, and how to send the data to Red Hat.

1.1. About reporting metrics

Ansible Inside saves high-level usage data about your automation jobs. Red Hat uses this data to identify where to focus effort in product improvements and new features.

After an automation job is completed, metrics are gathered to save the following information about the job:

  • The type, duration, and time of the automation job.
  • The names of the collections and roles used in the job, and the number of times they are used.
  • The number of nodes that were installed, updated, failed, and skipped.
  • The events that occurred during the job.
Note

Red Hat does not gather Personally Identifiable Information (PII), such as IP addresses, location, user details, or operating system specifications.

The following architecture diagram illustrates how the data is saved to the persistent data storage outside the Python application so that it can be sent to Red Hat.

The project, credential, and inventory sources pass data to the Ansible SDK sync library in the Python application. The automation data receiver sends data from the sync library to the persistent data storage, which sends the data to reporting and analytics.

1.2. About the metrics files

The metrics data is bundled into a tarball file. A separate file is generated for each job. You can view the contents of the files: they are not encrypted.

Each tarball file contains the following unencrypted CSV files:

  • jobs.csv records the duration and status of the automation job, the number of tasks executed, and the number of hosts that were affected by the job.
  • modules.csv records the module name, the task count, and the duration.
  • collections.csv records the collection name, the task count, and the duration.
  • roles.csv records the role name, the task count, and the duration.
  • playbook_on_stats.csv records the event ID and data.
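
You can inspect these files with standard tooling. The following minimal Python sketch lists each CSV member and prints its header row without extracting the archive to disk; the tarball name is illustrative and follows the naming format shown later in this guide:

import csv
import io
import tarfile

# Illustrative file name; substitute the tarball generated for your own job.
TARBALL = "2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz"

with tarfile.open(TARBALL, "r:gz") as tar:
    for member in tar.getmembers():
        if not member.name.endswith(".csv"):
            continue
        # Read each CSV member in memory rather than extracting it to disk.
        with tar.extractfile(member) as fileobj:
            reader = csv.reader(io.TextIOWrapper(fileobj, encoding="utf-8"))
            header = next(reader, [])
            print(member.name, header)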

Chapter 2. Metrics file locations

Reporting metrics to Red Hat is a requirement. Logging metrics for your automation jobs is automatically enabled when you install Ansible SDK. You cannot disable it.

Every time an automation job runs, a new tarball is created. You are responsible for scraping the data from the storage location and for monitoring the size of the directory.
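
Because each job adds a new tarball, it is worth checking the directory size periodically. The following minimal sketch, using only the Python standard library, reports the number of tarballs and their total size so that you can decide when to archive or remove processed files; the /tmp/metrics path is illustrative:

from pathlib import Path

# Illustrative path; point this at the directory you use for metrics storage.
metrics_dir = Path("/tmp/metrics")

tarballs = sorted(metrics_dir.glob("*_job_data.tar.gz"))
total_bytes = sum(t.stat().st_size for t in tarballs)
print(f"{len(tarballs)} tarball(s), {total_bytes / 1024:.1f} KiB in {metrics_dir}")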

You can customize the metrics storage location for each Python file that runs a playbook, or you can use the default location.

2.1. Default location for metrics files

When you install Ansible SDK, the default metrics storage location is set to the ~/.ansible/metrics directory.

After an automation job is complete, the metrics are written to a tarball in the directory. Ansible SDK creates the directory if it does not already exist.
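
If your own tooling needs to refer to the default location, you can resolve it portably with the Python standard library. The following sketch assumes the default has not been overridden:

from pathlib import Path

# Resolves to ~/.ansible/metrics for the current user.
default_metrics_dir = Path.home() / ".ansible" / "metrics"
print(default_metrics_dir)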

2.2. Customizing the metrics storage location

In the Python file that runs your playbook, you can specify the path to the directory where your metrics files are stored.

You can set a different directory path for every Python automation job file, or you can store the tarballs for multiple jobs in one directory. If you do not set the path in a Python file, the tarballs for the jobs that it runs will be saved in the default directory (~/.ansible/metrics).

Procedure

  1. Decide on a location on your file system to store the metrics data. Ensure that the location is readable and writable. Ansible SDK creates the directory if it does not already exist.
  2. In the job_options dictionary in the main() function of your Python file, set the metrics_output_path parameter to the directory where the tarballs are to be stored.

    In the following example, the metrics files are stored in the /tmp/metrics directory after the pb.yml playbook has been executed:

    # Import paths follow the Ansible SDK examples; verify them against
    # your installed SDK version.
    from ansible_sdk.executors import AnsibleSubprocessJobExecutor, AnsibleSubprocessJobOptions

    async def main():
        executor = AnsibleSubprocessJobExecutor()
        executor_options = AnsibleSubprocessJobOptions()
        job_options = {
            'playbook': 'pb.yml',
            # Write the metrics tarballs to /tmp/metrics instead of the
            # default ~/.ansible/metrics directory
            'metrics_output_path': '/tmp/metrics',
        }

2.3. Viewing metrics files

After an automation job has completed, navigate to the directory that you specified for storing the data and list the files.

The data for the newly completed job is contained in a tarball file whose name begins with the date and time that the automation job was run. For example, the following file records data for an automation job executed on 8 March 2023 at 02:30 AM.

$ ls

2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz

To extract and view the files in the tarball, run tar xvf.

$ tar xvf 2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz

x jobs.csv
x modules.csv
x collections.csv
x roles.csv
x playbook_on_stats.csv

The following example shows the jobs.csv file.

$ cat jobs.csv

job_id,job_type,started,finished,job_state,hosts_ok,hosts_changed,hosts_skipped,hosts_failed,hosts_unreachable,task_count,task_duration
84896567-a586-4215-a914-7503010ef281,local,2023-03-08 02:30:22.440045,2023-03-08 02:30:24.316458,,5,0,0,0,0,2,0:00:01.876413

When a parameter value is not available, the corresponding entry in the CSV file is empty. In the jobs.csv file above, the job_state value is not available.
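
Because every row shares the same header, the file can be consumed with Python's built-in csv module. The sketch below, which assumes jobs.csv has been extracted as shown above, treats empty fields such as job_state as missing values:

import csv

with open("jobs.csv", newline="") as f:
    for row in csv.DictReader(f):
        # An empty string marks a value that was not available for the job.
        state = row["job_state"] or "not available"
        print(row["job_id"], state, row["task_duration"])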

Chapter 3. Reporting data to Red Hat

Your subscription contract requires you to send your metrics tarball (.tar.gz) files to Red Hat for accounting purposes. Your Red Hat partner representative will instruct you on how to send the files to Red Hat.

Legal Notice

Copyright © 2023 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.