
Getting Started with the Subscriptions Service

Subscription Central 2021

Red Hat Customer Content Services

Abstract

This guide is for users who want to understand how the subscriptions service reports usage data for their Red Hat subscriptions at the Red Hat account level. Procurement, operational, and technical teams can use the subscriptions service to help them understand where Red Hat technology is being used, how much of it is being used, and whether they can use more or need to purchase more.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation. To provide feedback, highlight text in a document and add comments.

Prerequisites

  • You are logged in to the Red Hat Customer Portal.
  • In the Red Hat Customer Portal, the document is in the Multi-page HTML viewing format.

Procedure

To provide your feedback, perform the following steps:

  1. Click the Feedback button in the top-right corner of the document to see existing feedback.

    Note

    The feedback feature is enabled only in the Multi-page HTML format.

  2. Highlight the section of the document where you want to provide feedback.
  3. Click the Add Feedback pop-up that appears near the highlighted text.

    A text box appears in the feedback section on the right side of the page.

  4. Enter your feedback in the text box and click Submit.

    A documentation issue is created.

  5. To view the issue, click the issue link in the feedback view.

Part I. About the subscriptions service

The subscriptions service in the Hybrid Cloud Console provides a visual representation of the subscription experience across your hybrid infrastructure in a dashboard-based application. The subscriptions service is intended to simplify how you interact with your subscriptions, providing both a historical look-back at your subscription usage and an ability to make informed, forward-facing decisions based on that usage and your remaining subscription capacity.

Note

The April 2021 release of the subscriptions service includes the following changes for how you access the subscriptions service:

  • The subscription watch tool has a new name and is now known as the subscriptions service.
  • The primary navigation for the Hybrid Cloud Console at cloud.redhat.com has been redesigned. The subscriptions service has been relocated within the navigation tree for the individual product portfolios that it works with: Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat Cloud Services. Product page views generated by the subscriptions service are located within the Subscriptions submenu. This Subscriptions submenu might also include other subscription-related pages that are not directly related to the subscriptions service.


Chapter 1. What is the subscriptions service?

The subscriptions service provides reporting of subscription usage information for the following product portfolios:

  • Unified reporting of Red Hat Enterprise Linux subscription usage information across the constituent parts of your hybrid infrastructure, including physical, virtual, on-premise, and cloud. This unified reporting model enhances your ability to consume, track, report, and reconcile your Red Hat subscriptions with your purchasing agreements and deployment types.
  • Reporting of Red Hat OpenShift Container Platform subscription usage information. The subscriptions service uses data available from Red Hat internal subscription services, in addition to data from Red Hat OpenShift reporting tools, to show aggregated cluster usage data in the context of different Red Hat OpenShift subscription types.
  • Reporting of Red Hat Cloud Services subscription usage information. The subscriptions service also uses data available from some of the Red Hat OpenShift reporting tools to show usage of these services. These services consume resources differently, but in general usage is represented as a combination of one or more metrics such as data transfer and data storage for workload activities, and instance availability as the consumption of control plane resources.

The simplified, consistent subscription reporting experience shows your account-wide Red Hat subscriptions compared to your total inventory across all deployments and programs. It provides an at-a-glance view of both your account’s remaining subscription capacity, measured against a subscription threshold, and the historical record of your software usage.

The subscriptions service provides increased and ongoing visibility of your subscription usage. By implementing it, you might be eligible to shift away from the challenges of the current content enforcement model for subscriptions. This older model can be error-prone and inconvenient for your operational workload requirements, while the newer model of content access and consumption results in fewer barriers to content deployment. The simple content access tool enables this shift to the newer model.

You can choose to use neither, either, or both of these services. However, the subscriptions service and simple content access are designed as complementary services and function best when they are used in tandem. Simple content access simplifies the subscription experience by allowing more flexible ways of consuming content. The subscriptions service provides account-wide visibility of usage across your subscription profile, adding governance capabilities to this flexible content consumption.

To learn more about the simple content access tool and how you can use it with the subscriptions service, see the Getting Started with Simple Content Access guide.

Note

As of April 2021, simple content access is now available to customers who manage subscriptions through Red Hat Satellite or Red Hat Subscription Management. Previously, simple content access was available only to Satellite customers. In addition, the previous restrictions that limited the use of simple content access to certain geographical regions during the early development of simple content access have now been lifted. Customers in all geographical regions can now use simple content access.

Chapter 2. What are the benefits of the subscriptions service?

The subscriptions service provides these benefits:

  • Tracks selected Red Hat product usage and capacity at the fleet or account level in a unified inventory and provides a daily snapshot of that data in a digestible, filterable dashboard at cloud.redhat.com.
  • Tracks data over time for self-governance and analytics that can inform purchasing and renewal decisions, ongoing capacity planning, and mitigation for high-risk scenarios.
  • Helps procurement officers make data-driven choices with portfolio-centered reporting dashboards that show both inventory-occupying subscriptions and current subscription limits across the entire organization.
  • Provides robust reporting capabilities that enable the transition to simple content access tooling, which features broader, organization-level subscription enforcement instead of system-level quantity enforcement.

Chapter 3. What does the subscriptions service track?

The subscriptions service currently tracks and reports usage information for Red Hat Enterprise Linux, some Red Hat OpenShift products, and some services in the Red Hat Cloud Services portfolio.

3.1. Red Hat Enterprise Linux

The subscriptions service tracks RHEL usage on physical systems, on virtual systems, and in the public cloud. If your RHEL installations predate certificate-based subscription management, the subscriptions service does not track that inventory. The subscriptions service does the following:

  • Tracks physical RHEL usage in CPU sockets, organized by architecture and variants for x86.
  • Tracks virtualized RHEL by installed socket count for standard guest subscriptions and by socket count of the hypervisor host node for virtual data center (VDC) subscriptions and similar virtualized environments.
  • Tracks public cloud RHEL usage in sockets, where one instance equals one socket.
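
If you want a quick, informal way to see the socket count that is typically reported for a physical system, you can run the following command locally. This check is illustrative only; the subscriptions service receives socket data through your chosen data collection tool, not from this command.

    # lscpu | grep -i socket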

3.2. Red Hat OpenShift

Generally, the subscriptions service tracks Red Hat OpenShift usage as cluster size on physical and virtual systems. The cluster size is the sum of all subscribed nodes. A subscribed node is a compute or worker node that runs workloads, as opposed to a control plane or infrastructure node that manages the cluster.

However, beyond this general rule, tracking is dependent on several factors:

  • The Red Hat OpenShift product
  • The type of subscription that was purchased for that product
  • The version of that product
  • The unit of measurement for the product, as defined by the subscription terms, which determines how cluster size and overall usage are calculated

3.2.1. Red Hat OpenShift Container Platform with a traditional Annual subscription

The subscriptions service tracks Red Hat OpenShift Container Platform usage in CPU cores or sockets for clusters and aggregates this data into an account view, as refined by the following version support:

  • RHOCP 4.1 and later with Red Hat Enterprise Linux CoreOS based nodes or a mixed environment of Red Hat Enterprise Linux CoreOS and RHEL based nodes
  • RHOCP 3.11

For RHOCP subscription usage, the reporting model changed between major versions 3 and 4. Version 3 usage is measured at the node level, and version 4 usage is measured at the cluster level.

The difference in reporting models for the RHOCP major versions also results in some differences in how the subscriptions service and the associated services in the Cloud Services platform calculate usage. For RHOCP version 4, the subscriptions service recognizes and ignores the parts of the cluster that perform overhead tasks. These parts of the cluster are commonly called infrastructure nodes, and can include master, router, registry, metrics, logging, etcd, and similar nodes. The subscriptions service recognizes and tracks only the parts of the cluster that contain compute nodes, also commonly called worker nodes.

However, for RHOCP version 3.11, the version 3 era reporting model cannot distinguish and ignore the infrastructure nodes. Therefore, for RHOCP version 3.11, you can assume that approximately 15% of the subscription data reported by the subscriptions service is overhead for infrastructure nodes. This percentage is based on analysis of cluster overhead in RHOCP version 3 installations. In this particular case, usage results that show up to 15% over capacity are likely to still be in compliance.
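
As an illustrative calculation, if the subscriptions service reports 112 sockets of RHOCP 3.11 usage against a capacity of 100 sockets, roughly 15% of the reported figure (about 17 sockets) is likely infrastructure overhead, so the actual compute usage is probably still within the purchased capacity. Your actual overhead depends on your cluster design.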

3.2.2. Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated with a pay-as-you-go On-Demand subscription

  • RHOCP or OpenShift Dedicated 4.7 and later

The subscriptions service tracks RHOCP or OpenShift Dedicated 4.7 and later usage from a pay-as-you-go On-Demand subscription in core hours, a measurement of cluster size in CPU cores over a range of time. For OpenShift Dedicated On-Demand subscriptions, consumption of control plane resources through the availability of the service instance is tracked in instance hours. The subscriptions service ultimately aggregates all cluster core hour and instance hour data in the account into a monthly total, the unit of time that is used by the billing service for Red Hat Marketplace.
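
As an illustrative calculation, a cluster whose compute nodes total 16 CPU cores and that runs for 10 hours accrues 160 core hours. The monthly total is the sum of these values across all clusters in the account.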

As described in the information about RHOCP 4.1 and later, the subscriptions service recognizes and tracks only the parts of the cluster that contain compute nodes, also commonly called worker nodes.

3.3. Red Hat Cloud Services

Because the services in the Red Hat Cloud Services portfolio consume different types of resources while handling different types of workloads, the subscriptions service tracks usage of these services in different ways.

3.3.1. Red Hat OpenShift Streams for Apache Kafka with a pay-as-you-go On-Demand subscription

The subscriptions service tracks Red Hat OpenShift Streams for Apache Kafka usage by the resources that are consumed by the instances of the service, including by the incoming and outgoing network traffic, as data transfer; by the volume of stored data, as data storage; and by the availability of service instances for workloads, as instance hours. The subscriptions service aggregates this data into monthly totals that are then consumed by the billing service of Red Hat Marketplace.

Part II. Requirements and your responsibilities

Before you start using the subscriptions service, review the hardware and software requirements and your responsibilities when you use the service.

Learn more

  • Review information about your responsibilities when you use the subscriptions service:

Chapter 4. Requirements

To begin using the subscriptions service, you must meet the following software requirements. For more complete information about these requirements, contact your Red Hat account team.

4.1. Red Hat Enterprise Linux

You must meet at least one of the following requirements for Red Hat Enterprise Linux management:

  • RHEL managed by Satellite.

    • Satellite version 6.7 or later is required (versions that are under full support).
  • RHEL managed by Red Hat Insights.
  • RHEL managed by Red Hat Subscription Management.

4.2. Red Hat OpenShift

You must meet the following requirements for Red Hat OpenShift management, based on your product version and subscription type:

  • Red Hat OpenShift Container Platform with an Annual subscription

    • RHOCP version 4.1 or later managed with the monitoring stack tools and OpenShift Cluster Manager.
    • RHOCP version 3.11 with RHEL nodes managed by Insights, Satellite, or Red Hat Subscription Management.
  • RHOCP with a pay-as-you-go On-Demand subscription

    • RHOCP version 4.7 or later managed with the monitoring stack tools and OpenShift Cluster Manager.
  • Red Hat OpenShift Dedicated with a pay-as-you-go On-Demand subscription

    • OpenShift Dedicated version 4.7 or later. The monitoring stack tools and OpenShift Cluster Manager are always in use for OpenShift Dedicated.

4.3. Red Hat Cloud Services

The Red Hat Cloud Services portfolio includes managed services that rely on Red Hat infrastructure. Part of that infrastructure is the Red Hat OpenShift monitoring stack tools that, among other jobs, supply data about subscription usage to the subscriptions service.

Note

Some of the services in the Red Hat Cloud Services portfolio might also gather and display their own usage data that is independent of the data that is gathered by the Red Hat OpenShift monitoring stack tools and displayed in the subscriptions service. The data displayed in these service-level dashboards is designed more for the needs of the owners of individual clusters, instances, and so on. For example, Red Hat OpenShift Streams for Apache Kafka displays its own instance-level usage data in a dashboard when you view instance details. However, the Red Hat OpenShift platform core capabilities provided by the monitoring stack tools typically gather and process the data that is used in the subscriptions service.

  • Red Hat OpenShift Streams for Apache Kafka with a pay-as-you-go On-Demand subscription

    • For the Red Hat OpenShift Streams for Apache Kafka service, no user setup of the monitoring stack tools is necessary.

Chapter 5. How to select the right data collection tool

To display data about your subscription usage, the subscriptions service requires a data collection tool to obtain that data. The various data collection tools each have distinguishing characteristics that determine their effectiveness in a particular type of environment.

It is possible that the demands of your environment require more than one of the data collection tools to be running. When more than one data collection tool is supplying data to the services in the Cloud Services platform, the tools that process this data are able to analyze and deduplicate the information from the various data collection tools into standardized facts, or canonical facts.

The following information can help you determine the best data collection tool or tools for your environment.

5.1. Red Hat Insights

Insights as a data collection tool is ideal for the always-connected customer. If you fit this profile, you are interested in using Insights not only as a data collection tool, but also as a solution that provides analytics, threat identification, remediation, and reporting capabilities.

With the inclusion of Insights with every Red Hat Enterprise Linux subscription beginning with version 8, and with the availability of Red Hat Insights for Red Hat OpenShift in April 2021, the use of Insights as your data collection tool becomes even more convenient.

However, using Insights as the data collection tool is not ideal if the Insights agent cannot connect directly to the cloud.redhat.com website or if Red Hat Satellite cannot be used as a proxy for that connection. In addition, it cannot be used as the sole solution if hypervisor host-guest mapping is required for virtual data centers (VDCs) or similar virtualized environments. In that case, Insights must be used in conjunction with Satellite.

5.2. Red Hat Subscription Management

Red Hat Subscription Management is an ideal data collection tool for the connected customer who uses the Subscription Manager agent to send data to Red Hat Subscription Management on the Red Hat Customer Portal.

For customers that are using the subscriptions service, Red Hat Subscription Management automatically synchronizes its data with the Cloud Services platform tools. Therefore, in situations where Red Hat Subscription Management is in use or required, such as with RHEL 7 or later, it already functions as a data collection tool.

5.3. Red Hat Satellite

The use of Satellite as the data collection tool is useful for customers who have specific needs in their environment that either inhibit or prohibit the use of the Insights agent or the Subscription Manager agent for data collection.

For example, you might be able to connect to the Cloud Services platform directly, but you might find that connecting and maintaining a per-organization Satellite installation is more convenient than the per-system installation of Insights. The use of Satellite also enables you to inspect the information that is being sent to the Cloud Services platform on an organization-wide basis instead of a system-only basis.

As another example, your Satellite installation might not be able to connect directly to the Cloud Services platform because you are running Satellite from a disconnected network. In that case, you must export the Satellite reports to a connected system and then upload that data to the Cloud Services platform. To do this, you must use a minimum of Satellite 6.7 or later (versions that are under full support). You must also install the Satellite inventory upload plugin on your Satellite server.

Finally, you might have a need to view the subscriptions service results for RHEL usage from a virtual data center (VDC) subscription or similar virtualized environments. To do so, you must obtain accurate hypervisor host-guest mapping information as part of the data that is collected for analysis. This type of data collection requires the use of Satellite in combination with the Satellite inventory upload plugin and the virt-who tool.

5.4. Red Hat OpenShift monitoring stack and other tools for Red Hat OpenShift data collection

The data collection for Red Hat OpenShift usage is dependent on several tools, including tools developed by the Red Hat OpenShift development team. One tool is Red Hat OpenShift Cluster Manager. Another set of tools is known as the monitoring stack. This set of tools is based on the open source Prometheus project and its ecosystem, and includes Prometheus, Telemetry, Thanos, Observatorium, and others.

The subscriptions service is designed to work with customers who use Red Hat OpenShift 4.1 and later products in connected environments. For the Red Hat OpenShift version 4.1 and later products that the subscriptions service can track, Red Hat OpenShift Cluster Manager and the monitoring stack tools are used to gather and process cluster data before sending it to Red Hat Subscription Management. Red Hat Subscription Management provides the relevant usage data to the Cloud Services platform tools such as inventory and the subscriptions service.

Customers with disconnected environments can use the Red Hat OpenShift data collection tools by manually creating each cluster in Red Hat OpenShift Cluster Manager. This workaround enables customers with disconnected environments to simulate an account-level view of their Red Hat OpenShift usage. For example, an organization with disconnected clusters distributed across several departments might find this workaround useful.

For Red Hat OpenShift Container Platform version 3.11, data collection is dependent on an older, RHEL based reporting model. Therefore, data collection is dependent upon the connection of the RHEL nodes to one of the RHEL data collection tools, such as Insights, Red Hat Subscription Management, or Satellite.

5.5. Red Hat OpenShift monitoring stack and other tools for Red Hat Cloud Services data collection

The Red Hat Cloud Services portfolio includes managed services that rely on Red Hat infrastructure. Part of that infrastructure is the monitoring stack tools that, among other jobs, supply data about subscription usage to the subscriptions service. No additional user action is necessary to set up these data collection tools for the following managed services:

  • Red Hat OpenShift Streams for Apache Kafka with a pay-as-you-go On-Demand subscription

Additional resources

  • For additional help with decisions on which data collection tool or tools to use, see the Red Hat Subscription Watch Helper. This Red Hat Customer Portal Labs application is available at https://access.redhat.com/labs/rhsw/. The application guides you through a series of questions to determine the data collection tools that are the best fit for your environment.
  • For additional information about registering version 4.1 disconnected clusters in Red Hat OpenShift Cluster Manager, see the chapter about cluster subscriptions and registration in the Managing Clusters guide.

Chapter 6. How to set subscription attributes

Red Hat subscriptions combine technology with use cases to help procurement and technical teams make the best purchasing and deployment decisions for their business needs. When the same product is offered in two different subscriptions, these use cases differentiate between the options. They inform the decision-making process at the time of purchase and remain associated with the subscription throughout its life cycle to help determine how the subscription is used.

Red Hat provides a method for you to associate use case information with products through the application of subscription attributes. These subscription attributes can be supplied at product installation time or as an update to the product.

The subscriptions service helps you to align your software deployments with the use cases that support them and compare actual consumption to the capacity provided by the subscription profile of your account. Proper, automated maintenance of the subscription attributes for your inventory is important to the accuracy of the subscriptions service reporting.

Subscription attributes can generally be organized into the following use cases:

technical use case
Attributes that describe how the product will be used upon deployment. Examples include role information for RHEL used as a server or alternatively used as a workstation.
business use case
Attributes that describe how the product will be used in relation to your business environment and workflows. Examples include usage as part of a production environment or alternatively as part of a disaster recovery environment.
operational use case
Attributes that describe various operational characteristics such as how the product will be supported. Examples include a service level agreement (SLA) of premium, or a service type of L1-L3.

The subscription attributes might be configured from the operating system or its management tools, or they might be configured from settings within the product itself. Collectively, these subscription attributes might be known as system purpose, subscription settings, or similar names across all of these tools.

Subscription attributes are used by the Cloud Services platform tools such as the inventory tool to build the most accurate usage profile for products in your inventory. The subscriptions service uses the subscription attributes found and reported by these other tools to filter data about your subscriptions, enabling you to view this data with more granularity. For example, filtering your RHEL subscriptions to show only those with an SLA of premium could help you determine the current usage of those premium subscriptions compared to your overall capacity for premium subscriptions.

The quality of subscription attribute data can greatly affect the accuracy and usefulness of the subscriptions service data. Therefore, a best practice is to ensure that these attributes are properly set, both for current use and any possible future expansion of subscription attribute use within the subscriptions service.

6.1. Setting subscription attributes for RHEL

You can set subscription attributes for the RHEL product from RHEL, Red Hat Subscription Management, or Satellite.

You should set the subscription attributes from only one tool. If you use multiple tools, there is a possibility for mismatched settings. Because these tools report data to the Cloud Services platform tools at different intervals, or heartbeats, and because the subscriptions service shows its results as a once-per-day snapshot based on last-reported data, adding subscription attributes to more than one tool could potentially affect the quality of the subscriptions service data.

Setting the subscription attributes from RHEL

For RHEL 8 and later, you can use a few different methods to set subscription attributes. These methods, which include using the syspurpose command line tool, are described in a few different contexts in the RHEL 8 documentation. For more information, see the following links:

Note

The syspurpose command line tool has also been added to RHEL 7.7 and later.
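
The following commands are a minimal sketch of setting and reviewing subscription attributes with the syspurpose tool on RHEL 8. The role, SLA, and usage values shown are placeholders; substitute the values that match your subscription terms.

    # syspurpose set-role "Red Hat Enterprise Linux Server"
    # syspurpose set-sla "Premium"
    # syspurpose set-usage "Production"
    # syspurpose show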

Setting the subscription attributes from Red Hat Subscription Management

For Red Hat Subscription Management, the methods to set subscription attributes are contained in the section for registering a system and the descriptions of register commands, but are more fully described in the section related to using system purpose. For more information, see the following link:
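
As a hedged example, the subscription-manager register command accepts system purpose options directly at registration time. The values shown here are placeholders for your own role, service level, and usage.

    # subscription-manager register --username <username> --password <password> \
      --role="Red Hat Enterprise Linux Server" --servicelevel="Premium" --usage="Production"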

Setting the subscription attributes from Satellite

For Satellite, the methods to set subscription attributes are described in instructions for creating a host and editing the system purpose of a host. For more information, see the following link:

  • See the section about administering hosts in the Managing Hosts guide.

6.2. Setting subscription attributes for Red Hat OpenShift

You can set subscription attributes from Red Hat OpenShift Cluster Manager for version 4. For version 3, you use the same reporting tools as those defined for RHEL.

Setting the subscription attributes for Red Hat OpenShift 4

You can set subscription attributes at the cluster level from Red Hat OpenShift Cluster Manager, where the attributes are described as subscription settings.

  1. From the Clusters view, select a cluster to display the cluster details.
  2. Click Edit Subscription Settings on the cluster details page or from the Actions menu.
  3. Make any needed changes to the values for the subscription attributes and then save those changes.

Setting the subscription attributes for Red Hat OpenShift 3

You can set subscription attributes at the node level by using the same methods that you use for RHEL, setting these values from RHEL itself, Red Hat Subscription Management, or Satellite. As described in that section, set subscription attributes by using only one method so that the settings are not duplicated.

If your subscription contains a mix of socket-based and core-based nodes, you can also set subscription attributes that identify this fact for each node. As you view your Red Hat OpenShift usage, you can use a filter to switch between cores and sockets as the unit of measurement.

To set this subscription attribute data, run the applicable command for each node:

  • For core-based nodes:

    # echo '{"ocm.units":"Cores/vCPU"}' | sudo tee /etc/rhsm/facts/openshift-units.facts
  • For socket-based nodes:

    # echo '{"ocm.units":"Sockets"}' | sudo tee /etc/rhsm/facts/openshift-units.facts
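
After you create the facts file, one way to confirm that the new fact is reported is to review the file and refresh the facts known to subscription-manager. This sketch assumes the default facts directory shown in the commands above.

    # cat /etc/rhsm/facts/openshift-units.facts
    # subscription-manager facts --update
    # subscription-manager facts --list | grep ocm.units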

6.3. Setting subscription attributes for Red Hat Cloud Services

Because the current offerings in the Red Hat Cloud Services portfolio are available with only one subscription type, setting subscription attributes for these services, including Red Hat OpenShift Streams for Apache Kafka, is not required.

Chapter 7. Your responsibilities

The subscriptions service and the features that make up this service are new and are rapidly evolving. During this rapid development phase, you have the ability to view, and more importantly contribute to, the newest capabilities early in the process. Your feedback is valued and welcome. Work with your Red Hat account team, for example, your technical account manager (TAM) or customer success manager (CSM), to provide this feedback. You might also be asked to provide feedback or request features from within the subscriptions service itself.

As you use the subscriptions service, note the following agreements and contractual responsibilities that remain in effect:

  • Customers are responsible for monitoring subscription utilization and complying with applicable subscription terms. The subscriptions service is a customer benefit to manage and view subscription utilization. Red Hat does not intend to create new billing events based on the subscriptions service tooling; rather, the tooling helps customers gain visibility into their utilization so that they can keep track of their environments.

Part III. Setting up the subscriptions service for data collection

To set up the environment for the subscriptions service data collection, connect your Red Hat Enterprise Linux and Red Hat OpenShift systems to the Cloud Services platform services through one or more data collection tools.

After you complete the steps to set up this environment, you can continue with the steps to activate and open the subscriptions service.

Do these steps

  1. To gather Red Hat Enterprise Linux usage data, complete at least one of the following three steps to connect your Red Hat Enterprise Linux systems to the Cloud Services platform by enabling a data collection tool. This connection enables subscription usage data to show in the subscriptions service.

    1. Deploy Insights on every RHEL system that is managed by Red Hat Satellite:

    2. Ensure that Satellite is configured to manage your RHEL systems and install the Satellite inventory upload plugin:

    3. Ensure that Red Hat Subscription Management is configured to manage your RHEL systems:

  2. To gather Red Hat OpenShift usage data, complete the following step for Red Hat OpenShift data collection on the Cloud Services platform.

    1. Set up the connection between Red Hat OpenShift and the subscriptions service based upon the operating system that is used for clusters:

  3. To gather high-precision public cloud usage data for Red Hat Enterprise Linux based instances on Amazon Web Services, complete the following step:

    1. Add Amazon Web Services sources that activate the data-gathering capabilities of the public cloud metering tool.

Chapter 8. Deploying Red Hat Insights

If you are using Red Hat Insights as the data collection tool, deploy Red Hat Insights on every RHEL system that is managed by Red Hat Satellite.

Do these steps

  1. To install Red Hat Insights, see the following information:


8.1. Installing Red Hat Insights

Install Red Hat Insights to collect information about your inventory.

Procedure

  1. Install the Insights client on every RHEL system that is managed by Red Hat Satellite by using the following instructions:

Note

The Insights client is installed by default on RHEL 8 and later systems unless the minimal installation option was used to install RHEL. However, the client must still be registered, as documented in the client installation instructions.
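
If the client is not already present, the following commands are a minimal sketch of installing and registering it on a RHEL system; see the installation instructions referenced above for the complete procedure.

    # yum install insights-client
    # insights-client --register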

8.2. What data does Red Hat Insights collect?

When the Red Hat Insights client is installed on a system, it collects data about that system on a daily basis and sends it to the Red Hat Insights cloud application. The data might also be shared with other applications on the Cloud Services platform, such as inventory or the subscriptions service. Insights provides configuration and command options, including options for data obfuscation and data redaction, to manage that data.

For more information, see the Client Configuration Guide for Red Hat Insights, available with the Red Hat Insights product documentation.
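
For example, data obfuscation can be enabled through the obfuscate and obfuscate_hostname options in the /etc/insights-client/insights-client.conf file, as described in that guide. The following command is a quick, hedged way to check whether these options are currently set; it assumes the default configuration file location.

    # grep -E '^(obfuscate|obfuscate_hostname)' /etc/insights-client/insights-client.conf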

You might also want to examine the types of data that Insights collects and sends to Red Hat or add controls to the data that is sent. For additional information that supplements the information available in the product documentation, see the following articles:

Chapter 9. Installing the Satellite inventory upload plugin

If you are using Red Hat Satellite as the data collection tool, and you do not use Satellite plus Red Hat Insights to send data to the Cloud Services platform tools for processing, then you must install the Satellite inventory upload plugin to send this data.

You must also use the Satellite inventory upload plugin in combination with the virt-who tool for accurate reporting of hypervisor host-guest mapping information for virtual data center (VDC) subscriptions and similar virtualized environments.

Note

In the following information, the actions that you do and the options that appear in the interface might vary according to your Satellite version.

Prerequisites

Red Hat Satellite 6.7 or later (versions that are under full support)

Procedure

  1. Install the Satellite inventory upload plugin on the Satellite Server.

    • For Satellite 6.8 and 6.9: The Satellite inventory upload plugin (rh_cloud) is installed for you during Satellite installation or upgrade unless it was explicitly disabled during the installation or upgrade process. In addition, for new installations of 6.9, the plugin is both installed and enabled by default.
    • For Satellite 6.7: Use the following command. This command runs the satellite-installer process and then restarts the Satellite services.

      # satellite-maintain packages install tfm-rubygem-foreman_rh_cloud
  2. Depending upon the Satellite version, you might have to activate the plugin to start automatic collection of data. To activate the plugin, click RH Inventory or RH Cloud in the navigation, and then enable the Allow Auto Upload option.

Verification steps

After a successful installation and restart (as needed per version), the RH Inventory or RH Cloud navigation option displays in the Red Hat Satellite interface, where you can view the status of the extract and upload actions.
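
As an additional check on the Satellite Server, you can query the plugin package directly. The package name shown is the one used in the Satellite 6.7 installation command earlier in this procedure.

    # rpm -q tfm-rubygem-foreman_rh_cloud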

Usage tips

When the auto upload option is enabled, the Satellite inventory upload plugin automatically reports once per day by default. You can also manually send data.

The Satellite inventory upload plugin includes reporting settings that you can use to address data privacy concerns. Use the Configure option in the Satellite navigation to configure the plugin to exclude certain packages, obfuscate host names, and obfuscate host addresses.

Additional resources

For more information about the satellite-maintain command and the extra package protection that was added to Satellite 6.6 and later, see the following Red Hat Customer Portal articles:

Chapter 10. Registering systems to Red Hat Subscription Management

If you are using Red Hat Subscription Management as the data collection tool, register your RHEL systems to Red Hat Subscription Management. Systems that are registered to Red Hat Subscription Management can be found and tracked by the subscriptions service.

Some RHEL images can use the autoregistration feature of the RHEL management bundle and do not have to be manually registered to Red Hat Subscription Management. However, the following specific requirements must be met:

  • The image must be based on RHEL 8.4 and later or 8.3.1 and later.
  • The image must be an Amazon Web Services (AWS) or Microsoft Azure cloud services image.
  • The image can be a Cloud Access Gold Images image or a custom image, such as an image built with Image Builder. If it is a custom image, the subscription-manager tool in the image must be configured to use autoregistration.
  • The image must be associated with an AWS or Azure source, as configured from the Sources menu of the Hybrid Cloud Console, with the RHEL management bundle selected for activation.
  • The image must be provisioned after this source is created.

RHEL systems that do not meet these requirements must be registered manually to be tracked by the subscriptions service.

Procedure

  1. Register your RHEL systems to Red Hat Subscription Management, if not already registered. For more information about this process, see the following information:
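
In the simplest connected case, registration is a single command. The following is a hedged sketch that uses placeholder credentials; you can also register with an activation key if your organization uses one.

    # subscription-manager register --username <username> --password <password>
    # subscription-manager status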

Chapter 11. Connecting Red Hat OpenShift to the subscriptions service

If you use Red Hat OpenShift products, the steps you must do to connect the correct data collection tools to the subscriptions service depend on multiple factors. These factors include the installed version of Red Hat OpenShift Container Platform and Red Hat OpenShift Dedicated, whether you are working in a connected or disconnected environment, and whether you are using Red Hat Enterprise Linux, Red Hat Enterprise Linux CoreOS, or both as the operating system for clusters.

The subscriptions service is designed to work with customers who use Red Hat OpenShift in connected environments. One example of this customer profile is using RHOCP 4.1 and later with an Annual subscription with connected clusters. For this customer profile, Red Hat OpenShift has a robust set of tools that can perform the data collection. The connected clusters report data to Red Hat through Red Hat OpenShift Cluster Manager, Telemetry, and the other monitoring stack tools to supply information to the data pipeline for the subscriptions service.

Customers with disconnected RHOCP 4.1 and later environments can use Red Hat OpenShift as a data collection tool by manually creating each cluster in Red Hat OpenShift Cluster Manager.

Customers who use Red Hat OpenShift 3.11 can also use the subscriptions service. However, for Red Hat OpenShift version 3.11, the communication with the subscriptions service is enabled through other tools that supply the data pipeline, such as Insights, Satellite, or Red Hat Subscription Management.

Note

For customers who use Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated 4.7 and later with a pay-as-you-go On-Demand subscription (available for connected clusters only), data collection is done through the same tools as those used by Red Hat OpenShift Container Platform 4.1 and later with an Annual subscription.

Procedure

Complete the following steps, based on your version of Red Hat OpenShift Container Platform and the cluster operating system for worker nodes.

For Red Hat OpenShift Container Platform 4.1 or later with Red Hat Enterprise Linux CoreOS

For this profile, cluster architecture is optimized to report data to Red Hat OpenShift Cluster Manager through the Telemetry tool in the monitoring stack. Therefore, setup of the subscriptions service reporting is essentially confirming that this monitoring tool is active.

  1. Make sure that all clusters are connected to Red Hat OpenShift Cluster Manager through the Telemetry monitoring component. If so, no additional configuration is needed. The subscriptions service is ready to track Red Hat OpenShift Container Platform usage and capacity.
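
One informal way to confirm that Telemetry is reporting is to check for the telemeter client in the cluster monitoring project. This sketch assumes the default openshift-monitoring namespace and that you are logged in to the cluster with the oc CLI.

    # oc -n openshift-monitoring get pods | grep telemeter-client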

For Red Hat OpenShift Container Platform 4.1 or later with a mixed environment with Red Hat Enterprise Linux CoreOS and Red Hat Enterprise Linux

For this profile, data gathering is affected by the change in the Red Hat OpenShift Container Platform reporting models between Red Hat OpenShift major versions 3 and 4. Version 3 relies upon RHEL to report RHEL cluster usage at the node level. This is still the reporting model used for version 4 RHEL nodes. However, the version 4 era reporting model reports Red Hat Enterprise Linux CoreOS usage at the cluster level through Red Hat OpenShift tools.

The tools that are used to gather this data are different. Therefore, setting up subscriptions service reporting for this profile consists of confirming that both tool sets are configured correctly.

  1. Make sure that all clusters are connected to Red Hat OpenShift Cluster Manager through the Red Hat OpenShift Container Platform Telemetry monitoring component.
  2. Make sure that Red Hat Enterprise Linux nodes in all clusters are connected to at least one of the Red Hat Enterprise Linux data collection tools, Insights, Satellite, or Red Hat Subscription Management. For more information, see the instructions about connecting to each of these data collection tools in this guide.

For Red Hat OpenShift Container Platform version 3.11

Red Hat OpenShift Container Platform version 3.11 reports cluster usage based on the Red Hat Enterprise Linux nodes in the cluster. Therefore, for this profile, the subscriptions service reporting uses the standard Red Hat Enterprise Linux data collection tools.

  1. Make sure that all Red Hat Enterprise Linux nodes in all clusters are connected to at least one of the Red Hat Enterprise Linux data collection tools, Insights, Satellite, or Red Hat Subscription Management. For more information, see the instructions about connecting to each of these data collection tools in this guide.

Chapter 12. Adding sources for public cloud metering

Most of the data collection tools that gather, process, and analyze data for the subscriptions service are either established subscription management tools or additional components that work with or enhance the functions of those tools. Examples include Red Hat Satellite and the Satellite inventory upload plugin, or OpenShift Cluster Manager and the monitoring stack tools.

In addition to these tools, there are Cloud Services platform tools for the Hybrid Cloud Console that perform data collection. One type of these tools is a source. The sources application is how the services and applications in the Hybrid Cloud Console connect with public cloud providers and with each other to collect and exchange data. You can think of a source as another data collection tool, but remember that it is set up with a different process than the other data collection tools. A source is created from within the Hybrid Cloud Console.

For the subscriptions service, you can add sources to enable high-precision data collection for your RHEL based Amazon Web Services instances in the public cloud. Although the subscriptions service currently has the ability to identify RHEL based instances for multiple cloud providers, it is not able to identify and track the activities of individual instances as they start and stop, sometimes multiple times per day. The public cloud metering tool adds that capability for AWS instances, resulting in more accurate monitoring of usage for those instances by the subscriptions service.

To use the public cloud metering tool for public cloud data collection, you must add sources to represent each of your AWS accounts. You add sources by using the sources application in the Hybrid Cloud Console settings.

Note

For organizations where the subscriptions service has not already been activated for the Red Hat organization account, adding an AWS source to enable the public cloud metering tool also activates the subscriptions service for the Red Hat account.


12.1. Adding an AWS source with the account authorization configuration mode

If you are using public cloud metering as the data collection tool for Red Hat Enterprise Linux usage in Amazon Web Services (AWS) accounts, add each account as a cloud source.

Note

The account authorization configuration mode is an automated mode for creating sources. When you select this mode, you provide your AWS account root user credentials in the form of the access key ID and secret access key. These credentials are used briefly to complete the automated steps and are then discarded. If you do not want to use the account authorization configuration mode, you can instead use the manual configuration mode for source creation.

When you add an AWS account as a source, the automated steps for the account authorization configuration mode create a specialized AWS Identity and Access Management (IAM) policy and role and add a connection between your AWS account and public cloud metering. The policy and role enable public cloud metering to perform the tasks that are required to identify and to meter public cloud usage of RHEL in that account.

Prerequisites

To create a source, you must meet the following prerequisites:

  • You must have the ability to create AWS resources in the us-east-1 region. If your AWS policies do not allow the creation of AWS resources in the us-east-1 region, you might be able to complete the steps to create the source, but the source might not complete the enablement process.
  • You must have the Sources administrator role in the role-based access control (RBAC) system for the Hybrid Cloud Console.

    Note

    Beginning in September 2021, the creation of a source requires the Sources administrator RBAC role. The Red Hat Customer Portal organization administrator (org admin) account role for your organization no longer has sufficient permissions to create sources.

Procedure

  1. In a browser window, go to cloud.redhat.com.
  2. If prompted, enter your Red Hat Customer Portal login credentials. The Hybrid Cloud Console opens.
  3. Click Settings (the gear icon) to show the settings options.
  4. In the navigation menu, click Sources.
  5. Click the Cloud sources tab if this page is not displayed by default. Click Add Source. The Add a cloud source wizard opens.

    Note

    You can also edit an existing source to add an association to the subscriptions service.

  6. Select the Amazon Web Services icon as the source type. Click Next.
  7. Enter a name for the source. This name is not required to be the same as the AWS account name. However, use a name that is easy to distinguish if you have multiple AWS accounts and must create multiple sources for them. Click Next.
  8. Select Account Authorization as the configuration mode. The window refreshes to display the fields for the AWS account root user credentials.
  9. Enter the access key ID and secret access key for the AWS account root user. Click Next.
  10. Select RHEL management as the application. This selection provides the high-precision data capabilities of public cloud metering for the subscriptions service. Select other options as appropriate. Click Next.
  11. Review the details for this source. Click Add to complete the source creation.

12.1.1. Verification steps

During the final step of source creation in the Add a cloud source wizard, the connection to the AWS account is verified and an AWS CloudTrail trail is created for the account. The CloudTrail trail is used to monitor the start and stop events for instances, the raw data that is used to calculate usage data for display in the subscriptions service. If the verification and trail creation are successful, the source creation is successful. This process normally takes only a few seconds.

To find the RHEL images and the associated instances that it is going to track, public cloud metering must then perform an inspection of the AWS account. The length of this inspection process can vary according to many factors, including AWS performance, the number of images in the account, the size and type of each image, the number of instances for an image, and others. As a general rule, the inspection process for an image and its instances can take approximately one hour.

After the inspection process is complete, public cloud metering can begin reporting usage data to the subscriptions service. In most cases, reporting begins within 24 hours. However, because of the timing of source creation, the amount of time required for the inspection process, and the reporting intervals, or heartbeats, for Cloud Services platform tools, in rare cases you might have to wait up to 48 hours for this data to begin appearing in the subscriptions service.

12.2. Adding an AWS source with the manual configuration mode

If you are using public cloud metering as the data collection tool for Red Hat Enterprise Linux usage in Amazon Web Services (AWS) accounts, add each account as a cloud source.

Note

The manual configuration mode enables you to create a source without providing your AWS account root user credentials. When you select this mode, you manually create a specialized AWS Identity and Access Management (IAM) policy and role and add a connection between your AWS account and public cloud metering. The policy and role enable public cloud metering to perform the tasks that are required to identify and to meter public cloud usage of RHEL in that account.

Prerequisites

To create a source, you must meet the following prerequisites:

  • You must have the ability to create AWS resources in the us-east-1 region. If your AWS policies do not allow the creation of AWS resources in the us-east-1 region, you might be able to complete the steps to create the source, but the source might not complete the enablement process.
  • You must have the Sources administrator role in the role-based access control (RBAC) system for the Hybrid Cloud Console.

    Note

    Beginning in September 2021, the creation of a source requires the Sources administrator RBAC role. The Red Hat Customer Portal organization administrator (org admin) account role for your organization no longer has sufficient permissions to create sources.

  • The following process requires you to complete steps in both the cloud.redhat.com Add a cloud source wizard and the IAM console. You must keep both applications open while you complete these steps. See the Additional Information links in the IAM console and the IAM documentation if you need help to complete the IAM tasks.

12.2.1. Adding the source type, name, and configuration mode

Select AWS as the source type, name the source, select the configuration mode, and create the application association.

Procedure

  1. In a browser window, go to cloud.redhat.com.
  2. If prompted, enter your Red Hat Customer Portal login credentials. The Hybrid Cloud Console opens.
  3. Click Settings (the gear icon) to show the settings options.
  4. In the navigation menu, click Sources.
  5. Click the Cloud sources tab if this page is not displayed by default. Click Add Source. The Add a cloud source wizard opens.

    Note

    You can also edit an existing source to add an association to the subscriptions service.

  6. Select the Amazon Web Services icon as the source type. Click Next.
  7. Enter a name for the source. This name is not required to be the same as the AWS account name. However, use a name that is easy to distinguish if you have multiple AWS accounts and must create multiple sources for them. Click Next.
  8. Select Manual configuration as the configuration mode. Click Next.
  9. Select RHEL management as the application. This selection provides the high-precision data capabilities of public cloud metering for the subscriptions service. Select other options as appropriate. Click Next.

12.2.2. Creating the IAM policy for public cloud metering

Create a policy for the AWS account. An IAM policy defines permissions for an AWS resource, for example, a role. This policy defines the actions that public cloud metering can perform on the AWS account.
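
For orientation only, an IAM policy document is a JSON object with the general shape shown below. The action names in this sketch are hypothetical placeholders; use the exact policy document that the Add a cloud source wizard provides.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "IllustrativeShapeOnly",
          "Effect": "Allow",
          "Action": ["ec2:DescribeInstances", "ec2:DescribeImages"],
          "Resource": "*"
        }
      ]
    }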

Procedure

  1. Open the IAM console and then sign in to the console.
  2. Create a new IAM policy.
  3. In the Add a cloud source wizard, copy the policy document for public cloud metering.
  4. In the IAM console, paste the copied policy document into the JSON text box, replacing any default policy document information.
  5. Complete the process to create the new policy. Do not close the IAM console.
  6. In the wizard, click Next.

12.2.3. Creating the IAM role for public cloud metering

Create a role for the AWS account. An IAM role is an identity that can perform the actions that are defined by its associated policies. This role defines the actions that public cloud metering can perform on the AWS account.

Procedure

  1. In the IAM console, create a new role.
  2. For the trusted entity type, select Another AWS Account.
  3. In the Add a cloud source wizard, copy the public cloud metering account ID.
  4. In the IAM console, paste the copied public cloud metering account ID into the Account ID field for the role.
  5. In the permissions step of role creation, attach the new policy.
  6. Complete the process to create the new role. Do not close the IAM console.
  7. In the wizard, click Next.

12.2.4. Adding the IAM ARN to the source

Adding the ARN for the role to the source creates the connection between the subscriptions service and your account so that public cloud metering can begin collecting data.

Procedure

  1. In the IAM console, find and click the new role.
  2. In the Summary page for the role, copy the role ARN.
  3. In the Add a cloud source wizard, paste the copied ARN.
  4. Click Next.
  5. Review the details for this source. Click Add to complete the source creation.

12.2.5. Verification steps

During the final step of source creation in the Add a cloud source wizard, the connection to the AWS account is verified and an AWS CloudTrail trail is created for the account. The CloudTrail trail is used to monitor the start and stop events for instances, the raw data that is used to calculate usage data for display in the subscriptions service. If the verification and trail creation are successful, the source creation is successful. This process normally takes only a few seconds.

To find the RHEL images and the associated instances that it is going to track, public cloud metering must then perform an inspection of the AWS account. The length of this inspection process can vary according to many factors, including AWS performance, the number of images in the account, the size and type of each image, the number of instances for an image, and others. As a general rule, the inspection process for an image and its instances can take approximately one hour.

After the inspection process is complete, public cloud metering can begin reporting usage data to the subscriptions service. In most cases, reporting begins within 24 hours. However, because of the timing of source creation, the amount of time required for the inspection process, and the reporting intervals, or heartbeats, for Cloud Services platform tools, in rare cases you might have to wait up to 48 hours for this data to begin appearing in the subscriptions service.

12.3. How public cloud metering interacts with AWS

When you add an Amazon Web Services (AWS) account as a source and connect it to the RHEL management bundle, you are connecting the AWS account to the subscriptions service and the public cloud metering tool.

The public cloud metering data collection tool interacts with AWS to meter specific types of Red Hat Enterprise Linux usage in an AWS account. Public cloud metering communicates with your AWS account to gather high-precision data about the images and instances associated with the account.

To do those actions, public cloud metering must have access to your account and its data. This access is defined by a set of permissions. The public cloud metering tool must be able to assume an identity that has those permissions attached to it to communicate with the account.

You create objects that fulfill these requirements during source creation. You grant the access, permissions, and identity through the creation of an AWS Identity and Access Management (IAM) policy and role for the account. You then enable the connection between the account and the subscriptions service by associating the Amazon Resource Name (ARN) for the new role with the subscriptions service. Public cloud metering can then use the ARN for authentication into your account.

At the conclusion of this source creation, public cloud metering is enabled for the AWS account. It can start the image and instance inspection processes, determine which images and instances will be metered, and begin gathering data.

The permissions in the policy strictly limit the actions that public cloud metering can perform in your account. The allowed actions enable the inspection and metering tasks that public cloud metering performs, resulting in the gathering of usage analytics data for the account. This data is the basis for the data that is displayed in the subscriptions service.

The following information provides additional details about how public cloud metering interacts with your AWS accounts.

Note

The following information about the IAM role, policy, and ARN applies whether you select the account authorization configuration mode or the manual configuration mode during the creation of the AWS source. For the account authorization mode, these objects are created for you, but for the manual mode, you must create these objects.

12.3.1. How public cloud metering uses the IAM policy

During AWS source creation, you create a new policy in IAM. A policy defines which principal (for example, a role) has access to specific AWS resources. It also defines the actions that the principal can perform on those resources.

The newly created public cloud metering policy includes permissions for specific actions in your AWS account. The permissions defined by the policy, in combination with the newly created public cloud metering role, enable public cloud metering to do certain Amazon Elastic Compute Cloud (Amazon EC2) and AWS CloudTrail (CloudTrail) actions. These actions include discovering the current state of images and instances through inspection, copying images when needed to enable the inspection process, and creating and enabling an AWS CloudTrail trail.

The public cloud metering trail is configured to capture all write events in your AWS account. This trail directs its output to an Amazon Simple Storage Service (Amazon S3) bucket that is owned by Red Hat, so this new trail does not result in additional data storage costs for your account. When public cloud metering processes that trail output, it disregards any event that is not related to instance state changes or image tag changes.

The data that is collected from the Amazon EC2 activities and CloudTrail events enables public cloud metering to identify and to meter Red Hat Enterprise Linux usage.

12.3.2. How public cloud metering uses the IAM role

Also during AWS source creation, you create a role in IAM. A role is an AWS identity that is associated with one or more policies to govern the actions that the role can perform.

The newly created public cloud metering policy, which grants permissions for specific actions in your AWS account, is attached to the newly created public cloud metering role. Public cloud metering assumes the role to interact with your account to collect data about various Amazon EC2 activities.
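
For the manual configuration mode, the role, its trust relationship, and the policy attachment can also be created programmatically. The following sketch is illustrative only: the role name, the policy ARN, and the principal in the trust policy are hypothetical placeholders, and the actual values to use are the ones provided during source creation in the Add a cloud source wizard.

    import json

    import boto3

    iam = boto3.client("iam")

    # Hypothetical trust policy. The Principal shown here is a placeholder for
    # the account that the Add a cloud source wizard instructs you to trust.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

    role = iam.create_role(
        RoleName="subscription-watch-role",  # placeholder name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Attach the previously created metering policy to the role. The policy ARN
    # is a placeholder for the policy that you created in your own account.
    iam.attach_role_policy(
        RoleName="subscription-watch-role",
        PolicyArn="arn:aws:iam::111111111111:policy/subscription-watch-policy",
    )

    print(role["Role"]["Arn"])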

12.3.3. How public cloud metering uses the IAM ARN

Lastly, during AWS source creation, you associate the ARN for the role with the subscriptions service.

This association enables the public cloud metering tool to authenticate to AWS. After authentication, public cloud metering can assume the new role and do the actions permitted by the new policy.

12.4. Actions allowed by the AWS Identity and Access Management policy

During the process to create a source for an Amazon Web Services (AWS) account, you create an AWS Identity and Access Management (IAM) policy. This policy includes permissions for public cloud metering to do specific actions in your AWS account.

The following information describes the actions that public cloud metering can perform in your AWS account.

12.4.1. Actions permitted in Amazon EC2

The Amazon Elastic Compute Cloud (Amazon EC2) actions that public cloud metering can perform relate primarily to the inspection of images. One additional action relates to gathering details about existing instances for the metering process.

Table 12.1. Actions for Amazon EC2


DescribeInstances

Enables public cloud metering to get information about the instances that are currently present in your AWS account.

DescribeImages

Enables public cloud metering to get information about the Amazon Machine Images (AMIs) that are used to start your instances.

DescribeSnapshots

Enables public cloud metering to get information about the snapshots for the AMIs.

ModifySnapshotAttribute

Enables public cloud metering to set an attribute that allows the copying of snapshots for inspection.

DescribeSnapshotAttribute

Enables public cloud metering to verify that the attribute that allows the copying of snapshots is set.

CopyImage

Enables public cloud metering to make an intermediate copy of a privately shared third-party image into your account so that public cloud metering can subsequently copy the image into the public cloud metering AWS account for the purposes of inspection.

CreateTags

Enables public cloud metering to tag an intermediate copy of a privately shared third-party image to indicate where it came from.

12.4.2. Actions permitted in CloudTrail

The AWS CloudTrail actions that public cloud metering can perform are primarily related to the metering process.

Table 12.2. Actions for AWS CloudTrail


CreateTrail

Enables public cloud metering to create an AWS CloudTrail trail in your account.

UpdateTrail

Enables public cloud metering to update a CloudTrail trail in your account.

PutEventSelectors

Enables public cloud metering to select the events that CloudTrail processes and logs.

DescribeTrails

Enables public cloud metering to get information about existing CloudTrail trails.

StartLogging

Enables public cloud metering to turn on logging for CloudTrail.

DeleteTrail

Enables public cloud metering to turn off logging and delete the CloudTrail trail when the source is deleted or when the subscriptions service is removed from its association with the source.
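
As a rough illustration of how the permissions in the two preceding tables fit together, the following sketch creates a policy document that contains the same Amazon EC2 and CloudTrail actions. This is not the exact policy content that Red Hat provides during source creation; the policy name and the broad Resource value are assumptions made for the example, so use the policy content that is supplied in the Add a cloud source wizard for the real configuration.

    import json

    import boto3

    iam = boto3.client("iam")

    # Policy document sketch containing the actions listed in Table 12.1 and
    # Table 12.2. The Resource value and the policy name are assumptions.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:DescribeImages",
                    "ec2:DescribeSnapshots",
                    "ec2:ModifySnapshotAttribute",
                    "ec2:DescribeSnapshotAttribute",
                    "ec2:CopyImage",
                    "ec2:CreateTags",
                ],
                "Resource": "*",
            },
            {
                "Effect": "Allow",
                "Action": [
                    "cloudtrail:CreateTrail",
                    "cloudtrail:UpdateTrail",
                    "cloudtrail:PutEventSelectors",
                    "cloudtrail:DescribeTrails",
                    "cloudtrail:StartLogging",
                    "cloudtrail:DeleteTrail",
                ],
                "Resource": "*",
            },
        ],
    }

    response = iam.create_policy(
        PolicyName="subscription-watch-policy",  # placeholder name
        PolicyDocument=json.dumps(policy_document),
    )
    print(response["Policy"]["Arn"])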

12.5. What happens during public cloud metering image inspection

After you create a source for an AWS account, public cloud metering inspects the contents of that account. The inspection process first finds each visible instance in the account, ignoring all instances that are in the process of being terminated. The inspection associates each instance with its parent Amazon Machine Image (AMI). The AMI ID for an image is saved for future instance identification.

After locating the image for a visible instance, the inspection process determines whether the image is a RHEL image. The inspection also determines whether it is appropriate to report the usage data for the instances of an image to the subscriptions service.

The amount of inspection required to determine if the image is a RHEL image and whether it is appropriate to report its instance usage data varies according to the type of image. For some types of images, a simple metadata inspection is enough to find known markers that identify it as a RHEL image and identify its origin. For other types of images, where these markers are not present, a deeper inspection of the file system is required for image identification.

The following information explains common types of images, how they are inspected, and whether usage data is reported for the instances:

Images that are ignored
Images with operating systems that are not RHEL are not relevant to the subscriptions service. Images that are encrypted or are marked as non-copyable cannot be fully inspected to discover the operating system metadata or the running instances. These images are ignored.
AWS Marketplace images

Amazon is an authorized reseller of Red Hat cloud platform products. The RHEL images available in AWS Marketplace might be offered directly by Amazon or by trusted third-party resellers. These RHEL images are inspected by public cloud metering to locate metadata that identifies them as AWS Marketplace images. However, usage data for the associated instances is not reported in the subscriptions service because the terms of use for these images, including any usage tracking or billing, are managed by Amazon.

Note

For some images that are offered in AWS Marketplace, the metadata inspection is not sufficient. For example, for copies of shared images, image metadata shows the owner as the user who made the copy. Such images are subject to the file system inspection process to discover more information about the images.

Red Hat Cloud Access

The Red Hat Cloud Access program enables you to use certain Red Hat product subscriptions on certified public cloud providers. Cloud Access images contain metadata that public cloud metering can use to bypass file system inspection of those images. The instance usage data that is associated with these Cloud Access images is reported in the subscriptions service.

Note

For some Cloud Access images, the metadata inspection is not sufficient. For example, for copies of shared images, image metadata shows the owner as the user who made the copy. Such images are subject to the file system inspection process to discover more information about the images.

Other images

For images that are not obtained directly from AWS Marketplace or Cloud Access but are obtained through other sources, the images are inspected and the instance usage data is reported through public cloud metering. These images could be copies of AWS Marketplace, AWS Community, or Cloud Access shared images, or they could be images obtained through some other source.

These types of images might contain markers that identify them as RHEL images, but the metadata inspection alone might not be sufficient for image identification. For example, for a copy of a shared image, the owner metadata changes to the entity that made the copy, so owner data cannot be used to help identify the image. Therefore, a deeper inspection of the file system is needed to discover the markers for image identification.

The file system inspection process includes mounting the image into a running Red Hat instance and looking for markers that, among other data, show that the image is a RHEL image. As the phases of this file system inspection process are completed, artifacts such as image copies, snapshots, or volumes are deleted from the Red Hat instance.

For all images, regardless of type, the Amazon Machine Image (AMI) ID is retained to match instances to the correct image. When an instance is started, it is either matched to its parent image, or, if that image AMI ID is not found, the inspection process runs on that image to identify it and determine whether usage data is tracked for its instances.

12.5.1. Manually tagging AMIs as RHEL

The inspection process is optimized for more commonly used file systems that might be present in the AMI. For less commonly used file systems, RHEL cannot always be found during inspection. To work around this problem, you can manually tag the AMI as RHEL instead of using the inspection process to find RHEL.

When an AMI is tagged as RHEL and this tag is found during the initial steps of inspection, the remainder of the inspection process is skipped. The instances for that tagged image will be tracked by public cloud metering.

It is important to remember that not all AMIs that use less common file systems need to be tagged as RHEL. For example, a swap file system would not be used to run instances, so an AMI that has RHEL only in the swap file system would not need to be tagged. Current testing of the inspection process has shown that Oracle ZFS is an example of a file system where RHEL is more difficult to find. For these types of file systems, tagging AMIs as RHEL bypasses inspection while also ensuring that the instances will be tracked by public cloud metering.

Note

Previously, the Logical Volume Manager (LVM) file system was listed as a file system where AMIs needed to be tagged as RHEL to bypass inspection. The LVM file system is now a supported file system for RHEL image inspection as of October 2021. Those RHEL based AMIs no longer need to be tagged. No action is needed on AMIs that were previously manually tagged as RHEL.

To add and apply a custom tag for RHEL:

  1. From the AWS Management Console, navigate to the Tag Editor.
  2. Use Find a Resource to find AMI as a resource type.
  3. Add a tag, and enter the following value in both the Tag key and Tag value fields:

    cloudigrade-rhel-present
  4. Navigate to the AMI resources, and then select the AMI to which you want to apply the custom RHEL tag.
  5. Repeat these steps for each AMI that is using a less common file system where RHEL is present in any partition in the AMI.
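
If you have many AMIs to tag, the same custom tag can also be applied with a short script instead of the Tag Editor. The following sketch is illustrative only; the AMI ID is a placeholder, and the script assumes boto3 credentials for the account that owns the AMIs.

    import boto3

    ec2 = boto3.client("ec2")

    # Apply the custom RHEL tag to a single AMI. The AMI ID is a placeholder,
    # and the documented value is used for both the tag key and the tag value.
    ec2.create_tags(
        Resources=["ami-0123456789abcdef0"],
        Tags=[{"Key": "cloudigrade-rhel-present", "Value": "cloudigrade-rhel-present"}],
    )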

Part IV. Activating and opening the subscriptions service

After you complete the steps to set up the environment for the subscriptions service, you can go to cloud.redhat.com to request the subscriptions service activation. After activation and the initial data collection cycle, you can open the subscriptions service and begin viewing usage data.

Do these steps

  1. To find out if the subscriptions service activation is needed, see the following information:

  2. To log in to cloud.redhat.com and activate the subscriptions service, see the following information:

  3. To log in to cloud.redhat.com and open the subscriptions service after activation, see the following information:

  4. If you cannot activate or log in to the subscriptions service, see the following information:

Chapter 13. Determining whether manual activation of the subscriptions service is necessary

The subscriptions service must be activated to begin tracking usage for the Red Hat account for your organization. The activation process can be automatic or manual.

Procedure

Review the following tasks that activate the subscriptions service automatically. If someone in your organization has completed one or more of these tasks, manual activation of the subscriptions service is not needed.

  • Purchasing a pay-as-you-go On-Demand subscription for Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated through Red Hat Marketplace. As the pay-as-you-go clusters begin reporting usage through OpenShift Cluster Manager and the monitoring stack, the subscriptions service activates automatically for the organization.
  • Purchasing a pay-as-you-go On-Demand subscription for Red Hat OpenShift Streams for Apache Kafka through Red Hat Marketplace. As the Red Hat OpenShift Streams for Apache Kafka instances begin reporting usage through the monitoring stack, the subscriptions service activates automatically for the organization.
  • Creating an Amazon Web Services source through the sources application in the Hybrid Cloud Console with the RHEL management bundle selected. The process of creating the source also activates the subscriptions service. In addition, this process activates the public cloud metering tool for the subscriptions service, a tool that enables high-precision data collection for your RHEL based Amazon Web Services instances.
  • Creating a Microsoft Azure source through the sources application in the Hybrid Cloud Console with the RHEL management bundle selected. The process of creating the source also activates the subscriptions service.

These tasks, especially purchasing tasks, are frequently performed by a user who has the organization administrator (org admin) role in the Red Hat organization. The source creation tasks must be performed by a user with the Sources administrator role in the role-based access control (RBAC) system for the Hybrid Cloud Console. Beginning in September 2021, the Red Hat Customer Portal organization administrator (org admin) account role for an organization no longer has sufficient permissions to create sources.

Chapter 14. Activating the subscriptions service

If the subscriptions service is not activated by one of the tasks that trigger automatic activation, you must activate it manually. Tasks that trigger automatic activation are purchasing an On-Demand subscription through Red Hat Marketplace and creating an Amazon Web Services or Microsoft Azure source that includes the RHEL management bundle through the sources application in the Hybrid Cloud Console.

If manual activation is needed, the subscriptions service must be activated by a user who has access to the Red Hat account and organization through a Red Hat Customer Portal login. This login does not need to be a Red Hat Customer Portal organization administrator (org admin). However, that user must have the Subscriptions administrator role or the Subscriptions user role in the user access role-based access control (RBAC) system for cloud.redhat.com.

Note

If a Red Hat Customer Portal login is associated with an organization that does not have an account relationship with Red Hat, then the subscriptions service cannot be activated.

When the subscriptions service is activated, the Cloud Services platform tools begin analyzing and processing data from the data collection tools for display in the subscriptions service.

Note

The following procedure guides you through the steps to activate the subscriptions service from cloud.redhat.com. If the subscriptions service is not already activated, you can also access the activation page at the conclusion of the subscriptions service tour or from an option on the Subscription Central page.

Procedure

  1. In a browser window, go to cloud.redhat.com.
  2. If prompted, enter your Red Hat Customer Portal login credentials.
  3. In the Hybrid Cloud Console navigation menu, click either Red Hat Enterprise Linux or OpenShift.
  4. Expand Subscriptions. Then click one of the following options, depending on the product name that you clicked in the previous step.

    • For Red Hat Enterprise Linux, click All RHEL.
    • For OpenShift, click Container Platform.
  5. Complete one of the following steps, depending on the status of the subscriptions service activation:

    • If the subscriptions service is not yet active for the account, the activation page displays. Click Activate Subscriptions.
    • If the subscriptions service is activated but not yet ready to display data, the subscriptions service application opens, but it displays an empty graph. Try accessing the subscriptions service later, typically the next day.
    • If the subscriptions service is activated and the initial data processing is complete, the subscriptions service application opens and displays data on the graph. You can begin using the subscriptions service to view data about subscription usage and capacity for the account.

Verification steps

Data processing for the initial display of the subscriptions service can take up to 24 hours. Until data for the account is ready, only an empty graph will display.

Chapter 15. Logging in to the subscriptions service

You access the subscriptions service from the Hybrid Cloud Console after logging in with your Red Hat Customer Portal credentials.

Procedure

  1. In a browser window, go to cloud.redhat.com.
  2. If prompted, enter your Red Hat Customer Portal login credentials.
  3. In the Hybrid Cloud Console navigation menu, click either Red Hat Enterprise Linux or OpenShift.
  4. Expand Subscriptions. Then click one of the following options, depending on the product name that you clicked in the previous step.

    • For Red Hat Enterprise Linux, click All RHEL or click one of the specific architectures to view more detailed information.
    • For OpenShift, click Container Platform or Dedicated (On-Demand).
  5. If the subscriptions service is activated and the initial data processing is complete, the subscriptions service opens and displays data on the graph. You can begin using the subscriptions service to view data about subscription usage and capacity for the account.

    Note

    If the subscriptions service opens but displays an empty graph, then the subscriptions service is activated but the initial data processing is not complete. Try accessing the subscriptions service later, typically the next day.

Chapter 16. Verifying access to the subscriptions service

User access to cloud.redhat.com services, including the subscriptions service, is controlled through a role-based access control (RBAC) system. User management capabilities for this RBAC system are granted to the organization administrators (org admins) for an organization, as configured through access.redhat.com. Org admins then manage the cloud.redhat.com RBAC groups, roles, and permissions for the other members in the organization. This management can include the assignment of the User Access administrator role to additional members in the organization. The org admins and user access administrators can manage user access by using the Settings > User access option at cloud.redhat.com.

The predefined role Subscriptions user controls the ability to activate and access the subscriptions service. By default, every user in the organization has this role. However, if your org admin has made changes to user access roles and groups, you might not be able to access the subscriptions service.

Note

Beginning in September 2021, the RBAC roles for the subscriptions service changed. The former Subscription Watch administrator role was renamed to the Subscriptions administrator role. This role contains every available permission for the subscriptions service. The Subscriptions user role, a new role with a subset of the permissions in the Subscriptions administrator role, now exists for users in the organization who do not require all the permissions for the subscriptions service. An example of this type of user is one who only needs to view report data.

After this change to the subscriptions service user access roles, by default all users for organizations that activate the service, and new users for organizations that currently use the service, are assigned the Subscriptions user role. However, the default behavior for role assignments is affected by how an organization is using RBAC groups to manage user access. If custom groups are in use instead of the Default access group, the org admin or another user with the User Access administrator RBAC role must manually update these groups to contain the new roles and manage any default assignment of them to the users in the organization.

Procedure

  1. If you cannot activate or access the subscriptions service, contact your organization administrator. Your org admin can provide information about the status of the subscriptions service for your organization.

Additional resources

Part V. Viewing and understanding the subscriptions service data

After you set up the environment for the subscriptions service, which includes setting up data collection tools or other data sources and completing any additional required subscriptions service activation steps, and after the initial data ingestion, analysis, and processing is complete (usually no longer than 24 hours), you can begin viewing subscription usage and capacity data in the subscriptions service.

Learn more

Chapter 17. How does the subscriptions service show my subscription data?

The subscriptions service shows subscription data for Red Hat offerings such as software products or product sets, organized by the Red Hat software portfolio options in the Hybrid Cloud Console navigation menu. Currently, the subscriptions service shows data for the Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat Cloud Services software portfolios.

Note

The Red Hat Cloud Services portfolio page is currently represented as the Application Services navigation option on the Hybrid Cloud Console home page.

For each software portfolio, the Subscriptions menu shows options for navigating to the subscriptions service product pages for the available product architectures, products, or product sets within the selected portfolio. The Subscriptions menu might also contain options for viewing other subscription-related data or functions that are not part of the subscriptions service.

Each product page for the subscriptions service offers multiple views. These views enable you to explore different aspects about your subscriptions for that product. When combined, the data from these views can help you recognize and mitigate problems or trends with excess subscription usage, organize subscription allocation across all of your resources, and improve decision-making for future purchasing and renewals.

For all of these activities, and for other questions about your subscription usage, the members of your Red Hat account team can provide expertise, guidance, and additional resources. Their assistance can add context to the account data that is reported in the subscriptions service and can help you understand and comply with your responsibilities as a customer. For more information, see Your responsibilities.

17.1. How to use the subscription data in the views

The subscriptions service views can be grouped generally into the graph view and the table view.

The graph view is a visual representation of the subscription usage and capacity for your organization, where your organization is also a Red Hat account. This view helps you track usage trends and determine utilization, which is the percentage of deployed software when measured against your total subscriptions.

The table view can contain one or more tables that provide more details about the general data in the graph view. The current systems table provides details about subscription usage on individual components of your environment, for example, systems in your inventory or clusters in your cloud infrastructure or restricted network. The current subscriptions table provides details about individual subscriptions in your account. The table view helps you to find where Red Hat software is deployed in your environment, to understand how individual subscriptions contribute to your overall capacity for usage of similar types of subscriptions, to resolve questions you might have about subscription usage, and to refine plans for future deployments.

Note

For some product pages, the table view data is derived from data in the Cloud Services platform inventory service. User access to subscriptions, inventory, and other services is controlled independently by a role-based access control (RBAC) system for the Cloud Services platform tools, where individual users belong to groups and groups are associated with roles. More specifically, user access to the inventory service is controlled through the Inventory administrator role.

When the Inventory administrator RBAC role is enabled for the group or groups for your organization, information in the current systems table for the subscriptions service can display as links, where you can open a more detailed record in the inventory application for the listed systems. Otherwise, current systems table information displays as nonlinked information. For more information about RBAC usage in your organization, contact the organization administrator for your account.

The usage and utilization graph view

The graph view shows you your total subscription usage and capacity over time in a graph form. It provides perspective on your account’s subscription threshold, current subscription utilization, and remaining subscription capacity, along with the historical trend of your software usage. The graph view might contain a single graph or multiple graphs, depending upon how subscription usage for a product is measured.

The usage and capacity calculations that appear in the graph are based on data snapshots that are provided periodically as the Hybrid Cloud Console processing tools analyze information from the various data collection tools and data sources. The data snapshots for Annual subscriptions generally update once every 24 hours. The data snapshots for On-Demand subscriptions can be more frequent, updating multiple times per day.

  • Usage is the measurement of the consumption of Red Hat products installed on physical hardware or its equivalent. Usage is measured with a unit of measurement that is defined within the terms of a subscription.

    Units of measurement differ according to the type of product and the type of subscription. The terms of Annual subscriptions determine usage as the physical hardware that is consumed, such as sockets or cores, or equivalent physical hardware that is consumed, such as a cloud platform instance that is equal to a socket. The terms of On-Demand subscriptions, such as pay-as-you-go subscriptions, can determine usage by a combination of metrics that measure consumed resources. One type of these metrics might be a compound unit, or derived unit. Examples of derived units can be a certain amount of physical hardware that is consumed during a specific period of time, such as core hours, or the availability of a Red Hat service instance, such as instance hours.

    Usage is represented by a line or area graph, with different types of usage, for example, Red Hat Enterprise Linux physical, virtual, and public cloud usage, represented by different colors.

    For Annual subscriptions, usage fluctuates over time as you install and uninstall the software contained in your subscriptions. For On-Demand subscriptions, usage fluctuates as you consume more or less of the resources that are measured by the terms of that subscription.

  • Capacity is the upper limit of usage for a subscription, expressed in the unit of measurement and then summed for similar subscriptions across all of the contracts in your account. Similar subscriptions can be all products in a certain product portfolio, such as all RHEL subscriptions.

    The sum of capacity for all of your active subscriptions, the maximum capacity, is also known as the subscription threshold. This value is represented by a dashed line in the usage and utilization graph for a product. Two primary reasons could prevent a subscription threshold from appearing in the graph. If a product page includes a subscription that is sold with unlimited capacity as part of its sales terms, the subscription threshold is not shown. Also, for On-Demand subscriptions or similar subscriptions that are billed for monthly usage, no capacity is set, so a subscription threshold is not shown. If filter selections remove unlimited subscriptions from a view, then the subscription threshold would appear for those filtered results.

    The capacity of an individual subscription does not change over time. The subscription threshold fluctuates over time as new subscriptions are activated and old subscriptions expire, affecting the maximum capacity.

  • Utilization is the percentage of the maximum capacity, as indicated by the subscription threshold, that is exhausted through the deployment and usage of Red Hat software in your account. In simple terms, utilization is the usage divided by the maximum capacity. If capacity is not applicable to a certain type of subscription present in the account, such as an unlimited subscription, utilization as a percentage of the maximum capacity also does not apply.

    Subscription utilization fluctuates over time due to the interaction of the changes to the usage and the subscription threshold.
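
As a simple illustration of how these three values relate, the following sketch computes utilization from a usage total and a subscription threshold. The socket counts are taken from the example graph described later in this section and are for illustration only.

    def utilization_percent(usage, subscription_threshold):
        """Return utilization as a percentage of the maximum capacity."""
        return (usage / subscription_threshold) * 100

    # Example values only: 67 subscribed sockets in use against a subscription
    # threshold of 80 sockets.
    print(f"{utilization_percent(67, 80):.1f}%")  # 83.8%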

Although the graph shows trends over a selected time interval, you can also view more specific information for the graph. For example, if the selected time interval is Weekly, you can hover over the graph near a date to see more specific data for a particular week.

You can also use the available filters, which can vary by product, to change the usage data that displays in the graph. For example, you can filter by the time interval, the unit of measurement, or by the subscription attribute filters such as service level agreement (SLA), as applicable.

The graph view: example graph

The following image shows an example RHEL usage and utilization graph in the subscriptions service. For other product pages, the graph view will contain differences in design, depending on how those products are sold and measured.

For the graph, the time filter is set to a daily view, and the graph displays a month of RHEL usage.

Figure 17.1. Usage and utilization graph example

Usage and utilization graph example for a month of data
  1. A tooltip displays when you hover over a point in the graph. In this example, the tooltip displays more information about the subscription usage and the subscription threshold for a specific day, April 6. For this day, physical RHEL is consuming 20 sockets, virtualized RHEL is consuming 25 sockets, and public cloud RHEL is consuming 22 sockets, with a total of 67 sockets for all usage types. This usage total is less than the subscription threshold of 80 sockets.
  2. The maximum capacity of RHEL usage, based on a unit of measurement of sockets, displays as the dashed subscription threshold line. This example shows an increase in the subscription threshold sometime between April 11 and April 16. The increase in the available capacity in this Red Hat account is due to the activation of additional RHEL subscriptions in the account.
  3. The RHEL subscription usage, based on a unit of measurement of sockets, displays as three different colors for RHEL installed in physical, virtual, and public cloud environments. The example shows how all of these types of usage fluctuate over time. Usage fluctuates according to subscription activity, such as installation and uninstallation on physical systems or launch and termination of instances in the public cloud.

The table view: current systems table

The current systems table shows you details about usage on individual components in your environment, taken from the most recent daily snapshot of the usage data. This table provides information that can help you correlate the aggregated usage totals in the graph with the current software deployments on individual components across your organization. The components and data shown in the table vary by product because of the different ways that usage is tracked for products, by socket count, core count, core hours, and so on. Also, a component that displays as a "system" in the table can be a physical or virtual machine, or it can be another object such as a cluster or instance. Therefore, generic references to this table as the current systems table are for convenience only.

Note

For some products such as RHEL, the data in the current systems table view contains aspects of the data that is available from the Hybrid Cloud Console inventory application, with the following differences:

  • The inventory application shows significantly more system data. The current systems table view is a small subset of this data.
  • Data in the inventory application can be more current because of the methods that are used to update the data. The current systems table view in subscriptions is based on a daily snapshot, so that data could be up to 24 hours old.
  • Consumption of sockets or cores in the inventory application is represented as actual consumption. Usage in subscriptions is represented as normalized consumption, bound by the terms of the subscription. For example, usage of a physical RHEL subscription is measured by socket pair, so a socket count for that type of system is always rounded to the next higher even number.

The information in the current systems table generally shows the name of the system, the type of the system, the usage total for that system according to the unit of measurement, and the date that the system was last seen. However, the available columns in the table might differ according to the types of data that are relevant for that product. Columns in the table are sortable.

For the Name column that contains the name of the system, the system is the machine, either physical or virtualized, on which the product or product set is deployed. A system can also be a different component, such as a Red Hat OpenShift cluster or an instance of a Red Hat Cloud Services service. The system is usually represented by either its display name or its universally unique ID (UUID). For multi-guest systems such as hypervisors, you can expand the system to see more information about individual guests. For some objects in the Name column, you can also click the system name to open the full system record in a different resource, for example, in the Hybrid Cloud Console inventory application.

Note

Currently for the display of Red Hat OpenShift Container Platform and Red Hat OpenShift Dedicated pay-as-you-go On-Demand subscription data, the Name column uses the inventory UUID. This ID is not the same as the cluster ID that is used for the cluster in Red Hat OpenShift Cluster Manager. In addition, the inventory UUID in the Name column does not provide a link to the cluster record in Red Hat OpenShift Cluster Manager. However, in both the subscriptions service and Red Hat OpenShift Cluster Manager you can use the available search filters to cross-reference these IDs.

For the Type column that contains the type of the system, the type is the infrastructure type on which the product or product set is deployed. A system can be a physical host, hypervisor, individual virtual machine, or other form of virtual deployment such as a public cloud instance. The information in this column might not be applicable to all products, so for some products the Type column might not appear.

For the column where the usage total for that system is displayed, the column label will vary according to how product usage is measured. For subscriptions where usage is measured with multiple metrics, multiple columns will display. The usage is the actual or equivalent amount of physical hardware that the product or product set is consuming on that system. Usage is counted according to the applicable unit of measurement, which in turn is determined by the terms of the subscription. For example, for a subscription that is sold by sockets, the usage total is the number of sockets, also known as subscribed sockets, that are consumed by a system. Other subscriptions such as On-Demand subscriptions are sold with different terms, such as by core hours, or might include multiple metrics in the terms, such as data transfer, data storage, and instance hours.

Note

The data for the usage total is based on the update, or heartbeat, cycles for the subscriptions service. For Annual subscriptions, the value that displays for the usage total is based on the 24-hour snapshot of usage for the most recently tallied day. For On-Demand subscriptions, the value is the most recently tallied data that is available to the subscriptions service, data that could be from the current day.

For the Last seen column that contains a date, that last seen date is the date that the system was last found by the Cloud Services platform tools, such as the inventory service or Red Hat OpenShift Cluster Manager and other tools in the monitoring stack. As part of the underlying tasks that subscriptions and other tools perform to calculate usage, the inventory service and the monitoring stack help to identify and deduplicate system data that is gathered by the various data collection tools.

As with the usage and utilization graph, you can use the filters to change the data that displays in the current systems table. However, a change to the time interval, such as changing from days to weeks, has no effect on the current systems table. The data displayed is from the most recent snapshot, so it is usually no more than 24 hours old.

You can also search the current systems table for a specific system name or a group of similarly named systems by using the search field. Exact and partial strings are accepted, but common wildcard characters are treated as literal characters, not special character wildcards.

The table view: current subscriptions table

The current subscriptions table shows you details about your currently active subscriptions, taken from the most recent daily snapshot of this data. This table contains information that can help you understand the maximum capacity for your usage of that product within your account. The maximum capacity is displayed as the subscription threshold in the usage and utilization graph view.

The table shows the capacity for each subscription in the unit of measurement by which that subscription is sold, for example, sockets or cores. The sum of the capacity for all rows equals the subscription threshold.

By using the data in the current subscriptions table, you can more fully understand how individual subscriptions are contributing to the subscription threshold. This information can help you plan for any future purchasing decisions, such as adjusting the amount of existing subscriptions or purchasing different subscriptions that are more suited to your usage profile. You can also use the information in the table to anticipate upcoming events that could affect your business activities in relation to purchasing and renewals, such as contract expiration.

Note

Currently, On-Demand subscriptions such as Red Hat OpenShift Dedicated On-Demand are restricted to one subscription per account. Therefore, the current subscriptions table does not display for these types of products.

The information in the current subscriptions table generally shows the name of the product subscription, the service level agreement (SLA) for the subscription, the quantity of the subscription, the capacity of that subscription according to the unit of measurement, and the next renewal event for the subscription. All columns in the table are sortable.

The Product column lists unique product subscriptions that are currently active in your account. Future-dated subscriptions that are not yet active do not appear in the table. Expired subscriptions that are not renewed are removed from the table.

Subscriptions that share the same stock-keeping unit (SKU) appear on a single row. Subscriptions that can be grouped on the same row share these characteristics:

  • Subscriptions with the same SKU, whether purchased in the same or different contracts or purchased at the same or different times.
  • Subscriptions with the same SKU but with other minor differences to attributes, such as differences in quantity, that do not result in the creation of a new SKU.

In the Product column, the same subscription description might display multiple times. The text that displays for a subscription is derived from the SKU description text, and in some cases this text might be identical for different SKUs. For example, two subscriptions could differ in one major attribute such as the SLA, resulting in a different SKU for the changed SLA.

The Service level column contains the service level agreement (SLA) for a subscription, as defined within the terms of the subscription. Examples include Premium, Standard, or Self-Support. This information can sometimes help you distinguish between two subscriptions in the Product column that have identical descriptions.

The Quantity column contains the number of active subscriptions for a SKU. For example, a single table row might contain multiples of the same SKU purchased in the same transaction. It might also contain multiples of the same SKU purchased in different transactions.

For the column where the capacity for a subscription is displayed, the column label will vary according to how product usage is measured. For example, RHEL is sold in socket pairs, so the capacity column for RHEL has the label Sockets. This capacity column measures the maximum amount of available usage for the subscriptions in each table row. Usage is counted according to the applicable unit of measurement, which in turn is determined by the terms of the subscription. When summed, the total for all rows in the table represents the maximum possible capacity of usage for all subscriptions of that product. This value is also the subscription threshold in the graph view.

Note

When a row includes a subscription that is sold with unlimited capacity, the capacity value for that row will show the infinity symbol to represent the unlimited capacity.

The Next renewal column lists the next pending renewal event for any subscription that is in that row.

17.2. Measurement of usage and capacity

Currently, the subscriptions service tracks certain types of Red Hat Enterprise Linux and Red Hat OpenShift products. The data that is displayed for usage and capacity varies by product.

Overall usage and capacity trends display on the usage and utilization graph. The information in the current systems table provides additional detail about the most recent day of data from the graph.

17.2.1. Measurement of usage and capacity for Red Hat Enterprise Linux

Red Hat Enterprise Linux

For Red Hat Enterprise Linux, measurement of usage is based on the consumption of sockets, according to the terms of your subscription.

Usage: RHEL

Usage is measured in CPU sockets. Data is aggregated across all supported architectures and is also available by architecture, including the RHEL variants for x86. You can view aggregated or architecture-specific data by selecting from the Subscriptions options in the navigation menu.

The usage data in the graph is divided into three sections, based on RHEL on physical systems, virtualized systems, or public cloud systems.

Capacity: RHEL
To measure capacity, the socket contribution of each RHEL subscription is added to a total that encompasses the inventory’s CPU architecture, including the RHEL variants for x86.

For some Red Hat products, RHEL is included with and is installed to support that product. For example, RHEL is included with Red Hat Satellite. Bundled RHEL is not tracked or counted against total usage or capacity.

17.2.2. Measurement of usage and capacity for Red Hat OpenShift

For Red Hat OpenShift, measurement of usage is based upon the size of clusters. The unit of measurement that is used to measure cluster size depends upon the subscription terms and type of subscription for the product.

The cluster size is the sum of the size of all the subscribed nodes. The subscribed nodes are the compute or worker nodes in the versions of Red Hat OpenShift where this fact can be obtained. For each of the subscribed nodes, the kernel is queried for the number of sockets, the number of cores on each socket, and the number of threads supported by each core. Then the total number of threads is divided by the threads per core to determine the number of cores on the node (physical or virtual machine).
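
The following sketch shows that cores calculation with invented values; it is only an arithmetic illustration of the description above.

    # Invented example: a node reports 2 sockets, 8 cores per socket, and
    # 2 threads per core.
    sockets = 2
    cores_per_socket = 8
    threads_per_core = 2

    total_threads = sockets * cores_per_socket * threads_per_core  # 32 threads
    node_cores = total_threads // threads_per_core  # 16 cores counted for the node
    print(node_cores)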

Note

For Red Hat OpenShift version 4.1 and later (including the 4.7 versions of Red Hat OpenShift Container Platform and OpenShift Dedicated for On-Demand subscriptions), the subscriptions service is able to distinguish between control plane and compute nodes, also commonly referred to as infrastructure and worker nodes. You might be familiar with other names for the types of control plane nodes that were used in different releases of Red Hat OpenShift, such as master, router, registry, metrics, logging, etcd, and similar names. In the aggregation of usage data based on cluster size for these versions of Red Hat OpenShift, control plane nodes are ignored. However, for OpenShift Dedicated On-Demand, control plane usage is tracked as instance hours, based upon the availability of clusters.

The subscriptions service is not able to make this same distinction for earlier versions of Red Hat OpenShift Container Platform, so data for infrastructure nodes is displayed and counted along with the worker node usage. Analysis of cluster data indicates that approximately 15% of data displayed for earlier versions of Red Hat OpenShift Container Platform is infrastructure node overhead. Therefore, if your subscription profile includes Red Hat OpenShift Container Platform version 3, it is possible that you can exceed your Red Hat OpenShift subscription threshold by up to 15% but still be in compliance with your subscriptions.

For additional details about improvements to Red Hat OpenShift usage tracking in the subscriptions service, see the following information: How do vCPUs, hyper-threading, and subscription structure affect the subscriptions service usage data?

After the cluster size information is obtained, usage and capacity information is calculated according to the product and type of subscription. For more information, see the following descriptions of each product and subscription type.

Red Hat OpenShift Container Platform

Usage: Red Hat OpenShift Container Platform with an Annual subscription
Usage of an Annual subscription of Red Hat OpenShift Container Platform is measured in CPU cores or sockets. Data displays as an account-level view that is a sum of usage across active clusters.
Capacity: Red Hat OpenShift Container Platform with an Annual subscription
To measure capacity, the core or socket contribution (as applicable) of each subscription is added to a total for Annual subscriptions.
Usage: Red Hat OpenShift Container Platform with a pay-as-you-go On-Demand subscription

Usage of a pay-as-you-go On-Demand subscription of Red Hat OpenShift Container Platform is measured in core hours. A core hour is a unit of measurement for computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. To obtain usage in core hours, the subscriptions service uses numerical integration, also commonly known as an "area under the curve" calculation.

The core hour based usage data for all clusters is summed and then displays as daily usage in the usage and utilization graph. Because of the monthly billing cycle for a pay-as-you-go subscription, the default time interval for the graph is one month, the current month. A cumulative core hours used value also displays for the most recent snapshot of the usage for that month if there is accumulated usage to display.
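
A simplified sketch of the "area under the curve" idea follows. It assumes hourly samples of the core count in use on one cluster; the real meter samples more frequently and calculates at millicore precision, so this is only an illustration of the summation, not the production calculation.

    # Invented hourly samples of cores in use on one cluster during one day.
    hourly_core_samples = [4, 4, 4, 8, 8, 8, 8, 4]

    # Each sample represents one hour of activity on that many cores, so the
    # daily core hour total is the sum of the samples: the area under the curve
    # when usage is plotted against time in one-hour steps.
    daily_core_hours = sum(hourly_core_samples)
    print(daily_core_hours)  # 48 core hours for this invented day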

Note

The core hour usage data for the account and for individual clusters that is shown in the subscriptions service interface is rounded to two decimal places for display purposes. The usage values that are displayed in different locations in the interface might show slight discrepancies due to this rounding. However, the data that is used for the subscriptions service calculations and that is provided to the Red Hat Marketplace billing service is at the millicore level, rounded to 6 decimal places, and is not taken from the displayed values.

Capacity: Red Hat OpenShift Container Platform with a pay-as-you-go On-Demand subscription
Capacity is not an applicable metric for a pay-as-you-go On-Demand subscription. So capacity is not tracked, nor is a subscription threshold line shown, for this type of subscription.

Red Hat OpenShift Dedicated

Usage: Red Hat OpenShift Dedicated with a pay-as-you-go On-Demand subscription

Usage of a pay-as-you-go On-Demand subscription of Red Hat OpenShift Dedicated is measured with two units of measurement, core hours and instance hours. Therefore, the usage and utilization graph includes a dual y-axis, also known as a primary y-axis and secondary y-axis.

  • A core hour is a unit of measurement for computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. For Red Hat OpenShift Dedicated On-Demand, core hours measure the workload usage on the compute machines.
  • An instance hour is a unit of measurement for the availability of a Red Hat service instance, during which it can accept and execute customer workloads. For Red Hat OpenShift Dedicated On-Demand, instance hours use your cluster availability data to measure the control plane usage on the control plane machines (in older versions of Red Hat OpenShift, the master machines). This data is used to calculate the control plane cost, also known as the cluster fee, that is included in your Red Hat Marketplace invoice.

To obtain usage in core hours and instance hours, the subscriptions service uses numerical integration, also commonly known as an "area under the curve" calculation. This process samples usage multiple times per hour, normalizes the samples for a specific time interval, aggregates the normalized samples into a daily total, and then sums each day into a total that is determined by the billing terms of the subscription. The usage data for all clusters is summed and displayed in the usage and utilization graph based on the selected time filter. The core hour usage is plotted with the primary y-axis, and the instance hour usage is plotted with the secondary y-axis. Because of the monthly billing cycle for a pay-as-you-go subscription, the default time interval for the graph is one month, the current month. A cumulative core hours used value also displays for the most recent snapshot of the usage for that month if there is accumulated usage to display.

Note

The core hour and instance hour usage data for the account and for individual clusters that is shown in the subscriptions service interface is rounded to two decimal places for display purposes. The usage values that are displayed in different locations in the interface might show slight discrepancies due to this rounding. However, the data that is used for the subscriptions service calculations and that is provided to the Red Hat Marketplace billing service is at the millicore level, rounded to 6 decimal places, and is not taken from the displayed values.

Capacity: Red Hat OpenShift Dedicated with a pay-as-you-go On-Demand subscription
Capacity is not an applicable metric for a pay-as-you-go On-Demand subscription. So capacity is not tracked, nor is a subscription threshold line shown, for this type of subscription.

17.2.3. Measurement of usage and capacity for Red Hat Cloud Services

For Red Hat Cloud Services, measurement of usage is based on metrics that generally relate to the consumption of computing resources by the platform that powers the service. These resources might include, but are not limited to, metrics concerning CPU, RAM, network traffic, storage volume, and control plane consumption during the availability of each instance of a service. Because these services perform different jobs and consume different resources, an individual service might be measured by a single metric or a combination of these metrics. In addition, those differences in the services can result in different units of measurement being used for the basic metric types.

Red Hat OpenShift Streams for Apache Kafka

Usage: Red Hat OpenShift Streams for Apache Kafka with a pay-as-you-go On-Demand subscription

Usage of a pay-as-you-go On-Demand subscription of Red Hat OpenShift Streams for Apache Kafka is measured with three metrics, data transfer, data storage, and instance hours. The usage is aggregated for display in the subscriptions service according to the selected time filter.

Note

For the following measurements that include binary gigabytes, a binary gigabyte is the equivalent of a gibibyte. A gibibyte (GiB) is equal to 1,073,741,824 bytes, or 1024^3 bytes.

  • The data transfer metric is the total number of bytes of inbound and outbound data that is transferred for all active instances within a single hour, shown in binary gigabytes. The data transfer metric shows the network traffic for the instances of the service. You can think of the data transfer metric as a pure counter, incrementing its value for both inbound and outbound network traffic and summing that count into a total that is determined by the billing terms of the subscription, such as a monthly total for an On-Demand subscription.
  • The data storage metric is the maximum number of bytes stored for each active instance during a single hour, summed into an hourly total for all instances and shown in binary gigabyte hours. The data storage metric shows the amount of data that is stored by the instances of the service. You can think of the data storage metric as a gauge that finds the greatest amount of storage that is consumed by each instance in a single hour, sums each hour into a daily total, and then sums each day into a total that is determined by the billing terms of the subscription.
  • The instance hours metric is the number of active instances within a single hour, where each instance consumes a full hour of service if active at any time within that hour, shown in instance hours. The instance hours metric shows the availability of a Red Hat service instance, during which it can accept and execute workloads. You can think of the instance hours metric as a switch that measures availability during the time when an instance is in “on” mode. While an instance is in "on" mode, it is consuming Red Hat resources on the supporting control plane machines throughout its lifespan. An instance that is deleted is in "off" mode and does not generate data for any of the metrics, including the data storage metric, because all storage volumes are deleted with the instance deletion.
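
As an illustration of the counter, gauge, and switch behaviors described in the preceding list, the following sketch aggregates invented hourly samples for a single instance into the three metrics. It is a simplification of the real metering, which is defined by the billing terms of the subscription.

    # Invented hourly samples for one instance: (bytes transferred during the
    # hour, peak bytes stored during the hour, active at any point in the hour).
    GIB = 1024 ** 3
    samples = [
        (2 * GIB, 10 * GIB, True),
        (1 * GIB, 12 * GIB, True),
        (0, 12 * GIB, True),
    ]

    # Counter: total inbound and outbound traffic, in binary gigabytes.
    data_transfer_gib = sum(transferred for transferred, _, _ in samples) / GIB

    # Gauge: the peak storage for each hour, summed into binary gigabyte hours.
    data_storage_gib_hours = sum(peak for _, peak, _ in samples) / GIB

    # Switch: any activity within an hour counts as a full instance hour.
    instance_hours = sum(1 for _, _, active in samples if active)

    print(data_transfer_gib, data_storage_gib_hours, instance_hours)  # 3.0 34.0 3
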
Capacity: Red Hat OpenShift Streams for Apache Kafka with a pay-as-you-go On-Demand subscription
Capacity is not an applicable metric for a pay-as-you-go On-Demand subscription, so capacity is not tracked and a subscription threshold line is not shown for this type of subscription.

17.3. Units of measurement

The unit of measurement by which product usage is tracked is determined by the terms of the subscription.

17.3.1. Units of measurement for Red Hat Enterprise Linux

Because of the inherent differences between physical, virtual, and public cloud offerings and their relation to hardware, the subscriptions service tracking uses different units of measurement, as follows:

Physical usage

The subscriptions service measures your physical RHEL installations by CPU socket pairs. Each system contributes its installed socket count, rounded upwards to the next even number. The value that displays is the total socket count, including all of the system-level pair rounding.

In the current systems table, on-premise physical hardware and other structures such as a RHEL-based hypervisor can display as physical machines.
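
For illustration, the socket pair rounding described above can be sketched as follows. The per-system socket counts are hypothetical; this is a minimal sketch of the rounding rule, not subscriptions service code.

def round_to_socket_pair(sockets: int) -> int:
    """Round an installed socket count up to the next even number."""
    return sockets + (sockets % 2)

# Hypothetical physical systems with 1, 2, and 3 installed sockets.
systems = [1, 2, 3]

# Each system contributes its rounded count; the displayed value is the total.
total_sockets = sum(round_to_socket_pair(s) for s in systems)
print(total_sockets)  # 2 + 2 + 4 = 8
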

Virtualized usage

The subscriptions service measures your virtualized RHEL installations in two ways. Where host-guest mappings are not used, such as with standard guest subscriptions, each system contributes a single installed socket. Where host-guest mappings are required, such as with virtual data center (VDC) subscriptions or similar virtualized environments, the socket count of the hypervisor host node is counted, by using the same socket pair method that is used with physical RHEL installations.

Virtualized usage for hypervisors and virtual machines is grouped together in the usage and utilization graph, but hypervisor usage is displayed separately from virtual usage in the current systems table. This separation can help you troubleshoot questions about the collection of usage data for virtualized environments. In particular, it can help you determine whether host-guest mapping data is being correctly provided to the subscriptions service through the configuration of virt-who and the Satellite inventory upload plugin. For example, when these tools are correctly configured, virtualized usage is counted as follows:

  • For a RHEL-based hypervisor with RHEL guests, the socket count of the hypervisor is counted twice, with the socket pair method applied. One count, as physical, represents the node’s own copy of RHEL, and one count, as virtualized, represents the usage of the guest systems.
  • For a non-RHEL-based hypervisor with RHEL guests, the socket count of the hypervisor is counted once, as virtualized, with the socket pair method applied.
  • For standalone virtual machines, or for virtual machines with no detectable hypervisor management, each virtual machine is counted as a single socket.
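
The counting rules in the preceding list can be summarized in a short Python sketch. The hypervisor records are hypothetical and the sketch is an illustration of the rules only, not the subscriptions service implementation.

def socket_pairs(sockets: int) -> int:
    """Apply the socket pair method: round up to the next even number."""
    return sockets + (sockets % 2)

def count_virtualized(hypervisors, standalone_vms):
    """Return (physical_sockets, virtualized_sockets) under the rules above.

    hypervisors: list of dicts such as {"sockets": 3, "rhel_based": True}
    standalone_vms: number of RHEL virtual machines with no detectable hypervisor
    """
    physical = 0
    virtualized = 0
    for hv in hypervisors:
        pairs = socket_pairs(hv["sockets"])
        if hv["rhel_based"]:
            physical += pairs       # the node's own copy of RHEL
        virtualized += pairs        # usage of the RHEL guests that it hosts
    virtualized += standalone_vms   # each unmapped guest counts as a single socket
    return physical, virtualized

# Hypothetical environment: one RHEL-based and one non-RHEL-based hypervisor,
# each with 3 sockets, plus 4 standalone RHEL virtual machines.
print(count_virtualized(
    [{"sockets": 3, "rhel_based": True}, {"sockets": 3, "rhel_based": False}],
    standalone_vms=4,
))  # (4, 12)
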
Public cloud usage

The subscriptions service measures public cloud RHEL installations by socket. The measurement of public cloud usage differs depending on whether you are using the high-precision public cloud metering capabilities provided by the subscriptions service.

  • If you are not using public cloud metering, the instances launched from public cloud RHEL images are recognized through Desktop Management Interfaces (DMI) fact-value pairs that are present in the image and instance metadata. The values of the DMI facts identify an instance as running in the cloud infrastructure provided by Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Alibaba Cloud. Each running instance contributes a single socket to the socket count. With this method, the subscriptions service has no way to identify when a single instance runs multiple times per day, so a single instance will be counted as active for the entire day.
  • If you are using public cloud metering (currently available only for AWS instances), the subscriptions service is able to track your RHEL based AWS instances with much higher precision. For each of your AWS accounts, you create a source by using the settings feature for the Hybrid Cloud Console. You provide enough data during source creation for the subscriptions service to track the start and stop events of the instances for an AWS account. Because the subscriptions service has access to these tracking capabilities, a single instance that runs multiple times per day can be identified. So instead of counting instances only, the subscriptions service shifts to count the maximum number of concurrent instances running in each source (AWS account) per day. The daily totals for multiple AWS sources are then compiled at the Red Hat account level.
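
The shift from counting instances to counting maximum concurrency can be sketched as follows. The start and stop events are hypothetical, and the sketch only illustrates the idea; it is not the public cloud metering implementation.

from datetime import datetime

def max_concurrent(events):
    """Given (timestamp, +1|-1) start and stop events for one source (AWS account),
    return the maximum number of instances running at the same time."""
    running = peak = 0
    for _, delta in sorted(events):
        running += delta
        peak = max(peak, running)
    return peak

# Hypothetical day: instance A runs twice, and instance B overlaps with its second run.
events = [
    (datetime(2021, 6, 1, 8, 0), +1),    # instance A starts
    (datetime(2021, 6, 1, 9, 0), -1),    # instance A stops
    (datetime(2021, 6, 1, 10, 0), +1),   # instance A starts again
    (datetime(2021, 6, 1, 10, 30), +1),  # instance B starts
    (datetime(2021, 6, 1, 11, 0), -1),   # instance A stops
    (datetime(2021, 6, 1, 12, 0), -1),   # instance B stops
]

print(max_concurrent(events))  # 2 concurrent instances, even though A ran twice
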

17.3.2. Units of measurement for Red Hat OpenShift

Red Hat OpenShift Container Platform with an Annual subscription

The subscriptions service measures your Red Hat OpenShift usage in units of CPU cores or CPU sockets. For Red Hat OpenShift 4, the counting is aggregated at the cluster level, and for Red Hat OpenShift 3, the counting is aggregated at the node level. Currently, the subscriptions service cannot display a single, mixed-unit view of Red Hat OpenShift usage in environments that include core-based and socket-based clusters within the same account. You must use filtering to view that data in separate views.

You can use a filter to toggle the usage and capacity data between the two units of measurement. If subscription attributes are set on the cluster (through Red Hat OpenShift Cluster Manager for Red Hat OpenShift 4) or on the node (through the command to set the ocm.units value for Red Hat OpenShift 3), then that data can be reported by cores or sockets. If subscription attributes are not set or cannot be set, then the data is included in reports for both core-based and socket-based usage.

Physical usage

The subscriptions service measures your core-based physical Red Hat OpenShift installations by actual core count. Socket-based physical installations are measured by socket pairs, so the count is rounded upwards to the next even number.

In the current systems table, an example of a physical system for Red Hat OpenShift is a Red Hat OpenShift cluster running on bare metal. Another example is a RHEL system reporting as a Red Hat OpenShift 3 cluster node.

Virtual usage

The subscriptions service measures your core-based and socket-based virtual installations by actual core count and actual socket count, respectively.

In the current systems table, an example of a virtual system for Red Hat OpenShift is a cluster installed in environments such as Red Hat OpenStack Platform, Red Hat Virtualization, VMware vSphere, or on public cloud.

Red Hat OpenShift Container Platform and Red Hat OpenShift Dedicated with a pay-as-you-go On-Demand subscription

The subscriptions service measures your pay-as-you-go On-Demand subscription of Red Hat OpenShift Container Platform or Red Hat OpenShift Dedicated usage in core hours. A core hour is a unit of measurement for computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used.

Physical usage
The subscriptions service measures your core-based physical Red Hat OpenShift installations by actual core count. Socket-based physical installations are measured by socket pairs, so the count is rounded upwards to the next even number.
Virtual usage
The subscriptions service measures your core-based and vCPU-based virtual installations by actual core count, with vCPUs rationalized to cores using maximum efficiency. Socket-based virtual installations are measured by socket count as reported by your hypervisor. For best reporting, confirm that your hypervisor is reporting accurate socket counts for your virtual machines.
Control plane usage
For Red Hat OpenShift Dedicated On-Demand only, the subscriptions service also measures your cluster availability by instance hour. For Red Hat OpenShift Dedicated On-Demand, this instance hour calculation of control plane usage is based on a cluster hour unit of measurement.

17.3.3. Units of measurement for Red Hat Cloud Services

Red Hat OpenShift Streams for Apache Kafka with a pay-as-you-go On-Demand subscription

The subscriptions service measures your pay-as-you-go On-Demand subscription of Red Hat OpenShift Streams for Apache Kafka with different units of measurement for the three different metrics that are used.

Note

For the following measurements that include binary gigabytes, a binary gigabyte is the equivalent of a gibibyte. A gibibyte (GiB) is equal to 1,073,741,824 bytes, or 1024³ bytes.

Data transfer
The subscriptions service measures data transfer for an OpenShift Streams for Apache Kafka service instance by binary gigabytes.
Data storage
The subscriptions service measures data storage for an OpenShift Streams for Apache Kafka service instance by binary gigabyte hours. A binary gigabyte hour is a unit of measurement for computational activity, including storage of data, generated by that service instance for a total of one hour.
Instance hours
The subscriptions service measures control plane usage for an OpenShift Streams for Apache Kafka service instance by instance hours.

17.4. Filtering

You can further refine the subscriptions service data by selecting values from the available filters in the interface. When you select a filter option, the graph view (and in some cases, the table view) generally refreshes to show data that relates to that option. In other words, most of the filters are inclusive, not exclusive, for the selected option.

Filtering by time

For Annual subscriptions, you can filter data by several different time intervals, including daily (the default) and quarterly. For On-Demand subscriptions, you can filter by the current month or by any other month in the previous 12 months.

Filtering by time affects only the usage and utilization graph view. The current systems table view always shows data from the most recent subscriptions service daily snapshot and is not affected by the time filter.

Note

During the rapid development of the subscriptions service, the addition of new features is improving the scope and accuracy of this tool. The subscriptions service does not provide in-application capability to recalculate older usage and capacity data as these new features are being added. Therefore, the selection of a longer time interval could display results that contain inconsistencies.

Filtering by subscription attributes

You can filter by subscription attributes, which is data that describes the characteristics and intended usage of a subscription. The accuracy of those filters depends on how accurately the subscription attribute data is set.

Subscription attributes might be configured from the operating system or its management tools, or from settings within the product itself. In these various tools, subscription attribute data is also known as system purpose, subscription settings, or similar names. In some cases, subscription attribute values might be derived from the subscription, such as when a subscription is sold either by sockets or cores.

You can use the subscriptions service filters to get a more focused view on usage that meets certain use cases within your subscription profile. For example, filtering your RHEL subscriptions by service level agreement (SLA) to show only those with an SLA of Premium could help you determine the current usage of premium subscriptions compared to your overall capacity for those premium subscriptions. In turn, this knowledge can inform decisions such as additional deployments, actions to mitigate subscription compliance issues, or future purchasing and renewals.

As another example, selecting a nonspecific value for a filter, such as the No SLA or Unspecified options, can help show subscriptions that have subscription attribute values that might be missing or that might be less common and not specifically filterable by the subscriptions service. For those subscriptions with missing subscription attributes, adding that data can improve the accuracy and usefulness of the subscriptions service reporting.

The subscriptions service provides the following filters and filter options for RHEL:

  • SLA (service level agreement): Premium, Standard, Self-Support, No SLA
  • Usage: Development/Test, Disaster Recovery, Production, Unspecified

The subscriptions service provides the following filter and filter options for Red Hat OpenShift:

  • SLA (service level agreement): Premium, Standard, Self-Support, No SLA
  • Cores: Cores (default), Sockets

Because the current offerings for Red Hat OpenShift Streams for Apache Kafka are of one subscription type only, filtering by subscription attributes is currently not available.

Filtering by name (current systems table)

You can filter the data in the current systems table by the contents of the Name column, which shows either the display name or the universally unique identifier (UUID) of each system. To filter by name, use the search field near the Name column.

You can search for a specific system name or a group of similarly named systems. Exact and partial strings are accepted, but common wildcard characters are treated as literal characters, not as wildcards.

Filtering the graph display with the graph legend

You can filter how data displays in the usage and utilization graph by clicking the legend options below the graph, toggling them off and on. For example, in the graph for RHEL, you can click Physical RHEL in the graph legend to hide all physical RHEL data and show only the virtualized RHEL and public cloud RHEL data. You can then click it again to show the physical RHEL data. You can also toggle multiple legend options off at the same time.

Note

Unlike the other filtering options, filtering by the graph legend is an exclusive filter, not an inclusive filter. In other words, the intent of this filter is to hide data for the selected option.

Chapter 18. What data does the subscriptions service store?

The subscriptions service gathers data to track usage by using several tools, including Red Hat Insights, Red Hat Subscription Management, Red Hat Satellite and the Satellite inventory upload plugin, OpenShift Cluster Manager, the Red Hat OpenShift monitoring stack, and others. The number of tools that are helping to gather data for your account depends on your subscription profile and the products in it, because different tools are used to gather data for different products.

For more information about the data that is gathered and stored by Red Hat Insights, Red Hat Subscription Management, or other products, see that product’s documentation.

The subscriptions service uses the data that is gathered in three ways:

  • To make sure that inventory is counted only once. Some data is used for deduplication, in both primary and secondary storage.
  • To link data submissions to the proper account and to log how and from what resource the data was received. Some quality control data is included.
  • To calculate subscription values. Some data indicates the presence of Red Hat software and powers the usage portion of the subscriptions service.

The subscriptions service itself stores only a subset of the data that is collected by Red Hat Insights. The primary data that is stored by the subscriptions service includes information related to installed Red Hat products, system size, and other similar system characteristics.

Chapter 19. How the subscriptions service gets and refreshes data

The data collection tools gather and periodically send data, including data about subscription usage, to the Hybrid Cloud Console tools that analyze and process this data. After the data is processed, the data that is needed for the subscriptions service, including the data related to subscription usage and capacity, is sent to the subscriptions service for display. For Annual subscriptions, this data is sent once per day. For On-Demand subscriptions, this data can be updated more frequently, usually a few times per day. Therefore, the data that displays in the subscriptions service is a tally of the results in the form of a snapshot, either once per day or at a few intervals throughout the day, and is not a real-time, continuous usage monitor.

The Red Hat Enterprise Linux data pipeline

The following image provides additional detail about the data pipeline that moves RHEL data from collection to display in the subscriptions service. The data collection tool, whether you are using Red Hat Insights, Satellite, or Red Hat Subscription Management with the Subscription Manager agent, sends data to the Hybrid Cloud Console processing tools. After data is processed, it is available to Hybrid Cloud Console tools such as the inventory service. The subscriptions service uses a subset of the data that is available to the inventory service to display data about subscription usage and capacity.

Figure 19.1. The RHEL data pipeline for the subscriptions service

The Red Hat OpenShift data pipeline

Red Hat OpenShift can have nodes that are based on Red Hat Enterprise Linux or Red Hat Enterprise Linux CoreOS. Only nodes that are based on RHCOS report data through the tools in the Red Hat OpenShift data pipeline, such as OpenShift Cluster Manager and the monitoring stack. RHEL nodes report through the tools in the RHEL data pipeline, such as Red Hat Insights, Satellite, or Red Hat Subscription Management.

Table 19.1. Node reporting and the data pipelines

Red Hat OpenShift version | Node operating system | Data pipeline used

Version 4 | RHCOS | Red Hat OpenShift pipeline; nodes aggregated into cluster reporting; compute nodes reported, control plane nodes ignored

Version 4 | RHEL | RHEL pipeline; nodes report individually; compute nodes reported

Version 3 | RHEL | RHEL pipeline; nodes report individually; control plane nodes cannot be distinguished from compute nodes

For Red Hat OpenShift version 4.1 and later data collection, the tools available in the monitoring stack, including Telemetry, Prometheus, Thanos, and others, monitor and periodically sum the CPU activity of all worker-based nodes, while ignoring the activity of infrastructure-based nodes. That data is sent to Red Hat OpenShift Cluster Manager at different intervals for new clusters, resized clusters, and clusters with deleted entities, to maintain currency.

Red Hat OpenShift Cluster Manager then updates the cluster size attribute for existing clusters and creates entries for any new clusters in the Hybrid Cloud Console inventory tool.

Lastly, the subscriptions service analyzes the inventory data and creates account-wide usage information for each Red Hat OpenShift product in the subscription profile. That information is displayed in the subscriptions service interface, along with capacity data as applicable for the subscription type. For Red Hat OpenShift Container Platform with an Annual subscription, the usage information accounts for both core and socket usage. For Red Hat OpenShift Container Platform or OpenShift Dedicated with an On-Demand subscription, the usage information shows core hour usage.

Figure 19.2. The Red Hat OpenShift data pipeline for the subscriptions service

The Red Hat Cloud Services data pipeline

The managed services in the Red Hat Cloud Services portfolio, such as Red Hat OpenShift Streams for Apache Kafka, rely on Red Hat infrastructure. Part of that infrastructure consists of the monitoring stack tools that, among other jobs, supply data about subscription usage to the subscriptions service. Therefore, the services in the Red Hat Cloud Services portfolio report usage through the tools that are used in the Red Hat OpenShift data pipeline.

Heartbeats for data collection tools

The frequency at which the data collection tools send data for processing, also known as the heartbeat, varies by tool. This variance can affect the freshness of the data that the subscriptions service displays.

The following table shows default heartbeats for the data collection tools. In some cases, these values are configurable within that data collection tool.

Table 19.2. Heartbeats for data collection tools

Tool | Configurable | Heartbeat interval

Insights | No | Daily, once every 24 hours

Red Hat Subscription Management | Yes | Multiple times per day, 4 hour default

Satellite | Yes | Monthly, configurable with the Satellite scheduler function. If used, the Satellite inventory upload plugin reports daily, with a manual send option. Additionally, to maintain accurate information about the mapping of virtual guests to hosts, a best practice is to run the virt-who utility daily.

Public cloud metering | No | Daily, once every 24 hours

Red Hat OpenShift | No | Several tools are involved in the data pipeline, including tools in the Red Hat OpenShift Container Platform monitoring stack and in the Hybrid Cloud Console, with differing intervals:
  Red Hat OpenShift Container Platform monitoring stack: new clusters identified every 15 minutes; cluster size updated every 2 hours; cluster cleanup for deleted entities updated every 5 hours
  Red Hat OpenShift Cluster Manager: new clusters identified to Red Hat Subscription Management every 15 minutes; existing clusters synchronized every 6 hours
  The subscriptions service: once every 24 hours for Annual subscriptions; multiple times per day for On-Demand subscriptions

Part VI. Troubleshooting and common questions

When you review the subscriptions service data for your account, you might have additional questions about how those calculations are made or whether the calculations are accurate. Answers for some of the most commonly asked questions might help you understand more about the data that appears in the subscriptions service. Other information can help you troubleshoot some common problems that are experienced by subscriptions service users. In some cases, completing the suggested steps in the troubleshooting information can help you improve the accuracy of the reported data in the subscriptions service.

Chapter 20. Troubleshooting: Correcting over-reporting of virtualized RHEL

So that the subscriptions service can accurately report Red Hat Enterprise Linux in virtualized environments such as virtual data center (VDC) subscriptions, host-guest mappings must be present in the data that the subscriptions service analyzes. For Red Hat Satellite, the Satellite inventory upload plugin and the virt-who tool gather these mappings for the subscriptions service. For Red Hat Subscription Management, the virt-who tool gathers these mappings. For each of these subscription management options, all necessary tools must be both installed and properly configured to accurately report virtualized RHEL usage.

If these tools are not used, virtualized usage data cannot be calculated correctly. In that type of scenario, guests are counted individually rather than ignored in favor of the hypervisor socket count. Each guest is counted as an individual virtual machine, leading to a rapid escalation of the virtualized socket count and an apparent deployment that is well over capacity. When multiplied over numerous VDC subscriptions, all running multiple guests, the subscriptions service could easily show RHEL overdeployment that significantly exceeds your subscription threshold.
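
As a simplified, hypothetical illustration of the effect, the following sketch compares the two counting outcomes for a single hypervisor. The numbers are invented for illustration only.

# Hypothetical 4-socket hypervisor (VDC subscription) running 20 RHEL guests.
hypervisor_sockets = 4
guests = 20

with_mappings = hypervisor_sockets + (hypervisor_sockets % 2)  # socket pair method: 4
without_mappings = guests * 1                                  # one socket per guest: 20

print(with_mappings, without_mappings)  # 4 20
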

The following example contains an isolated usage and utilization graph view from the subscriptions service interface. It shows a substantial drop in the reporting of virtualized usage after the correct configuration of the Satellite inventory upload plugin and virt-who. Throughout the time period displayed, the subscription threshold remains constant, at 904 sockets. Before correction, total RHEL usage is reported as approximately 2,250 sockets. This count far exceeds the subscription threshold. After correction, virtualized usage is considerably reduced, with total RHEL usage at 768 sockets. This count falls below the subscription threshold of 904 sockets.

Figure 20.1. Corrected virtualized RHEL with virt-who and Satellite data

Procedure

To correct over-reporting of virtualized RHEL usage in the subscriptions service, make sure that you have completed the following steps:

  1. Review your RHEL subscription profile in Red Hat Satellite or Red Hat Subscription Management and determine which subscriptions require virt-who.

    • In the Satellite Web UI, click Content > Subscriptions. If needed, use the Search field to narrow the list of results. Review the values in the Requires Virt-Who column. If any check box is selected, you must configure virt-who.
    • In the Overview page of the Red Hat Subscription Management Customer Portal interface, click View All Subscriptions. If needed, use filtering to narrow the list of results. Select a subscription name to view the details. If Virt-Who: Required appears in the SKU Details, you must configure virt-who.
  2. Confirm that the virt-who tool is deployed on your hypervisors so that host-guest mappings can be communicated. For more information, see the virtualization documentation that is appropriate for your subscription management tool.

  3. For Satellite, make sure that the Satellite inventory upload plugin is installed and configured to supply data to the subscriptions service.

Chapter 21. Troubleshooting: Correcting problems with filtering

The subscriptions service includes several filters that you can use to sort data by different characteristics. These characteristics include subscription attributes, also known as system purpose or subscription settings, depending on the product. Types of subscription attributes include service level agreement (SLA), usage, and others.

Subscription attribute values must be set on systems to enable filtering by those values on the product-level pages in the subscriptions service interface. There are different methods to set these values, such as directly in the product or in one of the subscription management tools. Subscription attribute values should be set by only one method to avoid the potential for mismatched values.

In the older entitlement-based subscription model, the system purpose values are used by the subscription management tools such as Red Hat Satellite or Red Hat Subscription Management to help match subscriptions with systems. If a system is correctly matched with a subscription, the system status value (System Status Details or System Purpose Status in the various tools) shows as Matched. However, if you are using simple content access with the subscriptions service, that usage of system purpose is obsolete, because subscriptions are not attached to systems. After you enable simple content access, the system status shows as Disabled.

Note

The Disabled state for the system status means that per-system subscription attachment is not being enforced. It does not mean that system purpose values themselves are unimportant. The subscriptions service filters related to system purpose values will not show reliable data if these values are not set for all systems.

Procedure

If the filters that relate to subscription attributes (system purpose values) are showing unexpected results, you might be able to improve the accuracy of that data by ensuring that the subscription attributes are set correctly:

  1. Review system information in your preferred subscription management tool to detect whether there are systems where the subscription attributes are missing.
  2. If there are missing values for subscription attributes, set those values. You might be able to use options to set these values in bulk, depending on the type and version of subscription management tool that you are using.

Additional resources

  • For more information about how to set system purpose values in bulk in Red Hat Satellite, see the section about editing system purpose for multiple hosts in the Managing Hosts guide.
  • For more information about how to use Ansible and the subscription-manager command to set system purpose values in bulk for Red Hat Subscription Management, see the redhat-subscription module information.

Chapter 22. Troubleshooting: Deleting copies of shared images from your AWS account

When you create a source for an AWS account to enable public cloud metering for the subscriptions service, the account is inspected to find RHEL-based images and the instances for those images. For some images, a metadata inspection can find known markers for rapid identification. For other images, a file system inspection is required to find this data. When a file system inspection is required, the public cloud metering inspection process copies the image into a Red Hat AWS account and attaches it to a running instance to perform the inspection tasks.

However, in some cases an image cannot be copied. For example, if the image in your AWS account is owned and shared by a third party, public cloud metering is aware that the image exists, but cannot copy it. In that case, the public cloud metering function uses the IAM role and policy granted during subscriptions source creation to make a reference copy of the original image. This reference copy image is stored in your account. The reference copy is used to make another copy of the image that is stored temporarily in the Red Hat AWS account for inspection purposes.

The reference copy is needed only for a short duration, to make the inspection copy of the image. However, the IAM profile that you created for public cloud metering does not contain the Amazon EC2 DeregisterImage action that would permit public cloud metering to delete the reference copy in your AWS account. Therefore, you must perform these actions manually.

Prerequisites

You should wait at least 24 hours after adding a source that contains a known shared and copied image before completing the deregister and delete actions on the reference copy of the image in your AWS account. This wait time ensures that the image is copied to the Red Hat AWS account for inspection.

Procedure

To deregister the AMI and delete the snapshot of the reference copy of the image:

  1. Sign in to the Amazon EC2 console and follow the steps to deregister a Linux AMI.

    Note

    For more information about how you delete an image, see the Amazon EC2 User Guide for Linux Instances and review the steps to deregister a Linux AMI.

  2. When you need to provide the AMI ID in the steps to deregister the AMI and to delete the snapshot, find the AMI ID that matches the following pattern, where original_AMI_name is the AMI name from the original third-party image:

    cloudigrade reference copy (original_AMI_name)
  3. Continue with the remaining steps to deregister a Linux AMI to complete this process.

Chapter 23. How is the subscription threshold calculated?

In the subscriptions service, the usage and utilization graph for most product pages contains a subscription threshold. This line shows the maximum capacity of similar subscriptions across all of your contracts.

Note

Some product pages do not show a subscription threshold on the graph.

  • For a product page that includes pay-as-you-go On-Demand subscriptions, that graph does not display a subscription threshold because of the characteristics of that subscription type.
  • For an account that includes any subscription with a unit of measurement (UoM) of "Unlimited" as part of the terms, the graph for any product page that includes this subscription does not display a subscription threshold. If filtering is used to exclude this subscription from the views, the graph will display a subscription threshold for the filtered data.

To measure the maximum capacity of an organization’s account and plot the subscription threshold line in the graph, the subscriptions service does the following steps:

  1. Accesses the Red Hat internal subscription services to gather subscription-related contract data for the account.
  2. Analyzes every subscription in the account, including each SKU (stock-keeping unit) that was purchased and the amount of each SKU that was purchased.
  3. Determines which products are provided in each SKU that is found.
  4. Calculates the maximum amount of technology that is provided by a subscription by multiplying the amount of technology that a SKU allows by the number of that SKU that was purchased in the subscription. The amount of technology that a SKU allows is the unit of measurement for the SKU multiplied by the number of these units (the limit) that the SKU provides.
  5. Adds the maximum amount of technology for every subscription to determine the subscription threshold that appears on the graph for every product or product portfolio.
  6. Analyzes the available subscription attributes data (also known as system purpose data or subscription settings) to enable filtering of that data with the filters in the subscriptions service.
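
A minimal sketch of steps 4 and 5 follows, using hypothetical SKUs and quantities. The actual contract data comes from the Red Hat internal subscription services, not from code like this.

# Hypothetical purchased subscriptions: unit limit per SKU and quantity purchased.
purchases = [
    {"sku": "EXAMPLE-RHEL-2SKT", "unit_limit": 2, "quantity": 100},  # 2 sockets per SKU
    {"sku": "EXAMPLE-RHEL-4SKT", "unit_limit": 4, "quantity": 50},   # 4 sockets per SKU
]

# Step 4: maximum technology per subscription = unit limit x quantity purchased.
# Step 5: the subscription threshold is the sum across all subscriptions.
subscription_threshold = sum(p["unit_limit"] * p["quantity"] for p in purchases)

print(subscription_threshold)  # 2*100 + 4*50 = 400 sockets
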

Chapter 24. How is core hour usage data calculated?

The introduction of the new pay-as-you-go On-Demand subscription type in 2021 resulted in new types of units of measurement in the subscriptions service, in addition to the units of measurement for sockets or cores. These new units of measurement are compound units that function as derived units, where the unit of measurement is calculated from other base units.

At this time, the newer derived units for the subscriptions service add a base unit of time, so these new units are measuring consumption over a period of time. Time base units can be combined with base units that are appropriate for specific products, resulting in derived units that meter a product according to the types of resources that it consumes.

In addition, for a subset of those time-based units, usage data is derived from frequent, time-based sampling of data instead of direct counting. In part, the sampling method might be used for a particular product or service because of the required unit of measurement and the capabilities of the Red Hat OpenShift monitoring stack tools to gather usage data for that unit of measurement.

When the subscriptions service tracks subscription usage with time-based metrics that also use sampling, the metrics used and the units of measurement applied to those metrics are based upon the terms for the subscriptions for these products. The following list shows examples of time-based metrics that also use sampling to gather usage data:

  • Red Hat OpenShift Container Platform On-Demand usage is measured with a single derived unit of measurement of core hours. A core hour is a unit of measurement for computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used.
  • Red Hat OpenShift Dedicated On-Demand is measured with two units of measurement, both derived units of measurement. It is measured in core hours to track the workload usage on the compute machines, and in instance hours to track instance availability as the control plane usage on the control plane machines (formerly the master machines in older versions of Red Hat OpenShift). An instance hour is the availability of a Red Hat service instance, during which it can accept and execute customer workloads. For Red Hat OpenShift Dedicated On-Demand, instance hours are measured by summing the availability of all active clusters, in hours.
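
As a simple, hypothetical illustration of these two derived units (the exact totals depend on the granularity of the meter that is used):

# Core hours: cores in use multiplied by the time they are in use.
core_hours = 4 * 1.5          # 4 cores busy for 90 minutes -> 6.0 core hours

# Instance hours (cluster hours): summed availability of all active clusters.
instance_hours = 3.0 + 1.5    # clusters available for 3 h and 1.5 h -> 4.5 instance hours

print(core_hours, instance_hours)  # 6.0 4.5
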

24.1. An example for Red Hat OpenShift On-Demand subscriptions

The following information for Red Hat OpenShift On-Demand subscriptions includes an explanation of the applicable units of measurement, a detailed scenario that shows the steps that the subscriptions service and the other Hybrid Cloud Console and monitoring stack tools use to calculate core hour usage, and additional information that can help you understand how core hour usage is reported in the subscriptions service. You can use this information to help you understand the basic principles of how the subscriptions service calculates usage for the time-based units of measurement that also use sampling.

24.1.1. Units of measurement for Red Hat OpenShift On-Demand subscriptions

The following table provides additional details about the derived units of measurement that are used for the Red Hat OpenShift On-Demand products. These details include the name and definition of the unit of measurement along with examples of usage that would equal one of that unit of measurement. In addition, a sample Prometheus query language (PromQL) query is provided for each unit. This example query is not the complete set of processes by which the subscriptions service calculates usage, but it is a query that you can run locally in a cluster to help you understand some of those processes.

Table 24.1. Units of measurement for Red Hat OpenShift Container Platform On-Demand and Red Hat OpenShift Dedicated On-Demand

Unit of measurement | Definition | Examples

core hour | Computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. | For Red Hat OpenShift Container Platform On-Demand and Red Hat OpenShift Dedicated On-Demand workload usage: a single core running for 1 hour, or many cores running in short time intervals to equal 1 hour.
  Core hour base PromQL query that you can run locally on your cluster:
  sum_over_time((max by (_id) (cluster:usage:workload:capacity_physical_cpu_cores:min:5m))[1h:1s])

instance hours, in cluster hours | The availability of a Red Hat service instance, during which it can accept and execute customer workloads. | In a cluster hour context, for Red Hat OpenShift Dedicated On-Demand control plane usage: a single cluster that spawns pods and runs applications for 1 hour, or two clusters that spawn pods and run applications for 30 minutes.
  Instance hour base PromQL query that you can run locally on your cluster:
  group(cluster:usage:workload:capacity_physical_cpu_cores:max:5m[1h:5m]) by (_id)

24.1.2. Example core hour usage calculation

The following example describes the process for calculating core hour usage for a Red Hat OpenShift On-Demand subscription. You can use this example to help you understand other derived units of measurement where time is one of the base units of the usage calculation and sampling is used as part of the measurement.

To obtain usage in core hours, the subscriptions service uses numerical integration. Numerical integration is also commonly known as an "area under the curve" calculation, where the area of a complex shape is calculated by using the area of a series of rectangles.

The tools in the Red Hat OpenShift monitoring stack contain the Prometheus query language (PromQL) function sum_over_time, a function that aggregates data for a time interval. This function is the foundation of the core hours calculation in the subscriptions service.

sum_over_time((max by (_id) (cluster:usage:workload:capacity_physical_cpu_cores:min:5m))[1h:1s])
Note

You can run this PromQL query locally in a cluster to show results that include the cluster size and a snapshot of usage.

Every 2 minutes, a cluster reports its size in cores to the monitoring stack tools, including Telemetry. One of the Hybrid Cloud Console tools, the Tally engine, reviews this information every hour in 5 minute intervals. Because the cluster reports to the monitoring stack tools every 2 minutes, each 5 minute interval might contain up to three values for cluster size. The Tally engine selects the smallest cluster size value to represent the full 5 minute interval.

The following example shows how a sample cluster size is collected every 2 minutes and how the smallest size is selected for the 5 minute interval.

Figure 24.1. Calculating the cluster size

Then, for each cluster, the Tally engine uses the selected value and creates a box of usage for each 5 minute interval. The area of the 5 minute box is 300 seconds times the height in cores. For every 5 minute box, this core seconds value is stored and eventually used to calculate the daily, account-wide aggregation of core hour usage.

The following example shows a graphical representation of how an area under the curve is calculated, with cluster size and time used to create usage boxes, and the area of each box used as building blocks to create daily core hour usage totals.

Figure 24.2. Calculating the core hours

Every day, each 5 minute usage value is added to create the total usage of a cluster on that day. Then the totals for each cluster are combined to create daily usage information for all clusters in the account. In addition, the core seconds are converted to core hours.
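
The following Python sketch mirrors this calculation under simplified assumptions. The 2-minute cluster size samples are hypothetical, and the sketch only illustrates the "area under the curve" approach; it is not the Tally engine implementation.

# Hypothetical 2-minute samples of one cluster's size in cores over one hour:
# 6 cores for the first 30 minutes, then 10 cores after a resize.
samples = [(minute, 6 if minute < 30 else 10) for minute in range(0, 60, 2)]

core_seconds = 0
for window_start in range(0, 60, 5):                     # review in 5 minute intervals
    in_window = [cores for minute, cores in samples
                 if window_start <= minute < window_start + 5]
    smallest = min(in_window)                            # smallest value represents the interval
    core_seconds += smallest * 300                       # area of the box: 300 seconds x cores

core_hours = core_seconds / 3600                         # convert core seconds to core hours
print(round(core_hours, 6))  # 6 cores for 30 min + 10 cores for 30 min = 8.0 core hours
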

During the regular 24-hour update of the subscriptions service with the previous day’s data, the core hour usage information for pay-as-you-go subscriptions is updated. In the subscriptions service, the daily core hour usage for the account is plotted on the usage and utilization graph, and additional core hours used information shows the accumulated total for the account. The current systems table also lists each cluster in the account and shows the cumulative number of core hours used in that cluster.

Note

The core hour usage data for the account and for individual clusters that is shown in the subscriptions service interface is rounded to two decimal places for display purposes. However, the data that is used for the subscriptions service calculations and that is provided to the Red Hat Marketplace billing service is at the millicore level, rounded to 6 decimal places.

Every month, the monthly core hour usage total for your account is supplied to Red Hat Marketplace for invoice preparation and billing. For subscription types that are offered with a four-to-one relationship of core hour to vCPU hour, the core hour total from the subscriptions service is divided by 4 for the Red Hat Marketplace billing activities. For subscription types that are offered with a one-to-one relationship of core hour to vCPU hour, no conversion in the total is made.

After the monthly total is sent to Red Hat Marketplace and the new month begins, the usage values for the subscriptions service display reset to 0 for the new current month. You can use filtering to view usage data for previous months for the span of one year.

24.1.3. Resolving questions about core hour usage

If you have questions about core hour usage, first use the following steps as a diagnostic tool:

  1. In the subscriptions service, review the cumulative total for the month for each cluster in the current systems table. Look for any cluster that shows unusual usage, based on your understanding of how that cluster is configured and deployed.

    Note

    The current systems table displays a snapshot of the most recent monthly cumulative total for each cluster. Currently this information updates a few times per day. This value resets to 0 at the beginning of each month.

  2. Then review the daily core hour totals and trends in the usage and utilization graph. Look for any day that shows unusual usage. It is likely that unusual usage on a cluster that you found in the previous step corresponds to this day.

From these initial troubleshooting steps, you might be able to find the cluster owner and discuss whether the unusual usage is due to an extremely high workload, problems with cluster configuration, or other issues.

If you continue to have questions after using these steps, you can contact your Red Hat account team to help you understand your core hour usage. For questions about billing, use the support instructions for Red Hat Marketplace.

Chapter 25. How do vCPUs, hyper-threading, and subscription structure affect the subscriptions service usage data?

The Red Hat OpenShift portfolio contains offerings that track usage with a unit of measurement of cores, but this measurement can be obscured by virtualization and multithreading technologies. The behavior of these technologies led to the development of the term vCPU to help describe the virtual consumption of physical CPUs, but this term can vary in its meaning. In addition, the structure of Red Hat OpenShift offerings can be complex, making usage data in the subscriptions service difficult to understand.

Red Hat has responded to various customer concerns about Red Hat OpenShift usage data through a series of improvements, both to the subscriptions service itself and to the underlying technologies and methodologies that inform Red Hat OpenShift usage tracking.

25.1. Improved calculations for x86-64 architectures with simultaneous multithreading

October 2021: This change assumes that simultaneous multithreading on x86-64 architectures is enabled, resulting in more accurate usage data within the subscriptions service.

Across different technology vendors, the term vCPU can have different definitions. If you work with a number of different vendors, the definition that you use might not match the definition that is used by Red Hat. As a result, you might not be familiar with how Red Hat and the subscriptions service measures usage when vCPUs and simultaneous multithreading (also referred to as hyper-threading) are in use within your environment.

Some vendors offer hypervisors that do not expose to guests whether the CPUs of the guests use simultaneous multithreading. For example, recent versions of the VMware hypervisor do not show the simultaneous multithreading status to the kernel of the VM, and always report threads per core as 1. The effect of this counting method is that the subscriptions service reporting of Red Hat OpenShift usage data related to vCPUs can appear to be artificially doubled.

To address customer concerns about vCPU counting, Red Hat has adjusted its assumptions related to simultaneous multithreading. Red Hat now assumes simultaneous multithreading of 2 threads per core for x86 architectures. For many hypervisors, that assumption results in an accurate counting of vCPUs per core, and customers who use those hypervisors will see no change in their Red Hat OpenShift usage data in the subscriptions service.
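
A minimal sketch of the adjusted counting assumption follows, using a hypothetical vCPU count. It illustrates the 2-threads-per-core assumption only and is not the exact subscriptions service logic.

ASSUMED_THREADS_PER_CORE = 2   # simultaneous multithreading assumed on x86-64

def effective_cores(vcpus: int) -> float:
    """Convert a reported vCPU count to cores under the 2-threads-per-core assumption."""
    return vcpus / ASSUMED_THREADS_PER_CORE

# Hypothetical guest that reports 16 vCPUs while its hypervisor reports
# threads per core as 1: previously counted as 16, now counted as 8 cores.
print(effective_cores(16))  # 8.0
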

However, other customers who use hypervisors that do not expose simultaneous multithreading status to the kernel will see an abrupt change in subscriptions service data in October 2021. Those customers will see their related Red Hat OpenShift usage data in the subscriptions service reduced by 50% on the date that this change in counting is implemented. Past data will not be affected.

Customers who encounter this situation will not be penalized. Red Hat requires only that customers purchase enough subscriptions to cover the usage as counted in the subscriptions service.

In the past, the discrepancies in the definitions for vCPUs have resulted in known problems with the interpretation of usage and capacity data for some subscriptions service users. This change in the assumptions for simultaneous multithreading is intended to improve the accuracy of vCPU usage data across a wider spectrum of customers, regardless of the hypervisor technology that is deployed.

If you have questions or concerns related to the usage and capacity data that is displayed in the subscriptions service, work with your Red Hat account team to help you understand your data and account status. For additional information about the resolution of this problem, you can also log in to your Red Hat account to view the following issue: Bugzilla issue 1934915.

25.2. Improved analysis of subscription capacity for certain subscriptions

January 2022: These changes improved capacity analysis for subscriptions that include extra entitlements or infrastructure subscriptions. These improvements resulted in a more accurate calculation of usage and capacity data for those subscriptions and a more accurate calculation of the subscription threshold within the subscriptions service for the Red Hat OpenShift portion of your Red Hat account.

  • Improved accuracy for subscriptions with numerous entitlements: Certain Red Hat OpenShift subscriptions that included a large capacity of cores also included extra entitlements. These entitlements helped to streamline installation by using tools that rely on attached entitlement workflows. However, these extra entitlements were calculated as extra capacity by the subscriptions service, resulting in confusion about how much Red Hat OpenShift could legally be deployed by customers. As of January 2022, counting methods have been revised to remove the extra entitlements from the capacity calculations.
  • Infrastructure subscriptions excluded from capacity calculations: For certain purchases of Red Hat OpenShift subscriptions, a particular type of Red Hat OpenShift infrastructure subscription would be added to that purchase automatically. This type of subscription is used to provide infrastructure support for large deployments. Both version 4.1 and later and version 3.11 subscriptions were affected. Normally for Red Hat OpenShift version 4.1 and later, the subscriptions service does not count infrastructure nodes when calculating your Red Hat OpenShift capacity. However, for accounts that received this infrastructure subscription, the improper calculations were occurring at the subscription level, and that data was passed to the subscriptions service. Red Hat OpenShift capacity numbers were artificially inflated, resulting in an incorrect subscription threshold in the subscriptions service. As of January 2022, an added infrastructure subscription is not considered when calculating your Red Hat OpenShift capacity.

Legal Notice

Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.