
Chapter 24. How is core hour usage data calculated?

The introduction of the new pay-as-you-go On-Demand subscription type in 2021 resulted in new units of measurement in the subscriptions service, in addition to the existing units of measurement for sockets or cores. These new units are compound units that function as derived units, where the unit of measurement is calculated from other base units.

At this time, the newer derived units for the subscriptions service add a base unit of time, so these new units are measuring consumption over a period of time. Time base units can be combined with base units that are appropriate for specific products, resulting in derived units that meter a product according to the types of resources that it consumes.

In addition, for a subset of those time-based units, usage data is derived from frequent, time-based sampling of data instead of direct counting. The sampling method might be used for a particular product or service in part because of the required unit of measurement and in part because of the capabilities of the Red Hat OpenShift monitoring stack tools that gather usage data for that unit of measurement.

When the subscriptions service tracks subscription usage with time-based metrics that also use sampling, the metrics used and the units of measurement applied to those metrics are based upon the terms for the subscriptions for these products. The following list shows examples of time-based metrics that also use sampling to gather usage data:

  • Red Hat OpenShift Container Platform On-Demand usage is measured with a single derived unit of measurement of core hours. A core hour is a unit of measurement for computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used.
  • Red Hat OpenShift Dedicated On-Demand is measured with two units of measurement, both derived units of measurement. It is measured in core hours to track the workload usage on the compute machines, and in instance hours to track instance availability as the control plane usage on the control plane machines (formerly the master machines in older versions of Red Hat OpenShift). An instance hour is the availability of a Red Hat service instance, during which it can accept and execute customer workloads. For Red Hat OpenShift Dedicated On-Demand, instance hours are measured by summing the availability of all active clusters, in hours.
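The instance hour summation described above can be sketched as follows. This is an illustrative example only, with made-up cluster names and availability values, not the subscriptions service implementation:

```python
# Hypothetical example: instance hours for Red Hat OpenShift Dedicated
# On-Demand are the summed availability of all active clusters, in hours.
# The cluster names and availability values below are illustrative only.
cluster_availability_hours = {
    "cluster-a": 24.0,   # available the full day
    "cluster-b": 12.5,   # started mid-day
    "cluster-c": 0.5,    # short-lived test cluster
}

instance_hours = sum(cluster_availability_hours.values())
print(instance_hours)  # 37.0
```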

24.1. An example for Red Hat OpenShift On-Demand subscriptions

The following information for Red Hat OpenShift On-Demand subscriptions includes an explanation of the applicable units of measurement, a detailed scenario that shows the steps that the subscriptions service and the other Hybrid Cloud Console and monitoring stack tools use to calculate core hour usage, and additional information that can help you understand how core hour usage is reported in the subscriptions service. You can use this information to help you understand the basic principles of how the subscriptions service calculates usage for the time-based units of measurement that also use sampling.

24.1.1. Units of measurement for Red Hat OpenShift On-Demand subscriptions

The following table provides additional details about the derived units of measurement that are used for the Red Hat OpenShift On-Demand products. These details include the name and definition of the unit of measurement along with examples of usage that would equal one of that unit of measurement. In addition, a sample Prometheus query language (PromQL) query is provided for each unit. This example query is not the complete set of processes by which the subscriptions service calculates usage, but it is a query that you can run locally in a cluster to help you understand some of those processes.

Table 24.1. Units of measurement for Red Hat OpenShift Container Platform On-Demand and Red Hat OpenShift Dedicated On-Demand

Unit of measurement | Definition | Examples

core hour

Computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used.

For Red Hat OpenShift Container Platform On-Demand and Red Hat OpenShift Dedicated On-Demand workload usage:

  • A single core running for 1 hour.
  • Many cores running in short time intervals to equal 1 hour.

Core hour base PromQL query that you can run locally on your cluster:

sum_over_time((max by (_id) (cluster:usage:workload:capacity_physical_cpu_cores:min:5m))[1h:1s])

instance hours, in cluster hours

The availability of a Red Hat service instance, during which it can accept and execute customer workloads.

In a cluster hour context, for Red Hat OpenShift Dedicated On-Demand control plane usage:

  • A single cluster that spawns pods and runs applications for 1 hour.
  • Two clusters that spawn pods and run applications for 30 minutes.

Instance hour base PromQL query that you can run locally on your cluster:

group(cluster:usage:workload:capacity_physical_cpu_cores:max:5m[1h:5m]) by (_id)

24.1.2. Example core hour usage calculation

The following example describes the process for calculating core hour usage for a Red Hat OpenShift On-Demand subscription. You can use this example to help you understand other derived units of measurement where time is one of the base units of the usage calculation and sampling is used as part of the measurement.

To obtain usage in core hours, the subscriptions service uses numerical integration. Numerical integration is also commonly known as an "area under the curve" calculation, where the area of a complex shape is calculated by using the area of a series of rectangles.
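The "area under the curve" approach can be sketched as a simple Riemann sum, where each rectangle's area is the sampled cluster size times the width of the sampling interval. The sample values and interval width below are hypothetical, not taken from a real cluster:

```python
# Illustrative sketch of numerical integration ("area under the curve"):
# approximate total usage by summing the area of fixed-width rectangles.
def area_under_curve(samples, interval_seconds):
    """Sum rectangle areas: each sample (height, in cores) times the
    interval width (in seconds) gives core seconds for that interval."""
    return sum(height * interval_seconds for height in samples)

# Cluster size sampled once per 5-minute (300-second) interval:
samples = [4, 4, 6, 6]          # cores observed in four intervals
core_seconds = area_under_curve(samples, 300)
print(core_seconds / 3600)      # convert core seconds to core hours
```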

The tools in the Red Hat OpenShift monitoring stack contain the Prometheus query language (PromQL) function sum_over_time, a function that aggregates data for a time interval. This function is the foundation of the core hours calculation in the subscriptions service.

sum_over_time((max by (_id) (cluster:usage:workload:capacity_physical_cpu_cores:min:5m))[1h:1s])

You can run this PromQL query locally in a cluster to show results that include the cluster size and a snapshot of usage.

Every 2 minutes, a cluster reports its size in cores to the monitoring stack tools, including Telemetry. One of the Hybrid Cloud Console tools, the Tally engine, reviews this information every hour in 5-minute intervals. Because the cluster reports to the monitoring stack tools every 2 minutes, each 5-minute interval might contain up to three values for cluster size. The Tally engine selects the smallest cluster size value to represent the full 5-minute interval.
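A minimal sketch of that selection step, using hypothetical 2-minute samples:

```python
# Illustrative selection of the smallest reported cluster size per
# 5-minute interval. Each inner list holds the (up to three) sizes, in
# cores, that a cluster reported in one interval; values are made up.
two_minute_samples = [
    [8, 8, 6],   # interval 1: cluster scaled down mid-interval
    [6, 6],      # interval 2: only two reports landed in this window
    [6, 10, 10], # interval 3: scaled up, but the minimum represents it
]

interval_sizes = [min(samples) for samples in two_minute_samples]
print(interval_sizes)  # [6, 6, 6]
```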

The following example shows how a sample cluster size is collected every 2 minutes and how the smallest size is selected for the 5-minute interval.

Figure 24.1. Calculating the cluster size


Then, for each cluster, the Tally engine uses the selected value and creates a box of usage for each 5-minute interval. The area of the 5-minute box is 300 seconds times the height in cores. For every 5-minute box, this core seconds value is stored and eventually used to calculate the daily, account-wide aggregation of core hour usage.
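The per-interval box calculation can be expressed as follows, using made-up cluster sizes:

```python
# Illustrative "box of usage" per 5-minute interval: the area (in core
# seconds) equals the interval width (300 seconds) times the selected
# cluster size in cores. The sizes below are hypothetical.
INTERVAL_SECONDS = 300

selected_sizes = [6, 6, 8]  # cores per 5-minute interval
core_seconds_per_box = [size * INTERVAL_SECONDS for size in selected_sizes]
print(core_seconds_per_box)  # [1800, 1800, 2400]
```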

The following example shows a graphical representation of how an area under the curve is calculated, with cluster size and time used to create usage boxes, and the area of each box used as building blocks to create daily core hour usage totals.

Figure 24.2. Calculating the core hours


Every day, each 5-minute usage value is added to create the total usage of a cluster on that day. Then the totals for each cluster are combined to create daily usage information for all clusters in the account. In addition, the core seconds are converted to core hours.
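Putting those steps together for one day, a sketch with hypothetical per-cluster data (3600 core seconds equal 1 core hour):

```python
# Illustrative daily aggregation: sum each cluster's 5-minute core-second
# boxes, combine clusters account-wide, then convert core seconds to core
# hours. The cluster names and values are hypothetical.
daily_core_seconds = {
    "cluster-a": [1800] * 288,  # 6 cores all day (288 5-minute intervals)
    "cluster-b": [900] * 144,   # 3 cores for half the day
}

account_core_seconds = sum(sum(boxes) for boxes in daily_core_seconds.values())
account_core_hours = account_core_seconds / 3600
print(account_core_hours)  # 180.0
```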

During the regular 24-hour update of the subscriptions service with the previous day’s data, the core hour usage information for pay-as-you-go subscriptions is updated. In the subscriptions service, the daily core hour usage for the account is plotted on the usage and utilization graph, and additional core hours used information shows the accumulated total for the account. The current systems table also lists each cluster in the account and shows the cumulative number of core hours used in that cluster.


The core hour usage data for the account and for individual clusters that is shown in the subscriptions service interface is rounded to two decimal places for display purposes. However, the data that is used for the subscriptions service calculations and that is provided to the Red Hat Marketplace billing service is at the millicore level, rounded to 6 decimal places.

Every month, the monthly core hour usage total for your account is supplied to Red Hat Marketplace for invoice preparation and billing. For subscription types that are offered with a four-to-one relationship of core hour to vCPU hour, the core hour total from the subscriptions service is divided by 4 for the Red Hat Marketplace billing activities. For subscription types that are offered with a one-to-one relationship of core hour to vCPU hour, no conversion in the total is made.
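The billing conversion described above amounts to a single division. The function below is a hypothetical sketch of that arithmetic, not a Red Hat Marketplace API:

```python
# Illustrative conversion for billing: for subscription types sold with a
# four-to-one core hour to vCPU hour relationship, the monthly core hour
# total is divided by 4; one-to-one types pass through unchanged.
def billable_vcpu_hours(core_hours, cores_per_vcpu_hour):
    """cores_per_vcpu_hour is 4 for four-to-one terms, 1 for one-to-one."""
    return core_hours / cores_per_vcpu_hour

print(billable_vcpu_hours(1000.0, 4))  # 250.0
print(billable_vcpu_hours(1000.0, 1))  # 1000.0
```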

After the monthly total is sent to Red Hat Marketplace and the new month begins, the usage values for the subscriptions service display reset to 0 for the new current month. You can use filtering to view usage data for previous months for the span of one year.

24.1.3. Resolving questions about core hour usage

If you have questions about core hour usage, first use the following steps as a diagnostic tool:

  1. In the subscriptions service, review the cumulative total for the month for each cluster in the current systems table. Look for any cluster that shows unusual usage, based on your understanding of how that cluster is configured and deployed.


    The current systems table displays a snapshot of the most recent monthly cumulative total for each cluster. Currently this information updates a few times per day. This value resets to 0 at the beginning of each month.

  2. Then review the daily core hour totals and trends in the usage and utilization graph. Look for any day that shows unusual usage. It is likely that unusual usage on a cluster that you found in the previous step corresponds to this day.

From these initial troubleshooting steps, you might be able to find the cluster owner and discuss whether the unusual usage is due to an extremely high workload, problems with cluster configuration, or other issues.

If you continue to have questions after using these steps, you can contact your Red Hat account team to help you understand your core hour usage. For questions about billing, use the support instructions for Red Hat Marketplace.