OpenShift Dedicated Policies

OpenShift Dedicated 4.5

Red Hat OpenShift Documentation Team

Abstract

Understanding the policies that apply to your use of Red Hat OpenShift Dedicated.

Part I. OpenShift Dedicated Service Definition

Chapter 1. Account management

1.1. Billing

Each OpenShift Dedicated cluster requires a minimum annual base cluster purchase. Two billing options are available for each cluster: Standard and Customer Cloud Subscription (CCS).

Standard OpenShift Dedicated clusters are deployed into their own cloud infrastructure accounts, each owned by Red Hat. Red Hat is responsible for these accounts, and cloud infrastructure costs are paid directly by Red Hat. The customer pays only the Red Hat subscription costs.

In the CCS model, the customer pays the cloud infrastructure provider directly for cloud costs, and the cloud infrastructure account is part of the customer’s organization, with specific access granted to Red Hat. The customer has restricted access to this account but can view billing and usage information. In this model, the customer pays Red Hat for the CCS subscription and pays the cloud provider for the cloud costs. It is the customer’s responsibility to pre-purchase or provide Reserved Instance (RI) compute instances to ensure lower cloud infrastructure costs.

Additional resources may be purchased for an OpenShift Dedicated cluster, including:

  • Additional nodes (must be the same type and size as existing application nodes)
  • Middleware (JBoss EAP, JBoss Fuse, and so on), with additional pricing based on the specific middleware component
  • Additional storage in increments of 500 GB (Standard only)
  • Additional 12 TiB network I/O (Standard only)
  • Load balancers for services, available in bundles of 4; these enable non-HTTP/SNI traffic or non-standard ports (Standard only)

1.2. Cluster self-service

Customers can create, scale, and delete their clusters from OpenShift Cluster Manager (OCM), provided that they have pre-purchased the necessary subscriptions.

1.3. Cloud providers

OpenShift Dedicated offers OpenShift Container Platform clusters as a managed service on the following cloud providers:

  • Amazon Web Services (AWS)
  • Google Cloud Platform (GCP)

1.4. Compute

Single availability zone clusters require a minimum of 4 worker nodes deployed to a single availability zone. These 4 worker nodes are included in the base subscription.

Multiple availability zone clusters require a minimum of 9 worker nodes, 3 deployed to each of the three availability zones. These 9 worker nodes are included in the base subscription, and additional nodes must be purchased in multiples of three in order to maintain proper node distribution.

Worker nodes must all be the same type and size within a single OpenShift Dedicated cluster.

Note: Worker node type and size cannot be changed once the cluster has been created.

Master and infrastructure nodes are also provided by Red Hat. There are at least 3 master nodes that handle etcd- and API-related workloads. There are at least 3 infrastructure nodes that handle metrics, routing, the web console, and other workloads. Master and infrastructure nodes are strictly for Red Hat workloads to operate the service, and customer workloads must not be deployed on these nodes.

Note: 1 vCPU core and 1 GiB of memory are reserved on each worker node to run processes required as part of the managed service. This includes, but is not limited to, audit log aggregation, metrics collection, DNS, image registry, and SDN.

1.5. AWS compute types

OpenShift Dedicated offers the following worker node types and sizes on AWS:

General purpose

  • m5.xlarge (4 vCPU, 16 GiB)
  • m5.2xlarge (8 vCPU, 32 GiB)
  • m5.4xlarge (16 vCPU, 64 GiB)

Memory-optimized

  • r5.xlarge (4 vCPU, 32 GiB)
  • r5.2xlarge (8 vCPU, 64 GiB)
  • r5.4xlarge (16 vCPU, 128 GiB)

Compute-optimized

  • c5.2xlarge (8 vCPU, 16 GiB)
  • c5.4xlarge (16 vCPU, 32 GiB)

1.6. Google Cloud compute types

OpenShift Dedicated offers the following worker node types and sizes on GCP:

General purpose

  • custom-4-16384 (4 vCPU, 16 GiB)
  • custom-8-32768 (8 vCPU, 32 GiB)
  • custom-16-65536 (16 vCPU, 64 GiB)

Memory-optimized

  • custom-4-32768-ext (4 vCPU, 32 GiB)
  • custom-8-65536-ext (8 vCPU, 64 GiB)
  • custom-16-131072-ext (16 vCPU, 128 GiB)

Compute-optimized

  • custom-8-16384 (8 vCPU, 16 GiB)
  • custom-16-32768 (16 vCPU, 32 GiB)

1.7. Regions and availability zones

The following AWS regions are supported by Red Hat OpenShift 4 and are supported for OpenShift Dedicated. Note: China and GovCloud (US) regions are not supported, regardless of their support on OpenShift 4.

  • ap-northeast-1 (Tokyo)
  • ap-northeast-2 (Seoul)
  • ap-south-1 (Mumbai)
  • ap-southeast-1 (Singapore)
  • ap-southeast-2 (Sydney)
  • ca-central-1 (Central)
  • eu-central-1 (Frankfurt)
  • eu-north-1 (Stockholm)
  • eu-west-1 (Ireland)
  • eu-west-2 (London)
  • eu-west-3 (Paris)
  • me-south-1 (Bahrain)
  • sa-east-1 (São Paulo)
  • us-east-1 (N. Virginia)
  • us-east-2 (Ohio)
  • us-west-1 (N. California)
  • us-west-2 (Oregon)

The following Google Cloud regions are currently supported:

  • asia-east1, Changhua County, Taiwan
  • asia-east2, Hong Kong
  • asia-northeast1, Tokyo, Japan
  • asia-south1, Mumbai, India
  • asia-southeast1, Jurong West, Singapore
  • europe-west1, St. Ghislain, Belgium
  • europe-west2, London, England, UK
  • europe-west4, Eemshaven, Netherlands
  • us-central1, Council Bluffs, Iowa, USA
  • us-east1, Moncks Corner, South Carolina, USA
  • us-east4, Ashburn, Northern Virginia, USA
  • us-west1, The Dalles, Oregon, USA
  • us-west2, Los Angeles, California, USA

Multi availability zone clusters can only be deployed in regions with at least 3 availability zones (see AWS and Google Cloud).

Each new OpenShift Dedicated cluster is installed within a dedicated Virtual Private Cloud (VPC) in a single region, with the option to deploy into a single availability zone (Single-AZ) or across multiple availability zones (Multi-AZ). This provides cluster-level network and resource isolation, and enables cloud-provider VPC settings, such as VPN connections and VPC peering. Persistent volumes are backed by cloud block storage and are specific to the availability zone in which they are provisioned. Persistent volume claims do not bind to a volume until the associated pod resource is assigned to a specific availability zone, in order to prevent unschedulable pods. Availability zone-specific resources are usable only by resources in the same availability zone.

Warning

The region and the choice of single or multi availability zone cannot be changed once a cluster has been deployed.

1.8. Service Level Agreement (SLA)

Any SLAs for the service itself are defined in Appendix 4 (Online Subscription Services) of the Red Hat Enterprise Agreement.

1.9. Support

OpenShift Dedicated includes Red Hat Premium Support, which can be accessed by using the Red Hat Customer Portal.

See the Scope of Coverage page for more details on what is covered by the included support for OpenShift Dedicated.

OpenShift Dedicated SLAs define support response times.

Chapter 2. Logging

OpenShift Dedicated includes an optional logging stack based on Elasticsearch, Fluentd, and Kibana (EFK). When deployed, a three-shard Elasticsearch cluster is created with one replica per shard. The logging stack in OpenShift is designed for short-term retention to aid application troubleshooting, not for long-term log archiving.

2.1. Cluster operations logging

Red Hat provides services to maintain the health and performance of each OpenShift Dedicated cluster and its components. This includes cluster operations and audit logs. Cluster operations logs are enabled through the optional Cluster Logging Operator and Elasticsearch Operator as described in the OpenShift Dedicated product documentation. When deployed, the cluster aggregates cluster logs from the OpenShift cluster, nodes, and pods and retains them for 1 hour to assist the SRE team in cluster troubleshooting. Customers are not given access to operations logs; these logs remain under the full control of Red Hat.

2.2. Cluster audit logging

Cluster audit logs are always enabled. Audit logs are streamed to a log aggregation system outside the cluster VPC for automated security analysis and secure retention for 90 days. Red Hat controls the log aggregation system; customers do not have access to it. Customers may receive a copy of their cluster’s audit logs upon request through a support ticket. Audit log requests must specify a date and time range not to exceed 21 days. When requesting audit logs, customers should be aware that audit logs can be many gigabytes per day in size.

2.3. Application logging

Application logs sent to STDOUT are collected by Fluentd and made available through the cluster logging stack, if it is installed. Retention is set to 7 days, but will not exceed 200 GiB of logs per shard. For longer-term retention, customers should follow the sidecar container design in their deployments and forward logs to the log aggregation or analytics service of their choice.
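The sidecar container design mentioned above can be sketched as a pod in which the application writes log files to a shared volume and a forwarding container reads them; the image names, paths, and forwarding setup below are illustrative assumptions, not part of the service:

```yaml
# Illustrative sketch of the sidecar log-forwarding pattern. Image names,
# volume paths, and the forwarding destination are assumptions for this
# example, not part of the OpenShift Dedicated service definition.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest    # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app                    # the app writes log files here
  - name: log-forwarder
    image: registry.example.com/forwarder:latest # hypothetical forwarder image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true                             # the sidecar only reads the logs
  volumes:
  - name: app-logs
    emptyDir: {}                                 # shared scratch volume for log files
```

The forwarding container would be configured to ship the files under /var/log/app to the customer’s chosen log aggregation or analytics service.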

It is Red Hat’s expectation and guidance that application logging workloads, such as Elasticsearch and the Kibana dashboard, are scheduled on a customer’s worker nodes. Application logging is considered a customer workload, given that logging rates differ per cluster and per customer.

Chapter 3. Monitoring

3.1. Cluster metrics

OpenShift Dedicated clusters come with an integrated Prometheus/Grafana stack for cluster monitoring including CPU, memory, and network-based metrics. This is accessible through the web console and can also be used to view cluster-level status and capacity/usage through a Grafana dashboard. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics provided by an OpenShift Dedicated user.
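For example, CPU-based horizontal pod autoscaling can be declared with a standard HorizontalPodAutoscaler resource; the target deployment name, replica bounds, and threshold below are illustrative assumptions:

```yaml
# Illustrative HorizontalPodAutoscaler; the deployment name, replica
# bounds, and CPU threshold are assumptions for this example.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75   # add pods when average CPU exceeds 75%
```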

3.2. Cluster status notification

Red Hat communicates the health and status of OpenShift Dedicated clusters through a combination of a cluster dashboard available in the OpenShift Cluster Manager, and email notifications sent to the email address of the contact that originally deployed the cluster.

Chapter 4. Networking

4.1. Custom domains for applications

To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record that maps the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the Route Details page after a route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster’s router.
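For example, assuming a hypothetical custom domain example.com and a hypothetical router hostname elb1234.elb.us-east-1.amazonaws.com (both are illustrative, not values from this document), the DNS records would look like:

```
; Illustrative DNS zone entries; both hostnames are assumptions.
www.example.com.   CNAME   elb1234.elb.us-east-1.amazonaws.com.
; Wildcard alternative: route every subdomain to the cluster's router.
*.example.com.     CNAME   elb1234.elb.us-east-1.amazonaws.com.
```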

4.2. Domain validated certificates

OpenShift Dedicated includes TLS security certificates needed for both internal and external services on the cluster. For external routes, two separate TLS wildcard certificates are provided and installed on each cluster: one for the web console and route default hostnames, and one for the API endpoint. Let’s Encrypt is the certificate authority used for these certificates. Routes within the cluster, such as the internal API endpoint, use TLS certificates signed by the cluster’s built-in certificate authority and require the CA bundle, available in every pod, to trust the TLS certificate.

4.3. Custom certificate authorities for builds

OpenShift Dedicated supports the use of custom certificate authorities to be trusted by builds when pulling images from an image registry.

4.4. Load Balancers

OpenShift Dedicated uses up to five different load balancers:

  • Internal master load balancer that is internal to the cluster and used to balance traffic for internal cluster communications.
  • External master load balancer that is used for accessing the OpenShift and Kubernetes APIs. This load balancer can be disabled in OCM. If this load balancer is disabled, Red Hat reconfigures the API DNS to point to the internal master load balancer.
  • External master load balancer for Red Hat that is reserved for cluster management by Red Hat. Access is strictly controlled, and communication is only possible from whitelisted bastion hosts.
  • Default external router/ingress load balancer that is the default application load balancer, denoted by apps in the URL. The default load balancer can be configured in OCM to be either publicly accessible over the Internet, or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.
  • Optional: secondary router/ingress load balancer that is a secondary application load balancer, denoted by apps2 in the URL. The secondary load balancer can be configured in OCM to be either publicly accessible over the Internet, or only privately accessible over a pre-existing private connection. If a 'Label match' is configured for this router load balancer, then only application routes matching this label are exposed on it; otherwise, all application routes are also exposed on it.
  • Optional: load balancers for services may also be purchased to enable non-HTTP/SNI traffic and non-standard ports for services. These load balancers can be mapped to a service running on OpenShift Dedicated to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports. These can be purchased in groups of 4 for Standard clusters or provisioned without charge in CCS clusters. However, each AWS account has a quota that limits the number of Classic Load Balancers that can be used within each cluster.

4.5. Cluster ingress

Project administrators can add route annotations for many different purposes, including ingress control through IP whitelisting.
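For example, ingress to an individual route can be restricted to specific source addresses with the haproxy.router.openshift.io/ip_whitelist annotation; the route name, service name, and addresses below are illustrative:

```yaml
# Illustrative route with IP whitelisting; the route name, service name,
# and source addresses are assumptions for this example.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                     # hypothetical route
  annotations:
    # Only these source addresses may reach the route; others are dropped.
    haproxy.router.openshift.io/ip_whitelist: "192.168.1.0/24 10.0.0.1"
spec:
  to:
    kind: Service
    name: my-app                   # hypothetical backing service
```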

Ingress policies can also be changed by using NetworkPolicy objects, which leverage the ovs-networkpolicy plugin. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
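As a sketch of pod-level ingress control, the following NetworkPolicy allows traffic to pods labeled app=web only from pods labeled app=frontend in the same namespace; the policy name and labels are illustrative:

```yaml
# Illustrative NetworkPolicy: in the namespace where it is created, allow
# ingress to pods labeled app=web only from pods labeled app=frontend;
# all other ingress to those pods is denied. Names and labels are
# assumptions for this example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
spec:
  podSelector:
    matchLabels:
      app: web                     # pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # only these pods may connect
```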

All cluster ingress traffic will go through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.

4.6. Cluster egress

Pod egress traffic control through EgressNetworkPolicy objects can be used to prevent or limit outbound traffic in OpenShift Dedicated.
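An EgressNetworkPolicy evaluates its rules in order; the following sketch allows outbound traffic from a project to one external CIDR and denies all other external traffic. The CIDR values are illustrative:

```yaml
# Illustrative EgressNetworkPolicy: allow egress from the project to one
# external CIDR and deny everything else. Rules are evaluated in order;
# the CIDR values are assumptions for this example.
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 203.0.113.0/24   # hypothetical permitted destination
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0        # deny all other external traffic
```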

Public outbound traffic from the master and infrastructure nodes is required and necessary to maintain cluster image security and cluster monitoring. This requires that the 0.0.0.0/0 route belong only to the internet gateway; it is not possible to route this range over private connections.

OpenShift 4 clusters use NAT Gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each availability zone a cluster is deployed into receives a distinct NAT Gateway, therefore up to 3 unique static IP addresses can exist for cluster egress traffic. Any traffic that remains inside the cluster, or does not go out to the public internet, will not pass through the NAT Gateway and will have a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic; therefore, a customer should not rely on whitelisting individual IP addresses when accessing private resources.

Customers can determine their public static IP addresses by running a pod on the cluster and then querying an external service. For example:

oc run ip-lookup --image=busybox -i -t --restart=Never --rm=true -- /bin/sh -c "/bin/nslookup myip.opendns.com resolver1.opendns.com"

4.7. Cloud network configuration

OpenShift Dedicated allows for the configuration of a private network connection through several cloud provider managed technologies:

  • VPN connections
  • AWS VPC peering
  • AWS Transit Gateway
  • AWS Direct Connect
  • Google Cloud VPC Network peering
  • Google Cloud Classic VPN
  • Google Cloud HA VPN
Important

Red Hat SREs do not monitor private network connections. Monitoring these connections is the responsibility of the customer.

4.8. DNS forwarding

For OpenShift Dedicated clusters that have a private cloud network configuration, a customer may specify internal DNS servers available on that private connection that should be queried for explicitly provided domains.
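Under the hood, OpenShift expresses DNS forwarding through the cluster DNS operator; a configuration of the following shape routes queries for the listed domain to internal servers on the private connection. The zone and upstream addresses are illustrative, and on OpenShift Dedicated this is configured through Red Hat rather than edited directly:

```yaml
# Illustrative shape of an OpenShift DNS forwarding configuration; the
# zone and upstream addresses are assumptions. On OpenShift Dedicated
# this is configured through Red Hat, not edited directly by customers.
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: corp-dns                  # hypothetical name for this forwarder
    zones:
    - example.corp                  # queries for this domain...
    forwardPlugin:
      upstreams:
      - 10.0.0.10:53                # ...are sent to these internal DNS servers
      - 10.0.0.11:53
```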

Chapter 5. Storage

5.1. Encrypted-at-rest OS/Node storage

Master nodes use encrypted-at-rest EBS storage.

5.2. Encrypted-at-rest PV

EBS volumes used for persistent volumes are encrypted-at-rest by default.

5.3. Block storage (RWO)

Persistent volumes are backed by block storage (AWS EBS and Google Cloud persistent disk), which is Read-Write-Once. On an OpenShift Dedicated base cluster, 100 GB of block storage is provided for persistent volumes, which are dynamically provisioned and recycled based on application requests. Additional persistent storage can be purchased in 500 GB increments.

Persistent volumes can only be attached to a single node at a time and are specific to the availability zone in which they were provisioned, but they can be attached to any node in the availability zone.

Each cloud provider has its own limits for how many PVs can be attached to a single node. See AWS instance type limits or Google Cloud Platform custom machine types for details.

5.4. Shared storage (RWX)

Shared storage is not available on OpenShift Dedicated at this time.

Chapter 6. Platform

6.1. Cluster backup policy

Important

It is critical that customers have a backup plan for their applications and application data.

Application and application data backups are not a part of the OpenShift Dedicated service. All Kubernetes objects and PVs in each OpenShift Dedicated cluster are backed up to facilitate a prompt recovery in the unlikely event that a cluster becomes irreparably inoperable.

The backups are stored in a secure object storage (Multi Availability Zone) bucket in the same account as the cluster. Node root volumes are not backed up as Red Hat Enterprise Linux CoreOS is fully managed by the OpenShift Container Platform cluster and no stateful data should be stored on a node’s root volume.

The following table shows the frequency of backups:

Component | Snapshot frequency | Retention | Notes
Full object store backup, all cluster PVs | Daily at 0100 UTC | 7 days | This is a full backup of all Kubernetes objects, as well as all mounted PVs in the cluster.
Full object store backup, all cluster PVs | Weekly on Mondays at 0200 UTC | 30 days | This is a full backup of all Kubernetes objects, as well as all mounted PVs in the cluster.
Full object store backup | Hourly at 17 minutes past the hour | 24 hours | This is a full backup of all Kubernetes objects. No PVs are backed up on this schedule.

6.2. Autoscaling

Node autoscaling is not available on OpenShift Dedicated at this time.

6.3. Daemonsets

Customers may create and run DaemonSets on OpenShift Dedicated. To restrict DaemonSets to running only on worker nodes, use the following nodeSelector:

...
spec:
  nodeSelector:
    role: worker
...
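A complete DaemonSet using this selector might look like the following sketch; the name and image are illustrative assumptions:

```yaml
# Illustrative DaemonSet restricted to worker nodes via the nodeSelector
# shown above; the name and image are assumptions for this example.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        role: worker               # schedule only on worker nodes
      containers:
      - name: agent
        image: registry.example.com/node-agent:latest   # hypothetical image
```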

6.4. Multiple availability zone

In a multiple availability zone cluster, master nodes are distributed across availability zones and at least three worker nodes are required in each availability zone.

6.5. Node labels

Custom node labels are created by Red Hat during node creation and cannot be changed on OpenShift Dedicated clusters at this time.

6.6. OpenShift version

OpenShift Dedicated is run as a service and is kept up-to-date with the latest OpenShift Container Platform version.

6.7. Upgrades

Patch level (also known as z-stream; x.y.Z) updates are applied automatically the week following their release as long as OpenShift Dedicated-specific end-to-end tests pass.

Minor version updates (x.Y.z) may include Kubernetes version upgrades and/or API changes. Therefore, customers are notified by email two weeks in advance before these upgrades are automatically applied.

6.8. Windows containers

Windows containers are not available on OpenShift Dedicated at this time.

6.9. Container engine

OpenShift Dedicated runs on OpenShift 4 and uses CRI-O as the only available container engine.

6.10. Operating system

OpenShift Dedicated runs on OpenShift 4 and uses Red Hat Enterprise Linux CoreOS as the operating system for all master and worker nodes.

6.11. Kubernetes operator support

OpenShift Dedicated supports non-privileged Operators created by Red Hat and Certified ISVs.

Chapter 7. Security

7.1. Authentication provider

Authentication for the cluster is configured as part of the OpenShift Cluster Manager (OCM) cluster creation process. OpenShift is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. Provisioning multiple identity providers at the same time is supported. The following identity providers are supported:

  • GitHub or GitHub Enterprise
  • GitLab
  • Google
  • LDAP
  • OpenID Connect

7.2. Privileged containers

Privileged containers are not supported on OpenShift Dedicated. To enable Red Hat to operate OpenShift Dedicated as a managed service with an SLA, some restrictions are enforced to limit the impact of rogue or accidental changes that could affect the service.

7.3. Customer administrator user

In addition to normal users, OpenShift Dedicated provides access to an OpenShift Dedicated-specific group called dedicated-admin. Any users on the cluster that are members of the dedicated-admin group:

  • Have administrator access to all customer-created projects on the cluster
  • Can manage resource quotas and limits on the cluster
  • Can add and manage NetworkPolicy objects
  • Are able to view information about specific nodes and PVs in the cluster, including scheduler information
  • Can access the reserved dedicated-admin project on the cluster, which allows for the creation of Service Accounts with elevated privileges and the ability to update default limits and quotas for projects on the cluster

7.4. Cluster administration role

As an administrator of OpenShift Dedicated with Customer Cloud Subscriptions (CCS), you can request additional permissions and access to the cluster-admin role within your organization’s cluster. While logged in to an account with the cluster-admin role, users have increased permissions to run privileged security contexts.

To request access to cluster-admin on your cluster, please open a Red Hat support request.

7.5. Project self-service

All users, by default, can create, update, and delete their projects. This can be restricted if a member of the dedicated-admin group removes the self-provisioner role from authenticated users:

oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

Restrictions can be reverted by applying:

oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth

7.6. Regulatory compliance

Refer to OpenShift Dedicated Process and Security Overview for the latest compliance information.

7.7. Network security

With OpenShift Dedicated on AWS, AWS provides standard DDoS protection on all load balancers through AWS Shield. This provides 95% protection against the most commonly used layer 3 and layer 4 attacks on all the public-facing load balancers used for OpenShift Dedicated. In addition, HTTP requests to the HAProxy router must receive a response within 10 seconds or the connection is closed, to provide further protection.

Part II. Responsibility Assignment Matrix

This documentation outlines Red Hat, cloud provider, and customer responsibilities for the OpenShift Dedicated managed service.

Chapter 8. Overview of responsibilities for OpenShift Dedicated

While Red Hat manages the OpenShift Dedicated service, the customer shares responsibility with respect to certain aspects. The OpenShift Dedicated services are accessed remotely, hosted on public cloud resources, created in either Red Hat or customer-owned cloud service provider accounts, and have underlying platform and data security that is owned by Red Hat.

Important

If the cluster-admin role is enabled on a cluster, please see the responsibilities and exclusion notes in the Red Hat Enterprise Agreement Appendix 4 (Online Subscription Services).

Resource | Incident and operations management | Change management | Identity and access management | Security and regulation compliance | Disaster recovery
Customer data | Customer | Customer | Customer | Customer | Customer
Customer applications | Customer | Customer | Customer | Customer | Customer
Developer services | Customer | Customer | Customer | Customer | Customer
Platform monitoring | Red Hat | Red Hat | Red Hat | Red Hat | Red Hat
Logging | Red Hat | Shared | Shared | Shared | Red Hat
Application networking | Shared | Shared | Shared | Red Hat | Red Hat
Cluster networking | Red Hat | Shared | Shared | Red Hat | Red Hat
Virtual networking | Shared | Shared | Shared | Shared | Shared
Master and infrastructure nodes | Red Hat | Red Hat | Red Hat | Red Hat | Red Hat
Worker nodes | Red Hat | Red Hat | Red Hat | Red Hat | Red Hat
Cluster version | Red Hat | Shared | Red Hat | Red Hat | Red Hat
Capacity management | Red Hat | Shared | Red Hat | Red Hat | Red Hat
Virtual storage | Red Hat and cloud provider | Red Hat and cloud provider | Red Hat and cloud provider | Red Hat and cloud provider | Red Hat and cloud provider
Physical infrastructure and security | Cloud provider | Cloud provider | Cloud provider | Cloud provider | Cloud provider

Chapter 9. Shared responsibility matrix

The customer and Red Hat share responsibility for the monitoring and maintenance of an OpenShift Dedicated cluster. This documentation illustrates the delineation of responsibilities by area and task.

9.1. Incident and operations management

The customer is responsible for incident and operations management of customer application data and any custom networking the customer may have configured for the cluster network or virtual network.

Application networking

Red Hat responsibilities:

  • Monitor cloud load balancers and native OpenShift router service, and respond to alerts.

Customer responsibilities:

  • Monitor the health of service load balancer endpoints.
  • Monitor the health of application routes and the endpoints behind them.
  • Report outages to Red Hat.

Virtual networking

Red Hat responsibilities:

  • Monitor cloud load balancers, subnets, and public cloud components necessary for default platform networking, and respond to alerts.

Customer responsibilities:

  • Monitor network traffic that is optionally configured through VPC to VPC connection, VPN connection, or Direct connection for potential issues or security threats.

9.2. Change management

Red Hat is responsible for enabling changes to the cluster infrastructure and services that the customer will control, as well as maintaining versions for the master nodes, infrastructure nodes and services, and worker nodes. The customer is responsible for initiating infrastructure change requests and installing and maintaining optional services and networking configurations on the cluster, as well as all changes to customer data and customer applications.

Logging

Red Hat responsibilities:

  • Centrally aggregate and monitor platform audit logs.
  • Provide and maintain a logging operator to enable the customer to deploy a logging stack for default application logging.
  • Provide audit logs upon customer request.

Customer responsibilities:

  • Install the optional default application logging operator on the cluster.
  • Install, configure, and maintain any optional app logging solutions, such as logging sidecar containers or third-party logging applications.
  • Tune the size and frequency of application logs produced by customer applications if they affect the stability of the logging stack or the cluster.
  • Request platform audit logs through a support case for researching specific incidents.

Application networking

Red Hat responsibilities:

  • Set up public cloud load balancers. Provide the ability to set up private load balancers and up to one additional load balancer when required.
  • Set up native OpenShift router service. Provide the ability to set the router as private and add up to one additional router shard.
  • Install, configure, and maintain OpenShift SDN components for default internal pod traffic.
  • Provide the ability for the customer to manage NetworkPolicy and EgressNetworkPolicy (firewall) objects.

Customer responsibilities:

  • Configure non-default pod network permissions for project and pod networks, pod ingress, and pod egress using NetworkPolicy objects.
  • Use OpenShift Cluster Manager to request a private load balancer for default application routes.
  • Use OpenShift Cluster Manager to configure up to one additional public or private router shard and corresponding load balancer.
  • Request and configure any additional service load balancers for specific services.
  • Configure any necessary DNS forwarding rules.

Cluster networking

Red Hat responsibilities:

  • Set up cluster management components, such as public or private service endpoints and necessary integration with virtual networking components.
  • Set up internal networking components required for internal cluster communication between worker, infrastructure, and master nodes.

Customer responsibilities:

  • Provide optional non-default IP address ranges for machine CIDR, service CIDR, and pod CIDR if needed through OpenShift Cluster Manager when the cluster is provisioned.
  • Request that the API service endpoint be made public or private on cluster creation or after cluster creation through OpenShift Cluster Manager.

Virtual networking

Red Hat responsibilities:

  • Set up and configure virtual networking components required to provision the cluster, including virtual private cloud, subnets, load balancers, internet gateways, and NAT gateways.
  • Provide the ability for the customer to manage VPN connectivity with on-premises resources, VPC to VPC connectivity, and Direct connectivity as required through OpenShift Cluster Manager.
  • Enable customers to create and deploy public cloud load balancers for use with service load balancers.

Customer responsibilities:

  • Set up and maintain optional public cloud networking components, such as VPC to VPC connection, VPN connection, or Direct connection.
  • Request and configure any additional service load balancers for specific services.

Cluster version

Red Hat responsibilities:

  • Communicate the schedule and status of upgrades for minor and maintenance versions.
  • Publish changelogs and release notes for minor and maintenance upgrades.

Customer responsibilities:

  • Work with Red Hat to establish maintenance start times for upgrades.
  • Test customer applications on minor and maintenance versions to ensure compatibility.

Capacity management

Red Hat responsibilities:

  • Monitor utilization of the control plane (master nodes and infrastructure nodes).
  • Scale and/or resize control plane nodes to maintain quality of service.
  • Monitor utilization of customer resources, including network, storage, and compute capacity. Where autoscaling features are not enabled, alert the customer to any changes required to cluster resources (for example, new compute nodes to scale, additional storage, and so on).

Customer responsibilities:

  • Use the provided OpenShift Cluster Manager controls to add or remove additional worker nodes as required.
  • Respond to Red Hat notifications regarding cluster resource requirements.

9.3. Identity and access management

The Identity and Access Management matrix includes responsibilities for managing authorized access to clusters, applications, and infrastructure resources. This includes tasks such as providing access control mechanisms, authentication, authorization, and managing access to resources.

ResourceRed Hat responsibilitiesCustomer responsibilities

Logging

  • Adhere to an industry standards-based tiered internal access process for platform audit logs.
  • Provide native OpenShift RBAC capabilities.
  • Configure OpenShift RBAC to control access to projects and by extension a project’s application logs.
  • For third-party or custom application logging solutions, the customer is responsible for access management.
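
As an illustration of the customer-side RBAC task above, a RoleBinding such as the following grants a user read access to a project and, by extension, that project's application logs. All names here are hypothetical; this is a sketch, not a prescribed configuration.

```yaml
# Hypothetical example: grant the user "dev-lead" read-only access
# to the "payments" project, including its pod logs, by binding the
# built-in "view" cluster role within that namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-payments
  namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: dev-lead
```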

Application networking

Provide native OpenShift RBAC and dedicated-admin capabilities.

  • Configure OpenShift dedicated-admins and RBAC to control access to route configuration as required.
  • Manage Org Admins for Red Hat organization to grant access to OpenShift Cluster Manager. OCM is used to configure router options and provide service load balancer quota.

Cluster networking

  • Provide customer access controls through OpenShift Cluster Manager.
  • Provide native OpenShift RBAC and dedicated-admin capabilities.
  • Manage Red Hat organization membership of Red Hat accounts.
  • Manage Org Admins for Red Hat organization to grant access to OpenShift Cluster Manager.
  • Configure OpenShift dedicated-admins and RBAC to control access to route configuration as required.

Virtual networking

Provide customer access controls through OpenShift Cluster Manager.

Manage optional user access to public cloud components through OpenShift Cluster Manager.

9.4. Security and regulation compliance

The following are the responsibilities and controls related to compliance:

Resource | Red Hat responsibilities | Customer responsibilities

Logging

Send cluster audit logs to a Red Hat SIEM to analyze for security events. Retain audit logs for a defined period of time to support forensic analysis.

Analyze application logs for security events. Send application logs to an external endpoint through logging sidecar containers or third-party logging applications if longer retention is required than is offered by the default logging stack.

Virtual networking

  • Monitor virtual networking components for potential issues and security threats.
  • Leverage additional public cloud provider tools for additional monitoring and protection.
  • Monitor optionally-configured virtual networking components for potential issues and security threats.
  • Configure any necessary firewall rules or data center protections as required.

9.5. Disaster recovery

Disaster recovery includes data and configuration backup, replicating data and configuration to the disaster recovery environment, and failover on disaster events.

Resource | Red Hat responsibilities | Customer responsibilities

Virtual networking

Restore or recreate affected virtual network components that are necessary for the platform to function.

  • Configure virtual networking connections with more than one tunnel where possible for protection against outages as recommended by the public cloud provider.
  • Maintain failover DNS and load balancing if using a global load balancer with multiple clusters.

Chapter 10. Customer responsibilities for data and applications

The customer is responsible for the applications, workloads, and data that they deploy to OpenShift Dedicated. However, Red Hat provides various tools to help the customer manage data and applications on the platform.

Resource | Red Hat responsibilities | Customer responsibilities

Customer data

  • Maintain platform-level standards for data encryption.
  • Provide OpenShift components to help manage application data, such as secrets.
  • Enable integration with third-party data services (such as AWS RDS or Google Cloud SQL) to store and manage data outside of the cluster and/or cloud provider.

Maintain responsibility for all customer data stored on the platform and how customer applications consume and expose this data.

Customer applications

  • Provision clusters with OpenShift components installed so that customers can access the OpenShift and Kubernetes APIs to deploy and manage containerized applications.
  • Create clusters with image pull secrets so that customer deployments can pull images from the Red Hat Container Catalog registry.
  • Provide access to OpenShift APIs that a customer can use to set up Operators to add community, third-party, and Red Hat services to the cluster.
  • Provide storage classes and plug-ins to support persistent volumes for use with customer applications.
  • Maintain responsibility for customer and third-party applications, data, and their complete lifecycle.
  • If a customer adds Red Hat, community, third-party, their own, or other services to the cluster by using Operators or external images, the customer is responsible for these services and for working with the appropriate provider (including Red Hat) to troubleshoot any issues.
  • Use the provided tools and features to configure and deploy; keep up-to-date; set up resource requests and limits; size the cluster to have enough resources to run apps; set up permissions; integrate with other services; manage any image streams or templates that the customer deploys; externally serve; save, back up, and restore data; and otherwise manage their highly available and resilient workloads.
  • Maintain responsibility for monitoring the applications run on OpenShift Dedicated; including installing and operating software to gather metrics and create alerts.
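
For example, the responsibility above for setting resource requests and limits and running multiple replicas is expressed in the workload manifest itself. The following is a minimal sketch; the application name, image, and values are illustrative, not recommendations.

```yaml
# Hypothetical Deployment showing customer-managed resource requests,
# limits, and replica count for availability.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical application name
spec:
  replicas: 3                  # multiple replicas for resiliency
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: registry.example.com/example-app:latest  # placeholder image
        resources:
          requests:            # what the scheduler reserves
            cpu: 250m
            memory: 256Mi
          limits:              # enforced ceilings
            cpu: 500m
            memory: 512Mi
```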

Developer services (CodeReady)

Make CodeReady Workspaces available as an add-on through OpenShift Cluster Manager (OCM).

Install, secure, and operate CodeReady Workspaces and the Developer CLI.

Part III. Understanding process and security for OpenShift Dedicated

Chapter 11. Incident and operations management

This documentation details the Red Hat responsibilities for the OpenShift Dedicated managed service.

11.1. Platform monitoring

A Red Hat Site Reliability Engineer (SRE) maintains a centralized monitoring and alerting system for all OpenShift Dedicated cluster components, SRE services, and underlying cloud provider accounts. Platform audit logs are securely forwarded to a centralized SIEM (Security Information and Event Management) system, where they may trigger configured alerts to the SRE team and are also subject to manual review. Audit logs are retained in the SIEM for one year. Audit logs for a given cluster are not deleted at the time the cluster is deleted.

11.2. Incident management

An incident is an event which results in a degradation or outage of one or more Red Hat services. An incident may be raised by a customer or Customer Experience and Engagement (CEE) member through a support case, directly by the centralized monitoring and alerting system, or directly by a member of the SRE team.

Depending on the impact on the service and customer, the incident is categorized in terms of severity.

The following is the general workflow for how Red Hat manages a new incident:

  1. An SRE first responder is alerted to a new incident, and begins an initial investigation.
  2. After the initial investigation, the incident is assigned an incident lead, who coordinates the recovery efforts.
  3. The incident lead manages all communication and coordination around recovery, including any relevant notifications and/or support case updates.
  4. The incident is recovered.
  5. The incident is documented and a root cause analysis is performed within 3 business days of the incident.
  6. A Root Cause Analysis (RCA) draft document is shared with the customer within 7 business days of the incident.

11.3. Notifications

Platform notifications are configured using email. Any customer notification will also be sent to the corresponding Red Hat account team and if applicable, the Red Hat Technical Account Manager.

The following activities may trigger notifications:

  • Platform incident
  • Performance degradation
  • Cluster capacity warnings
  • Critical vulnerabilities and resolution
  • Upgrade scheduling

11.4. Backup and recovery

All OpenShift Dedicated clusters are backed up using cloud provider snapshots. Notably, this does not include customer data stored on persistent volumes. All snapshots are taken using the appropriate cloud provider snapshot APIs and are uploaded to a secure object storage bucket (S3 in AWS, and GCS in Google Cloud) in the same account as the cluster.

Component | Snapshot frequency | Retention | Notes

Full object store backup, all SRE-managed cluster Persistent Volumes (PVs) | Daily | 7 days | This is a full backup of all Kubernetes objects, such as etcd, as well as all SRE-managed PVs in the cluster.

Full object store backup, all SRE-managed cluster Persistent Volumes (PVs) | Weekly | 30 days | This is a full backup of all Kubernetes objects, such as etcd, as well as all SRE-managed PVs in the cluster.

Full object store backup | Hourly | 24 hours | This is a full backup of all Kubernetes objects, such as etcd. No PVs are backed up in this backup schedule.

Node root volume | Never | N/A | Nodes are considered to be short-term. Nothing critical should be stored on a node’s root volume.

  • Red Hat SRE rehearses recovery processes quarterly.
  • Red Hat does not commit to any Recovery Point Objective (RPO) or Recovery Time Objective (RTO).
  • Customers should take regular backups of their data.
  • Backups performed by SRE are taken as a precautionary measure only. They are stored in the same region as the cluster.
  • Customers may access SRE backup data on request via a support case.
  • Red Hat highly encourages customers to deploy multi-AZ clusters with workloads that follow Kubernetes best practices to ensure high availability within a region.
  • In the event an entire cloud Region is unavailable, customers must install a new cluster in a different region and restore their apps using their backup data.

11.5. Cluster capacity

Evaluating and managing cluster capacity is a responsibility that is shared between Red Hat and the customer. Red Hat SRE is responsible for the capacity of all master and infrastructure nodes on the cluster.

Red Hat SRE also evaluates cluster capacity during upgrades and in response to cluster alerts. The impact of a cluster upgrade on capacity is evaluated as part of the upgrade testing process to ensure that capacity is not negatively impacted by new additions to the cluster. During a cluster upgrade, additional worker nodes are added to make sure that total cluster capacity is maintained during the upgrade process.

Capacity evaluations by SRE staff also happen in response to alerts from the cluster, once usage thresholds are exceeded for a certain period of time. Such alerts may also result in a notification to the customer.

Chapter 12. Change management

Cluster changes are initiated in one of two ways:

  1. A customer initiates changes through self-service capabilities like cluster deployment, worker node scaling, and cluster deletion.
  2. SRE initiates a change through Operator-driven capabilities such as upgrades, patching, or configuration changes.

Change history is captured in the Cluster History section of the OpenShift Cluster Manager (OCM) Overview tab and is available to customers. This includes logs from the following changes:

  • Adding or removing identity providers
  • Adding or removing users to/from the dedicated-admins group
  • Scaling the cluster compute nodes
  • Scaling the cluster load balancer
  • Scaling the cluster persistent storage
  • Upgrading the cluster

SRE-initiated changes that require manual intervention generally follow this procedure:

  • Preparing for Change

    • Change characteristics are identified and a gap analysis against current state is performed.
    • Change steps are documented and validated.
    • Communication plan and schedule is shared with all stakeholders.
    • CI/CD and end-to-end tests are updated to automate change validation.
    • Change request capturing change details is submitted for management approval.
  • Managing Change

    • Automated nightly CI/CD jobs pick up the change and run tests.
    • The change is made to Integration and Stage environments, and manually validated before updating the customer cluster.
    • Major change notifications are sent before and after the event.
  • Reinforcing the Change

    • Feedback on the change is collected and analyzed.
    • Potential gaps are diagnosed in order to understand resistance and automate similar change requests.
    • Corrective actions are implemented.
Note

SREs consider manual changes a failure; manual intervention is used only as a fallback process.

12.1. Configuration management

The infrastructure and configuration of the OpenShift Dedicated environment is managed as code. Red Hat SRE manages changes to the OpenShift Dedicated environment using a GitOps workflow and an automated CI/CD pipeline.

Each proposed change undergoes a series of automated verifications immediately upon check-in. Changes are then deployed to a Staging environment where they undergo automated integration testing. Finally, changes are deployed to the Production environment. Each step is fully automated.

An authorized SRE reviewer must approve advancement to each step. The reviewer may not be the same individual who proposed the change. All changes and approvals are fully auditable as part of the GitOps workflow.

12.2. Patch management

OpenShift Container Platform software and the underlying immutable Red Hat Enterprise Linux CoreOS (RHCOS) operating system image are patched for bugs and vulnerabilities as a side effect of regular z-stream upgrades. Read more about RHCOS architecture in the OpenShift Container Platform documentation.

12.3. Release management

OpenShift Dedicated clusters are upgraded as frequently as weekly to ensure that the latest security patches and bug fixes are applied.

Patch-level upgrades, also referred to as z-stream upgrades (e.g. 4.3.18 to 4.3.19), are automatically deployed on Tuesdays. New z-stream releases are tested nightly with automated OpenShift Dedicated integration testing and released only once validated in the OSD environment.

Minor version upgrades, also referred to as y-stream upgrades (e.g. 4.3 to 4.4), are coordinated with customers via email notification.

Customers can review the history of all cluster upgrade events in their OCM web console.

Chapter 13. Identity and access management

Most access by SRE teams is done using cluster operators through automated configuration management.

13.1. SRE access to all OpenShift Dedicated clusters

SREs access OpenShift Dedicated clusters through the web console or command line tools. Authentication requires Multi-Factor Authentication (MFA) with industry-standard requirements for password complexity and account lockouts. SREs must authenticate as individuals to ensure auditability. All authentication attempts are logged to a Security Information and Event Management (SIEM) system.

SREs access private clusters using an encrypted tunnel through a hardened SRE Support Pod running in the cluster. Connections to the SRE Support Pod are permitted only from a secured Red Hat network using an IP allow-list. In addition to the cluster authentication controls described above, authentication to the SRE Support Pod is controlled using SSH keys. SSH key authorization is limited to SRE staff and automatically synchronized with Red Hat corporate directory data. Corporate directory data is secured and controlled by HR systems, including management review, approval, and audits.

13.2. Privileged access controls in OpenShift Dedicated

Red Hat SRE adheres to the principle of least privilege when accessing OpenShift Dedicated and public cloud provider components. There are four basic categories of manual SRE access:

  • SRE admin access through the Red Hat Portal with normal two-factor authentication and no privileged elevation.
  • SRE admin access through the Red Hat corporate SSO with normal two-factor authentication and no privileged elevation.
  • OpenShift elevation, which is a manual elevation using Red Hat SSO. It is limited to 2 hours, is fully audited, and requires management approval.
  • Cloud provider access/elevation, which is a manual elevation for cloud provider console access. It is limited to 60 minutes, is fully audited, and requires management approval.

Each of these access types has a different level of access to components:

Component | Typical SRE admin access (Red Hat Portal) | Typical SRE admin access (Red Hat SSO) | OpenShift elevation | Cloud provider access/elevation

OpenShift Cluster Manager (OCM) | R/W | No access | No access | No access

OpenShift console | No access | R/W | R/W | No access

Node operating system | No access | A specific list of elevated OS and network permissions | A specific list of elevated OS and network permissions | No access

AWS Console | No access | No access, but this is the account used to request cloud provider access | No access | All cloud provider permissions using the SRE identity

13.3. SRE access to cloud infrastructure accounts

Red Hat personnel do not access cloud infrastructure accounts in the course of routine OpenShift Dedicated operations. For emergency troubleshooting purposes, Red Hat SREs have well-defined and auditable procedures to access cloud infrastructure accounts.

In AWS, SREs generate a short-lived AWS access token for the osdManagedAdminSRE user using the AWS Security Token Service (STS). Access to the STS token is audit-logged and traceable back to individual users. The osdManagedAdminSRE user has the AdministratorAccess IAM policy attached.

In Google Cloud, SREs access resources after being authenticated against Red Hat’s SAML identity provider (IDP). The IDP authorizes tokens that have time-to-live expirations. The issuance of the token is auditable by corporate Red Hat IT and linked back to an individual user.

13.4. Red Hat support access

Members of the Red Hat CEE team will typically have read-only access to parts of the cluster. Specifically, CEE has limited access to the core and product namespaces and does not have access to the customer namespaces.

Role | Core namespace | Layered product namespace | Customer namespace | Cloud infrastructure account*

OpenShift SRE | Read: All; Write: Very limited [1] | Read: All; Write: None | Read: None [2]; Write: None | Read: All [3]; Write: All [3]

CEE | Read: All; Write: None | Read: All; Write: None | Read: None [2]; Write: None | Read: None; Write: None

Customer administrator | Read: None; Write: None | Read: None; Write: None | Read: All; Write: All | Read: Limited [4]; Write: Limited [4]

Customer user | Read: None; Write: None | Read: None; Write: None | Read: Limited [5]; Write: Limited [5] | Read: None; Write: None

Everybody else | Read: None; Write: None | Read: None; Write: None | Read: None; Write: None | Read: None; Write: None

* Cloud Infrastructure Account refers to the underlying AWS or Google Cloud account

  1. Limited to addressing common use cases such as failing deployments, upgrading a cluster, and replacing bad worker nodes.
  2. Red Hat associates have no access to customer data by default.
  3. SRE access to the cloud infrastructure account is a "break-glass" procedure for exceptional troubleshooting during a documented incident.
  4. Customer Administrator has limited access to the cloud infrastructure account console through Cloud Infrastructure Access.
  5. Limited to what is granted through RBAC by the Customer Administrator, as well as namespaces created by the user.

13.5. Customer access

Customer access is limited to namespaces created by the customer and permissions that are granted using RBAC by the Customer Administrator role. Access to the underlying infrastructure or product namespaces is generally not permitted without 'cluster-admin' access. More information on customer access and authentication can be found in the Understanding Authentication section of the documentation.

13.6. Access approval and review

New SRE user access requires management approval. Separated or transferred SRE accounts are removed as authorized users through an automated process. Additionally, SRE performs periodic access review including management sign-off of authorized user lists.

Chapter 14. Security and regulation compliance

Security and regulation compliance includes tasks such as the implementation of security controls and compliance certification.

14.1. Data classification

Red Hat defines and follows a data classification standard to determine the sensitivity of data and to highlight inherent risk to the confidentiality and integrity of that data while it is collected, used, transmitted, stored, and processed. Customer-owned data is classified at the highest level of sensitivity and handling requirements.

14.2. Data management

OpenShift Dedicated uses cloud provider services to help securely manage keys for encrypted data (AWS KMS and Google Cloud KMS). These keys are used for control plane data volumes which are encrypted by default. Persistent volumes for customer applications also use these cloud services for key management.

When a customer deletes their OpenShift Dedicated cluster, all cluster data is permanently deleted, including control plane data volumes, customer application data volumes (PVs), and backup data.

14.3. Vulnerability management

Red Hat performs periodic vulnerability scanning of OpenShift Dedicated using industry standard tools. Identified vulnerabilities are tracked to their remediation according to timelines based on severity. Vulnerability scanning and remediation activities are documented for verification by third party assessors in the course of compliance certification audits.

14.4. Network security

14.4.1. Firewall and DDoS protection

Each OpenShift Dedicated cluster is protected by a secure network configuration at the cloud infrastructure level using firewall rules (AWS Security Groups or Google Cloud Compute Engine firewall rules). OpenShift Dedicated customers on AWS are also protected against DDoS attacks with AWS Shield Standard.

14.4.2. Private clusters and network connectivity

Customers can optionally configure their OpenShift Dedicated cluster endpoints (web console, API, and application router) to be made private so that the cluster control plane and/or applications are not accessible from the Internet.

For AWS, customers can configure a private network connection to their OpenShift Dedicated cluster through AWS VPC peering, AWS VPN, or AWS Direct Connect.

Note

At this time, private clusters are not supported for OpenShift Dedicated clusters on Google Cloud.

14.4.3. Cluster network access controls

Customers can configure fine-grained network access control rules per project by using NetworkPolicy objects and the OpenShift SDN.
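
As a sketch of such a per-project rule, the following NetworkPolicy restricts ingress to pods originating in the same namespace; the policy name is hypothetical and the rule shown is one common pattern, not a required configuration.

```yaml
# Illustrative NetworkPolicy: allow ingress to all pods in this project
# only from pods in the same namespace; other ingress traffic is denied
# once a policy selects the pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}              # applies to every pod in the project
  ingress:
  - from:
    - podSelector: {}          # any pod in the same namespace
```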

14.5. Penetration testing

Red Hat performs periodic penetration tests against OpenShift Dedicated. Tests are performed by an independent internal team using industry standard tools and best practices.

Any issues that may be discovered are prioritized based on severity. Any issues found belonging to open source projects are shared with the community for resolution.

14.6. Compliance

OpenShift Dedicated follows common industry best practices for security and controls.

OpenShift Dedicated is certified for SOC 2 Type I on AWS.

OpenShift Dedicated is ISO 27001 certified for both AWS and Google Cloud.

Chapter 15. Disaster recovery

OpenShift Dedicated provides disaster recovery for failures that occur at the pod, worker node, infrastructure node, master node, and availability zone levels.

All disaster recovery requires that the customer use best practices for deploying highly available applications, storage, and cluster architecture (e.g. single-zone deployment vs. multi-zone deployment) to account for the level of desired availability.

A single-zone cluster will not provide disaster avoidance or recovery in the event of an availability zone or region outage. Multiple single-zone clusters with customer-maintained failover can account for outages at the zone or region level.

A multi-zone cluster will not provide disaster avoidance or recovery in the event of a full region outage. Multiple multi-zone clusters with customer-maintained failover can account for outages at the region level.

Part IV. Understanding availability for OpenShift Dedicated

Availability and disaster avoidance are extremely important aspects of any application platform. OpenShift Dedicated provides many protections against failures at several levels, but customer-deployed applications must be appropriately configured for high availability. In addition, in order to account for cloud provider outages that may occur, other options are available, such as deploying a cluster across multiple availability zones or maintaining multiple clusters with failover mechanisms.

Chapter 16. Potential points of failure

OpenShift Container Platform provides many features and options for protecting your workloads against downtime, but applications must be architected appropriately to take advantage of these features.

OpenShift Dedicated can help further protect you against many common Kubernetes issues by adding Red Hat SRE support and the option to deploy a multi-zone cluster, but there are a number of ways in which a container or infrastructure can still fail. By understanding potential points of failure, you can understand risks and appropriately architect both your applications and your clusters to be as resilient as necessary at each specific level.

Note

An outage can occur at several different levels of infrastructure and cluster components.

16.1. Container or pod failure

By design, pods are meant to exist for a short time. Appropriately scaling services so that multiple instances of your application pods are running will protect against issues with any individual pod or container. OpenShift’s node scheduler can also make sure these workloads are distributed across different worker nodes to further improve resiliency.
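
Distributing replicas across worker nodes, as described above, can be encouraged with pod anti-affinity. The following pod template fragment is a sketch; the application label is hypothetical.

```yaml
# Pod spec fragment: prefer scheduling replicas of the (hypothetical)
# "example-app" on different worker nodes so that a single node failure
# does not take down all instances at once.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: example-app
        topologyKey: kubernetes.io/hostname
```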

When accounting for possible pod failures, it is also important to understand how storage is attached to your applications. Single persistent volumes attached to single pods will not be able to leverage the full benefits of pod scaling, whereas replicated databases, database services, or shared storage will.

To avoid disruption to your applications during planned maintenance, such as upgrades, it’s important to define a Pod Disruption Budget. These are part of the Kubernetes API and can be managed with oc commands like other object types. They allow the specification of safety constraints on Pods during operations, such as draining a node for maintenance.
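
A minimal Pod Disruption Budget might look like the following; the name, label, and threshold are illustrative only.

```yaml
# Illustrative PodDisruptionBudget: keep at least 2 replicas of the
# (hypothetical) "example-app" running during voluntary disruptions,
# such as draining a node for an upgrade.
apiVersion: policy/v1beta1     # policy/v1 in newer Kubernetes versions
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example-app
```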

16.2. Worker node failure

Worker nodes are the virtual machines that contain your application pods. By default, an OpenShift Dedicated cluster will have a minimum of four worker nodes for a single availability-zone cluster. In the event of a worker node failure, pods will be relocated to functioning worker nodes, as long as there is enough capacity, until any issue with an existing node is resolved or the node is replaced. More worker nodes means more protection against single node outages, and ensures proper cluster capacity for rescheduled pods in the event of a node failure.

Note

When accounting for possible node failures, it is also important to understand how storage is affected.

16.3. Cluster failure

OpenShift Dedicated clusters have at least three master nodes and three infrastructure nodes that are preconfigured for high availability, either in a single zone or across multiple zones, depending on the type of cluster you have selected. This means that master and infrastructure nodes have the same resiliency as worker nodes, with the added benefit of being managed completely by Red Hat.

In the event of a complete master outage, the OpenShift APIs will not function, but existing worker node pods will continue to run unaffected. However, if a pod or node outage occurs at the same time, the masters must recover before new pods can be scheduled or new nodes added.

All services running on infrastructure nodes are configured by Red Hat to be highly available and distributed across infrastructure nodes. In the event of a complete infrastructure outage, these services will be unavailable until these nodes have been recovered.

16.4. Zone failure

A zone failure from a public cloud provider will affect all virtual components, such as worker nodes, block or shared storage, and load balancers that are specific to a single availability zone. To protect against a zone failure, OpenShift Dedicated provides the option for clusters that are distributed across three availability zones, called multi-availability zone clusters. Existing stateless workloads will be redistributed to unaffected zones in the event of an outage, as long as there is enough capacity.

16.5. Storage failure

If you have deployed a stateful application, then storage is a critical component and must be accounted for when thinking about high availability. A single block storage PV is unable to withstand outages even at the pod level. The best ways to maintain availability of storage are to use replicated storage solutions, shared storage that is unaffected by outages, or a database service that is independent of the cluster.

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.