Prepare your environment

Red Hat OpenShift Service on AWS 4

Planning, limits, and scalability for Red Hat OpenShift Service on AWS

Red Hat OpenShift Documentation Team

Abstract

This document provides planning considerations for Red Hat OpenShift Service on AWS (ROSA) cluster deployments, including information about cluster limits and scalability.

Chapter 1. Prerequisites checklist for deploying ROSA using STS

This is a checklist of prerequisites needed to create a Red Hat OpenShift Service on AWS (ROSA) classic cluster with STS.

Note

This is a high level checklist and your implementation can vary.

Before running the installation process, verify that you are deploying from a machine that has access to:

  • The API services for the cloud to which you provision.
  • api.openshift.com, oidc.op1.openshiftapps.com, and sso.redhat.com.
  • The hosts on the network that you provision.
  • The internet, to obtain installation media.
Important

Starting with version 1.2.7 of the ROSA CLI, all OIDC provider endpoint URLs on new clusters use Amazon CloudFront and the oidc.op1.openshiftapps.com domain. This change improves access speed, reduces latency, and improves resiliency for new clusters created with the ROSA CLI 1.2.7 or later. There are no supported migration paths for existing OIDC provider configurations.

1.1. Accounts and CLIs Prerequisites

This section describes the accounts you need and the CLIs you must install to deploy the cluster.

1.1.1. AWS account

1.1.2. AWS CLI (aws)

  • Install the AWS CLI from AWS Command Line Interface if you have not already done so.
  • Configure the CLI:

    1. Enter aws configure in the terminal:

      $ aws configure
    2. Enter the AWS Access Key ID and press Enter.
    3. Enter the AWS Secret Access Key and press Enter.
    4. Enter the default region that you want to deploy into.
    5. Enter the output format that you want, "table" or "json".
    6. Verify the output by running the following command (example output is shown after this procedure):

       $ aws sts get-caller-identity
    7. Ensure that the service role for ELB already exists by running:

      $ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"
      1. If it does not exist, run:

        $ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
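
For reference, the aws sts get-caller-identity command in step 6 returns output similar to the following. The user ID, account ID, and ARN values are illustrative placeholders:

Example output

{
    "UserId": "AIDACKCEVSQ6C2EXAMPLE",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/<user_name>"
}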

1.1.3. Red Hat account

1.1.4. ROSA CLI (rosa)

  1. Enable ROSA from your AWS account on the AWS console if you have not already.
  2. Install the CLI by following Installing the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, or download it from the OpenShift console.
  3. Enter rosa login in a terminal. This prompts you to go to the token page through the console:

    $ rosa login
  4. Log in with your Red Hat account credentials.
  5. Click the Load token button.
  6. Copy the token, paste it back into the CLI prompt, and press Enter.

    • Alternatively, you can copy the full $ rosa login --token=<abc..> command from the token page and paste it in the terminal:

      $ rosa login --token=<abc..>
  7. Verify your credentials by running:

    $ rosa whoami
  8. Ensure you have sufficient quota by running:

    $ rosa verify quota
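
If the quota check passes, the rosa verify quota command prints informational messages similar to the following. The exact wording can vary between ROSA CLI versions:

Example output

I: Validating AWS quota...
I: AWS quota ok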

1.1.5. OpenShift CLI (oc)

  1. Install the OpenShift CLI by following Getting started with the OpenShift CLI, or download it from the Command-line interface (CLI) tools page of the OpenShift console.
  2. Verify that the OpenShift CLI has been installed correctly by running:

    $ rosa verify openshift-client

After you have installed and enabled these prerequisites, proceed to the next steps.

1.2. SCP Prerequisites

ROSA clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.

  • Ensure that your organization’s SCPs are not more restrictive than the roles and policies required by the cluster.
  • Ensure that your SCP is configured to allow the required aws-marketplace:Subscribe permission when you choose Enable ROSA from the console. For more details, see AWS Organizations service control policy (SCP) is denying required AWS Marketplace permissions.
  • When you create a ROSA classic cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.

    • This OIDC provider configuration relies on a public key that is located in the us-east-1 AWS region.
    • Customers with AWS SCPs must allow the use of the us-east-1 AWS region, even if these clusters are deployed in a different region.

1.3. Networking Prerequisites

Prerequisites needed from a networking standpoint.

1.3.1. Firewall

1.3.2. Additional custom security groups

When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Complete these prerequisites before you create the cluster:

  • Create the custom security groups in AWS before you create the cluster (see the example after this list).
  • Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
  • You might need to request additional AWS quota for Security groups per network interface.

For more details, see the detailed requirements for Security groups.
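
A minimal example of this workflow, using the AWS CLI, is shown below. The group name, description, and VPC ID are illustrative placeholders; creating the group with the --vpc-id option associates it with the VPC that you plan to use for the cluster:

$ aws ec2 create-security-group \
    --group-name <custom_sg_name> \
    --description "Custom security group for ROSA worker nodes" \
    --vpc-id <vpc_id>

The command returns the ID of the new security group, which you can supply when you add custom security groups during cluster creation.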

1.3.3. Custom DNS

  • If you want to use custom DNS, then the ROSA installer must be able to use VPC DNS with default DHCP options so it can resolve hosts locally.

    • To do so, run aws ec2 describe-dhcp-options and check whether the VPC is using the VPC resolver (an abbreviated example of the output follows this list):

      $ aws ec2 describe-dhcp-options
  • Otherwise, the upstream DNS will need to forward the cluster scope to this VPC so the cluster can resolve internal IPs and services.
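
For reference, the output of aws ec2 describe-dhcp-options looks similar to the following abbreviated example. A domain-name-servers value of AmazonProvidedDNS indicates that the VPC is using the VPC resolver. The DHCP options set ID is an illustrative placeholder:

Example output

{
    "DhcpOptions": [
        {
            "DhcpOptionsId": "dopt-0123456789abcdef0",
            "DhcpConfigurations": [
                {
                    "Key": "domain-name-servers",
                    "Values": [
                        {
                            "Value": "AmazonProvidedDNS"
                        }
                    ]
                }
            ]
        }
    ]
}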

Chapter 2. Detailed requirements for deploying ROSA using STS

Red Hat OpenShift Service on AWS (ROSA) provides a model that allows Red Hat to deploy clusters into a customer’s existing Amazon Web Service (AWS) account.

Tip

AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS (ROSA) because it provides enhanced security.

Ensure that the following AWS prerequisites are met before installing ROSA with STS.

Important

When you create a ROSA cluster using AWS STS, an associated AWS OpenID Connect (OIDC) identity provider is created as well. This OIDC provider configuration relies on a public key that is located in the us-east-1 AWS region. Customers with AWS SCPs must allow the use of the us-east-1 AWS region, even if these clusters are deployed in a different region.

2.1. Customer requirements when using STS for deployment

The following prerequisites must be complete before you deploy a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS).

2.1.1. Account

  • You must ensure that the AWS limits are sufficient to support Red Hat OpenShift Service on AWS provisioned within your AWS account. Running the rosa verify quota command in the CLI validates that you have the required quota to run a cluster.

    Note

    Quota verification checks your AWS quota, but it does not compare your consumption to your AWS quota. See the "Limits and scalability" link in Additional resources for more information.

  • If SCP policies are applied and enforced, these policies must not be more restrictive than the roles and policies required by the cluster.
  • Your AWS account should not be transferable to Red Hat.
  • You should not impose additional AWS usage restrictions beyond the defined roles and policies on Red Hat activities. Imposing restrictions will severely hinder Red Hat’s ability to respond to incidents.
  • You may deploy native AWS services within the same AWS account.
  • Your account must have a service-linked role set up as it is required for Elastic Load Balancing (ELB) to be configured. See the "Creating the Elastic Load Balancing (ELB) service-linked role" link in the Additional resources for information about creating a service-linked role for your ELB if you have not created a load balancer in your AWS account previously.

    Note

    You are encouraged, but not required, to deploy resources in a Virtual Private Cloud (VPC) separate from the VPC hosting Red Hat OpenShift Service on AWS and other Red Hat supported services.

2.1.2. Access requirements

  • Red Hat must have AWS console access to the customer-provided AWS account. Red Hat protects and manages this access.
  • You must not use the AWS account to elevate your permissions within the Red Hat OpenShift Service on AWS (ROSA) cluster.
  • Actions available in the ROSA CLI (rosa) or OpenShift Cluster Manager console must not be directly performed in your AWS account.
  • You do not need to have a preconfigured domain to deploy ROSA clusters. If you wish to use a custom domain, see the Additional resources for information.

2.1.3. Support requirements

  • Red Hat recommends that the customer have at least Business Support from AWS.
  • Red Hat may have permission from the customer to request AWS support on their behalf.
  • Red Hat may have permission from the customer to request AWS resource limit increases on the customer’s account.
  • Red Hat manages the restrictions, limitations, expectations, and defaults for all Red Hat OpenShift Service on AWS clusters in the same manner, unless otherwise specified in this requirements section.

2.1.4. Security requirements

  • Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses.
  • Red Hat must have egress allowed to the documented domains. See the "AWS firewall prerequisites" section for the designated domains.

Additional resources

2.1.5. Requirements for using OpenShift Cluster Manager

The following sections describe the requirements for OpenShift Cluster Manager. If you use the CLI tools exclusively, you can disregard these requirements.

To use OpenShift Cluster Manager, you must link your AWS accounts. This linking concept is also known as account association.

2.1.5.1. AWS account association

Red Hat OpenShift Service on AWS (ROSA) cluster-provisioning tasks require linking ocm-role and user-role IAM roles to your AWS account using your Amazon Resource Name (ARN).

The ocm-role ARN is stored as a label in your Red Hat organization while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform the necessary tasks in the AWS account.

2.1.5.2. Linking your AWS account

You can link your AWS account to existing IAM roles by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa.

Prerequisites

  • You have an AWS account.
  • You are using OpenShift Cluster Manager to create clusters.
  • You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
  • You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
  • You have created your ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

    $ rosa list ocm-role
    $ rosa list user-role

    If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.
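
For reference, the rosa list ocm-role command returns output similar to the following. The role name and ARN are illustrative, and the exact column layout can vary between ROSA CLI versions; rosa list user-role produces a similar listing:

Example output

I: Fetching ocm roles
ROLE NAME                      ROLE ARN                                                           LINKED  ADMIN
ManagedOpenShift-OCM-Role-182  arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-182  No      No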

Procedure

  1. From the CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):

    Note

    You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it is visible for all users in the organization.

    $ rosa link ocm-role --role-arn <arn>

    Example output

    I: Linking OCM role
    ? Link the '<AWS ACCOUNT ID>` role with organization '<ORG ID>'? Yes
    I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'

  2. From the CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):

    $ rosa link user-role --role-arn <arn>

    Example output

    I: Linking User role
    ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
    I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'

Additional resources

2.1.5.3. Associating multiple AWS accounts with your Red Hat organization

You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS (ROSA) clusters on any of the associated AWS accounts from your Red Hat organization.

With this feature, you can create clusters in different AWS regions by using multiple AWS profiles as region-bound environments.

Prerequisites

  • You have an AWS account.
  • You are using OpenShift Cluster Manager to create clusters.
  • You have the permissions required to install AWS account-wide roles.
  • You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
  • You have created your ocm-role and user-role IAM roles.

Procedure

To associate an additional AWS account, first create a profile in your local AWS configuration (an example is shown after the following list). Then, associate the account with your Red Hat organization by creating the ocm-role, user-role, and account roles in the additional AWS account.

To create the roles in an additional region, specify the --profile <aws_profile> parameter when running the rosa create commands and replace <aws_profile> with the additional account profile name:

  • To specify an AWS account profile when creating an OpenShift Cluster Manager role:

    $ rosa create --profile <aws_profile> ocm-role
  • To specify an AWS account profile when creating a user role:

    $ rosa create --profile <aws_profile> user-role
  • To specify an AWS account profile when creating the account roles:

    $ rosa create --profile <aws_profile> account-roles
Note

If you do not specify a profile, the default AWS profile is used.
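
For example, you can create the named profile for the additional AWS account by running aws configure with the --profile option. The profile name is an illustrative placeholder; the command prompts you for the access key ID, secret access key, default region, and output format for the additional account:

$ aws configure --profile <aws_profile>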

2.2. Requirements for deploying a cluster in an opt-in region

An AWS opt-in region is a region that is not enabled by default. If you want to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS) in an opt-in region, you must meet the following requirements:

  • The region must be enabled in your AWS account. For more information about enabling opt-in regions, see Managing AWS Regions in the AWS documentation.
  • The security token version in your AWS account must be set to version 2. You cannot use version 1 security tokens for opt-in regions.

    Important

    Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.

2.2.1. Setting the AWS security token version

If you want to create a Red Hat OpenShift Service on AWS (ROSA) cluster with the AWS Security Token Service (STS) in an AWS opt-in region, you must set the security token version to version 2 in your AWS account.

Prerequisites

  • You have installed and configured the latest AWS CLI on your installation host.

Procedure

  1. List the ID of the AWS account that is defined in your AWS CLI configuration:

    $ aws sts get-caller-identity --query Account --output json

    Ensure that the output matches the ID of the relevant AWS account.

  2. List the security token version that is set in your AWS account:

    $ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json

    Example output

    1

  3. To update the security token version to version 2 for all regions in your AWS account, run the following command:

    $ aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token
    Important

    Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
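
To confirm the change, you can run the get-account-summary command from step 2 again. An output value of 2 indicates that version 2 security tokens are now issued for all regions:

$ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json

Example output

2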

2.3. Red Hat managed IAM references for AWS

With the STS deployment model, Red Hat is no longer responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles.

2.4. Provisioned AWS Infrastructure

This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed Red Hat OpenShift Service on AWS (ROSA) cluster. For a more detailed listing of all provisioned AWS components, see the OpenShift Container Platform documentation.

2.4.1. EC2 instances

AWS EC2 instances are required for deploying the control plane and data plane functions of ROSA in the AWS public cloud.

Instance types can vary for control plane and infrastructure nodes, depending on the worker node count. At a minimum, the following EC2 instances will be deployed:

  • Three m5.2xlarge control plane nodes
  • Two r5.xlarge infrastructure nodes
  • Two m5.xlarge customizable worker nodes

For further guidance on worker node counts, see the information about initial planning considerations in the "Limits and scalability" topic listed in the "Additional resources" section of this page.

2.4.2. Amazon Elastic Block Store storage

Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage.

Volume requirements for each EC2 instance:

  • Control Plane Volume

    • Size: 350GB
    • Type: gp3
    • Input/Output Operations Per Second: 1000
  • Infrastructure Volume

    • Size: 300GB
    • Type: gp3
    • Input/Output Operations Per Second: 900
  • Worker Volume

    • Size: 300GB
    • Type: gp3
    • Input/Output Operations Per Second: 900
Note

Clusters deployed before the release of OpenShift Container Platform 4.11 use gp2 type storage by default.

2.4.3. Elastic Load Balancing

Up to two Network Load Balancers are provisioned for the API, and up to two Classic Load Balancers are provisioned for the application router. For more information, see the ELB documentation for AWS.

2.4.4. S3 storage

The image registry is backed by AWS S3 storage. Pruning of resources is performed regularly to optimize S3 usage and cluster performance.

Note

Two buckets are required with a typical size of 2TB each.

2.4.5. VPC

Customers should expect to see one VPC per cluster. Additionally, the VPC will need the following configurations:

  • Subnets: Two subnets for a cluster with a single availability zone, or six subnets for a cluster with multiple availability zones.

    Note

    A public subnet connects directly to the internet through an internet gateway. A private subnet connects to the internet through a network address translation (NAT) gateway.

  • Route tables: One route table per private subnet, and one additional table per cluster.
  • Internet gateways: One internet gateway per cluster.
  • NAT gateways: One NAT gateway per public subnet.

Figure 2.1. Sample VPC Architecture

VPC Reference Architecture

2.4.6. Security groups

AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances. You must ensure the ports required for the OpenShift installation are open on your network and configured to allow access between hosts.

Table 2.1. Required ports for default security groups

Group                  | Type                    | IP Protocol | Port range
MasterSecurityGroup    | AWS::EC2::SecurityGroup | icmp        | 0
                       |                         | tcp         | 22
                       |                         | tcp         | 6443
                       |                         | tcp         | 22623
WorkerSecurityGroup    | AWS::EC2::SecurityGroup | icmp        | 0
                       |                         | tcp         | 22
BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp         | 22
                       |                         | tcp         | 19531

2.4.6.1. Additional custom security groups

When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations:

  • You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances.
  • You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC (see the verification example after this list).
  • You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for ROSA, see Required AWS service quotas in Prepare your environment. For information on requesting an AWS quota increase, see Requesting a quota increase.
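
As a quick check of the VPC association requirement, you can list the security groups that are attached to your target VPC by filtering on its ID. The VPC ID is an illustrative placeholder:

$ aws ec2 describe-security-groups \
    --filters Name=vpc-id,Values=<vpc_id> \
    --query 'SecurityGroups[*].{ID:GroupId,Name:GroupName}' \
    --output table

Your custom security groups must appear in this list before you create the cluster.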

2.6. Next steps

2.7. Additional resources

Chapter 3. ROSA IAM role resources

The Red Hat OpenShift Service on AWS (ROSA) web UI requires specific permissions on your AWS account that create a trust relationship. This trust relationship provides the end-user experience at OpenShift Cluster Manager and for the rosa command-line interface (CLI).

This trust relationship is achieved through the creation and association of the ocm-role AWS IAM role. This role has a trust policy with the AWS installer that links your Red Hat account to your AWS account. In addition, you also need a user-role AWS IAM role for each web UI user, which serves to identify these users. This user-role AWS IAM role has no permissions.

The AWS IAM roles required to use OpenShift Cluster Manager are:

  • ocm-role
  • user-role

Whether you manage your clusters using the ROSA CLI (rosa) or OpenShift Cluster Manager web UI, you must create the account-wide roles, known as account-roles in the ROSA CLI, by using the ROSA CLI. These account roles are necessary for your first cluster, and these roles can be used across multiple clusters. These required account roles are:

  • Worker-Role
  • Support-Role
  • Installer-Role
  • ControlPlane-Role
Note

Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication.

For more information about creating these roles, see Account-wide IAM role and policy reference.

Cluster-specific Operator roles, known as operator-roles in the ROSA CLI, obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, ingress, and registry. These roles are required by the cluster that you create. These required Operator roles are:

  • <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credentials
  • <cluster_name>-<hash>-openshift-cloud-network-config-controller-credentials
  • <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials
  • <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-credentials
  • <cluster_name>-<hash>-openshift-image-registry-installer-cloud-credentials
  • <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials

For more information on creating these roles, see Cluster-specific Operator IAM role reference.

3.1. About the ocm-role IAM resource

You must create the ocm-role IAM resource to enable a Red Hat organization of users to create Red Hat OpenShift Service on AWS (ROSA) clusters. Within the context of linking to AWS, a Red Hat organization is a single user within OpenShift Cluster Manager.

Some considerations for your ocm-role IAM resource are:

  • Only one ocm-role IAM role can be linked per Red Hat organization; however, you can have any number of ocm-role IAM roles per AWS account. The web UI requires that only one of these roles can be linked at a time.
  • Any user in a Red Hat organization may create and link an ocm-role IAM resource.
  • Only the Red Hat Organization Administrator can unlink an ocm-role IAM resource. This limitation is to protect other Red Hat organization members from disturbing the interface capabilities of other users.

    Note

    If you just created a Red Hat account that is not part of an existing organization, this account is also the Red Hat Organization Administrator.

  • See "Understanding the OpenShift Cluster Manager role" in the Additional resources of this section for a list of the AWS permissions policies for the basic and admin ocm-role IAM resources.

Using the ROSA CLI (rosa), you can link your IAM resource when you create it.

Note

"Linking" or "associating" your IAM resources with your AWS account means creating a trust-policy with your ocm-role IAM role and the Red Hat OpenShift Cluster Manager AWS role. After creating and linking your IAM resource, you see a trust relationship from your ocm-role IAM resource in AWS with the arn:aws:iam::7333:role/RH-Managed-OpenShift-Installer resource.

After a Red Hat Organization Administrator has created and linked an ocm-role IAM resource, all organization members may want to create and link their own user-role IAM role. This IAM resource needs to be created and linked only once per user. If another user in your Red Hat organization has already created and linked an ocm-role IAM resource, you need to ensure that you have created and linked your own user-role IAM role.

Additional resources

3.1.1. Creating an ocm-role IAM role

You create your ocm-role IAM roles by using the command-line interface (CLI).

Prerequisites

  • You have an AWS account.
  • You have Red Hat Organization Administrator privileges in the OpenShift Cluster Manager organization.
  • You have the permissions required to install AWS account-wide roles.
  • You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.

Procedure

  • To create an ocm-role IAM role with basic privileges, run the following command:

    $ rosa create ocm-role
  • To create an ocm-role IAM role with admin privileges, run the following command:

    $ rosa create ocm-role --admin

    This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (rosa) create your Operator roles and policies. See "Methods of account-wide role creation" in the Additional resources for more information.

Example output

I: Creating ocm role
? Role prefix: ManagedOpenShift 1
? Enable admin capabilities for the OCM role (optional): No 2
? Permissions boundary ARN (optional): 3
? Role Path (optional): 4
? Role creation mode: auto 5
I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>'
? Create the 'ManagedOpenShift-OCM-Role-182' role? Yes 6
I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN  'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182'
I: Linking OCM role
? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182 7
? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes 8
I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>'

1
A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
2
Choose if you want this role to have the additional admin permissions.
Note

You do not see this prompt if you used the --admin option.

3
The Amazon Resource Name (ARN) of the policy to set permission boundaries.
4
Specify an IAM path for the user name.
5
Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In auto mode, you receive slightly different prompts to create the AWS roles.
6
The auto method asks if you want to create a specific ocm-role using your prefix.
7
Confirm that you want to associate your IAM role with OpenShift Cluster Manager.
8
Links the created role with your AWS organization.
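
After the command completes, you can confirm that the role was created and linked by listing the OCM roles. A value of Yes in the Linked column (column names can vary by CLI version) indicates that the role is associated with your Red Hat organization:

$ rosa list ocm-role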

3.2. About the user-role IAM role

You need to create a user-role IAM role per web UI user to enable those users to create ROSA clusters.

Some considerations for your user-role IAM role are:

  • You only need one user-role IAM role per Red Hat user account, but your Red Hat organization can have many of these IAM resources.
  • Any user in a Red Hat organization may create and link a user-role IAM role.
  • There can be numerous user-role IAM roles per AWS account per Red Hat organization.
  • Red Hat uses the user-role IAM role to identify the user. This IAM resource has no AWS account permissions.
  • Your AWS account can have multiple user-role IAM roles, but you must link each IAM role to each user in your Red Hat organization. No user can have more than one linked user-role IAM role.
Note

"Linking" or "associating" your IAM resources with your AWS account means creating a trust-policy with your user-role IAM role and the Red Hat OpenShift Cluster Manager AWS role. After creating and linking this IAM resource, you see a trust relationship from your user-role IAM role in AWS with the arn:aws:iam::710019948333:role/RH-Managed-OpenShift-Installer resource.

3.2.1. Creating a user-role IAM role

You can create your user-role IAM roles by using the command-line interface (CLI).

Prerequisites

  • You have an AWS account.
  • You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.

Procedure

  • To create a user-role IAM role with basic privileges, run the following command:

    $ rosa create user-role

    This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (rosa) create your Operator roles and policies. See "Understanding the auto and manual deployment modes" in the Additional resources for more information.

Example output

I: Creating User role
? Role prefix: ManagedOpenShift 1
? Permissions boundary ARN (optional): 2
? Role Path (optional): 3
? Role creation mode: auto 4
I: Creating ocm user role using 'arn:aws:iam::2066:user'
? Create the 'ManagedOpenShift-User.osdocs-Role' role? Yes 5
I: Created role 'ManagedOpenShift-User.osdocs-Role' with ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role'
I: Linking User role
? User Role ARN: arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role
? Link the 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' role with account '1AGE'? Yes 6
I: Successfully linked role ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' with account '1AGE'

1
A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
2
The Amazon Resource Name (ARN) of the policy to set permission boundaries.
3
Specify an IAM path for the user name.
4
Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In auto mode, you receive slightly different prompts to create the AWS roles.
5
The auto method asks if you want to create a specific user-role using your prefix.
6
Links the created role with your AWS organization.
Important

If you unlink or delete your user-role IAM role prior to deleting your cluster, an error prevents you from deleting your cluster. You must create or relink this role to proceed with the deletion process. See Repairing a cluster that cannot be deleted for more information.
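
After the command completes, you can confirm that the role was created and linked by listing the user roles. A value of Yes in the Linked column (column names can vary by CLI version) indicates that the role is associated with your Red Hat user account:

$ rosa list user-role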

3.3. AWS account association

Red Hat OpenShift Service on AWS (ROSA) cluster-provisioning tasks require linking ocm-role and user-role IAM roles to your AWS account using your Amazon Resource Name (ARN).

The ocm-role ARN is stored as a label in your Red Hat organization while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform the necessary tasks in the AWS account.

3.3.1. Linking your AWS account

You can link your AWS account to existing IAM roles by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa.

Prerequisites

  • You have an AWS account.
  • You are using OpenShift Cluster Manager to create clusters.
  • You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
  • You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
  • You have created your ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

    $ rosa list ocm-role
    $ rosa list user-role

    If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.

Procedure

  1. From the CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):

    Note

    You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it is visible for all users in the organization.

    $ rosa link ocm-role --role-arn <arn>

    Example output

    I: Linking OCM role
    ? Link the '<AWS ACCOUNT ID>` role with organization '<ORG ID>'? Yes
    I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'

  2. From the CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):

    $ rosa link user-role --role-arn <arn>

    Example output

    I: Linking User role
    ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
    I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'

3.3.2. Associating multiple AWS accounts with your Red Hat organization

You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS (ROSA) clusters on any of the associated AWS accounts from your Red Hat organization.

With this feature, you can create clusters in different AWS regions by using multiple AWS profiles as region-bound environments.

Prerequisites

  • You have an AWS account.
  • You are using OpenShift Cluster Manager to create clusters.
  • You have the permissions required to install AWS account-wide roles.
  • You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
  • You have created your ocm-role and user-role IAM roles.

Procedure

To associate an additional AWS account, first create a profile in your local AWS configuration. Then, associate the account with your Red Hat organization by creating the ocm-role, user-role, and account roles in the additional AWS account.

To create the roles in an additional region, specify the --profile <aws_profile> parameter when running the rosa create commands and replace <aws_profile> with the additional account profile name:

  • To specify an AWS account profile when creating an OpenShift Cluster Manager role:

    $ rosa create --profile <aws_profile> ocm-role
  • To specify an AWS account profile when creating a user role:

    $ rosa create --profile <aws_profile> user-role
  • To specify an AWS account profile when creating the account roles:

    $ rosa create --profile <aws_profile> account-roles
Note

If you do not specify a profile, the default AWS profile is used.

3.4. Additional resources

Chapter 4. Limits and scalability

This document details the tested cluster maximums for Red Hat OpenShift Service on AWS (ROSA) clusters, along with information about the test environment and configuration used to test the maximums. Information about control plane and infrastructure node sizing and scaling is also provided.

4.1. Cluster maximums

Consider the following tested object maximums when you plan a Red Hat OpenShift Service on AWS (ROSA) cluster installation. The table specifies the maximum limits for each tested type in a ROSA cluster.

These guidelines are based on a cluster of 102 compute (also known as worker) nodes in a multiple availability zone configuration. For smaller clusters, the maximums are lower.

Note

The OpenShift Container Platform version used in all of the tests is OCP 4.8.0.

Table 4.1. Tested cluster maximums

Maximum type                            | 4.8 tested maximum
Number of nodes                         | 102
Number of pods [1]                      | 20,400
Number of pods per node                 | 250
Number of pods per core                 | There is no default value
Number of namespaces [2]                | 3,400
Number of pods per namespace [3]        | 20,400
Number of services [4]                  | 10,000
Number of services per namespace        | 10,000
Number of back ends per service         | 10,000
Number of deployments per namespace [3] | 1,000

  1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application’s memory, CPU, and storage requirements.
  2. When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available.
  3. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
  4. Each service port and each service back end has a corresponding entry in iptables. The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data that is sent throughout the system.

In OpenShift Container Platform 4.8, half of a CPU core (500 millicore) is reserved by the system compared to previous versions of OpenShift Container Platform.

4.2. OpenShift Container Platform testing environment and configuration

The following table lists the OpenShift Container Platform environment and configuration on which the cluster maximums are tested for the AWS cloud platform.

Node                     | Type       | vCPU | RAM (GiB) | Disk type | Disk size (GiB)/IOPS | Count | Region
Control plane/etcd [1]   | m5.4xlarge | 16   | 64        | gp3       | 350 / 1,000          | 3     | us-west-2
Infrastructure nodes [2] | r5.2xlarge | 8    | 64        | gp3       | 300 / 900            | 3     | us-west-2
Workload [3]             | m5.2xlarge | 8    | 32        | gp3       | 350 / 900            | 3     | us-west-2
Compute nodes            | m5.2xlarge | 8    | 32        | gp3       | 350 / 900            | 102   | us-west-2

  1. io1 disks are used for control plane/etcd nodes in all versions prior to 4.10.
  2. Infrastructure nodes are used to host monitoring components because Prometheus can claim a large amount of memory, depending on usage patterns.
  3. Workload nodes are dedicated to run performance and scalability workload generators.

Larger cluster sizes and higher object counts might be reachable. However, the sizing of the infrastructure nodes limits the amount of memory that is available to Prometheus. When creating, modifying, or deleting objects, Prometheus stores the metrics in its memory for roughly 3 hours prior to persisting the metrics on disk. If the rate of creation, modification, or deletion of objects is too high, Prometheus can become overwhelmed and fail due to the lack of memory resources.

4.3. Control plane and infrastructure node sizing and scaling

When you install a Red Hat OpenShift Service on AWS (ROSA) cluster, the sizing of the control plane and infrastructure nodes is automatically determined by the compute node count.

If you change the number of compute nodes in your cluster after installation, the Red Hat Site Reliability Engineering (SRE) team scales the control plane and infrastructure nodes as required to maintain cluster stability.

4.3.1. Node sizing during installation

During the installation process, the sizing of the control plane and infrastructure nodes is dynamically calculated. The sizing calculation is based on the number of compute nodes in a cluster.

The following table lists the control plane and infrastructure node sizing that is applied during installation.

Number of compute nodes | Control plane size | Infrastructure node size
1 to 25                 | m5.2xlarge         | r5.xlarge
26 to 100               | m5.4xlarge         | r5.2xlarge
101 to 180              | m5.8xlarge         | r5.4xlarge

Note

The maximum number of compute nodes on ROSA is 180.

4.3.2. Node scaling after installation

If you change the number of compute nodes after installation, the control plane and infrastructure nodes are scaled by the Red Hat Site Reliability Engineering (SRE) team as required. The nodes are scaled to maintain platform stability.

Postinstallation scaling requirements for control plane and infrastructure nodes are assessed on a case-by-case basis. Node resource consumption and received alerts are taken into consideration.

Rules for control plane node resizing alerts

The resizing alert is triggered for the control plane nodes in a cluster when the following occurs:

  • Control plane nodes sustain over 66% utilization on average in a classic ROSA cluster.

    Note

    The maximum number of compute nodes on ROSA is 180.

Rules for infrastructure node resizing alerts

Resizing alerts are triggered for the infrastructure nodes in a cluster when they have sustained high CPU or memory utilization. This high-sustained utilization status is:

  • Infrastructure nodes sustain over 50% utilization on average in a classic ROSA cluster with a single availability zone using 2 infrastructure nodes.
  • Infrastructure nodes sustain over 66% utilization on average in a classic ROSA cluster with multiple availability zones using 3 infrastructure nodes.

    Note

    The maximum number of compute nodes on ROSA is 180.

    The resizing alerts only appear after sustained periods of high utilization. Short usage spikes, such as a node temporarily going down causing the other node to scale up, do not trigger these alerts.

The SRE team might scale the control plane and infrastructure nodes for additional reasons, for example to manage an increase in resource consumption on the nodes.

When scaling is applied, the customer is notified through a service log entry. For more information about the service log, see Accessing the service logs for ROSA clusters.

4.3.3. Sizing considerations for larger clusters

For larger clusters, infrastructure node sizing can become a significant factor that impacts scalability. There are many factors that influence the stated thresholds, including the etcd version or storage data format.

Exceeding these limits does not necessarily mean that the cluster will fail. In most cases, exceeding these numbers results in lower overall performance.

4.4. Next steps

4.5. Additional resources

Chapter 5. Planning your environment

5.1. Planning your environment based on tested cluster maximums

This document describes how to plan your Red Hat OpenShift Service on AWS environment based on the tested cluster maximums.

Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.

The numbers noted in this documentation are based on Red Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.

While planning your environment, determine how many pods are expected to fit per node using the following formula:

required pods per cluster / pods per node = total number of nodes needed

The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in Planning your environment based on application requirements.

Example scenario

If you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node:

2200 / 250 = 8.8

If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:

2200 / 20 = 110

Where:

required pods per cluster / total number of nodes = expected pods per node

5.2. Planning your environment based on application requirements

This document describes how to plan your Red Hat OpenShift Service on AWS environment based on your application requirements.

Consider an example application environment:

Pod type   | Pod quantity | Max memory | CPU cores | Persistent storage
apache     | 100          | 500 MB     | 0.5       | 1 GB
node.js    | 200          | 1 GB       | 1         | 1 GB
postgresql | 100          | 1 GB       | 2         | 10 GB
JBoss EAP  | 100          | 1 GB       | 1         | 1 GB

Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage.

Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.

Node type        | Quantity | CPUs | RAM (GB)
Nodes (option 1) | 100      | 4    | 16
Nodes (option 2) | 50       | 8    | 32
Nodes (option 3) | 25       | 16   | 64

Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that would not allow for overcommitment. That memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio.

The application pods can access a service either by using environment variables or DNS. If using environment variables, the variables for each active service are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Service discovery using DNS can be used if you must go beyond 5000 services. When using environment variables for service discovery, if the argument list exceeds the allowed length after 5000 services in a namespace, the pods and deployments start failing.

Disable the service links in the deployment’s service specification file to overcome this:

Example

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: deploymentConfigTemplate
  creationTimestamp:
  annotations:
    description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service.
    tags: ''
objects:
  - kind: DeploymentConfig
    apiVersion: apps.openshift.io/v1
    metadata:
      name: deploymentconfig${IDENTIFIER}
    spec:
      template:
        metadata:
          labels:
            name: replicationcontroller${IDENTIFIER}
        spec:
          enableServiceLinks: false
          containers:
          - name: pause${IDENTIFIER}
            image: "${IMAGE}"
            ports:
            - containerPort: 8080
              protocol: TCP
            env:
            - name: ENVVAR1_${IDENTIFIER}
              value: "${ENV_VALUE}"
            - name: ENVVAR2_${IDENTIFIER}
              value: "${ENV_VALUE}"
            - name: ENVVAR3_${IDENTIFIER}
              value: "${ENV_VALUE}"
            - name: ENVVAR4_${IDENTIFIER}
              value: "${ENV_VALUE}"
            resources: {}
            imagePullPolicy: IfNotPresent
            capabilities: {}
            securityContext:
              capabilities: {}
              privileged: false
          restartPolicy: Always
          serviceAccount: ''
      replicas: 1
      selector:
        name: replicationcontroller${IDENTIFIER}
      triggers:
      - type: ConfigChange
      strategy:
        type: Rolling
  - kind: Service
    apiVersion: v1
    metadata:
      name: service${IDENTIFIER}
    spec:
      selector:
        name: replicationcontroller${IDENTIFIER}
      ports:
      - name: serviceport${IDENTIFIER}
        protocol: TCP
        port: 80
        targetPort: 8080
      portalIP: ''
      type: ClusterIP
      sessionAffinity: None
    status:
      loadBalancer: {}
parameters:
- name: IDENTIFIER
  description: Number to append to the name of resources
  value: '1'
  required: true
- name: IMAGE
  description: Image to use for deploymentConfig
  value: gcr.io/google-containers/pause-amd64:3.0
  required: false
- name: ENV_VALUE
  description: Value to use for environment variables
  generate: expression
  from: "[A-Za-z0-9]{255}"
  required: false
labels:
  template: deploymentConfigTemplate

The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process and is set to 2097152 bytes (2 MiB) by default. The kubelet injects environment variables into each pod scheduled to run in the namespace, including the following (an illustrative example follows the list):

  • <SERVICE_NAME>_SERVICE_HOST=<IP>
  • <SERVICE_NAME>_SERVICE_PORT=<PORT>
  • <SERVICE_NAME>_PORT=tcp://<IP>:<PORT>
  • <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT>
  • <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp
  • <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT>
  • <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR>
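
For example, for a service named example that exposes TCP port 80 on cluster IP 172.30.0.10 (both values illustrative), the kubelet injects variables similar to the following into each pod in the namespace:

EXAMPLE_SERVICE_HOST=172.30.0.10
EXAMPLE_SERVICE_PORT=80
EXAMPLE_PORT=tcp://172.30.0.10:80
EXAMPLE_PORT_80_TCP=tcp://172.30.0.10:80
EXAMPLE_PORT_80_TCP_PROTO=tcp
EXAMPLE_PORT_80_TCP_PORT=80
EXAMPLE_PORT_80_TCP_ADDR=172.30.0.10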

The pods in the namespace start to fail if the argument length exceeds the allowed value; the number of characters in each service name affects how quickly that limit is reached.

Chapter 6. Required AWS service quotas

Review this list of the Amazon Web Services (AWS) service quotas that are required to run a Red Hat OpenShift Service on AWS cluster.

6.1. Required AWS service quotas

The table below describes the AWS service quotas and levels required to create and run one Red Hat OpenShift Service on AWS cluster. Although the default values are suitable for most workloads, you might need to request additional quota in the following cases:

  • ROSA clusters require at least 100 vCPUs, but the default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is 5. Therefore, if you have not created a ROSA cluster using the same AWS account previously, you must request additional EC2 quota for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances.
  • Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, ROSA associates 1 security group with network interfaces in worker machine pools by default, and the default quota for Security groups per network interface is 5. If you want to add 5 custom security groups, you must request additional quota, because this would bring the total number of security groups on worker network interfaces to 6.
Note

The AWS SDK allows ROSA to check quotas, but the AWS SDK calculation does not account for your existing usage. Therefore, it is possible that the quota check can pass in the AWS SDK yet the cluster creation can fail. To fix this issue, increase your quota.

If you need to modify or increase a specific quota, see Amazon’s documentation on requesting a quota increase. Large quota requests are submitted to Amazon Support for review, and take some time to be approved. If your quota request is urgent, contact AWS Support.
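
For example, you can submit an increase request for the Running On-Demand Standard instances vCPU quota directly from the AWS CLI by using the service code and quota code listed in the table below. The desired value shown is the ROSA minimum:

$ aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 100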

Table 6.1. ROSA-required service quota

Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances
  Service code: ec2
  Quota code: L-1216C47A
  AWS default: 5
  Minimum required: 100
  Description: Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create ROSA clusters. ROSA has a minimum requirement of 100 vCPUs for cluster creation.

Storage for General Purpose SSD (gp2) volume storage in TiB
  Service code: ebs
  Quota code: L-D18FCD1D
  AWS default: 50
  Minimum required: 300
  Description: The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp2) volumes in this Region.

Storage for General Purpose SSD (gp3) volume storage in TiB
  Service code: ebs
  Quota code: L-7A658B76
  AWS default: 50
  Minimum required: 300
  Description: The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance.

Storage for Provisioned IOPS SSD (io1) volumes in TiB
  Service code: ebs
  Quota code: L-FD252861
  AWS default: 50
  Minimum required: 300
  Description: The maximum aggregated amount of storage, in TiB, that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance.

Table 6.2. General AWS service quotas

EC2-VPC Elastic IPs
  Service code: ec2
  Quota code: L-0263D0A3
  AWS default: 5
  Minimum required: 5
  Description: The maximum number of Elastic IP addresses that you can allocate for EC2-VPC in this Region.

VPCs per Region
  Service code: vpc
  Quota code: L-F678F1CE
  AWS default: 5
  Minimum required: 5
  Description: The maximum number of VPCs per Region. This quota is directly tied to the maximum number of internet gateways per Region.

Internet gateways per Region
  Service code: vpc
  Quota code: L-A4707A72
  AWS default: 5
  Minimum required: 5
  Description: The maximum number of internet gateways per Region. This quota is directly tied to the maximum number of VPCs per Region. To increase this quota, increase the number of VPCs per Region.

Network interfaces per Region
  Service code: vpc
  Quota code: L-DF5E4CA3
  AWS default: 5,000
  Minimum required: 5,000
  Description: The maximum number of network interfaces per Region.

Security groups per network interface
  Service code: vpc
  Quota code: L-2AFB9258
  AWS default: 5
  Minimum required: 5
  Description: The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1000.

Snapshots per Region
  Service code: ebs
  Quota code: L-309BACF6
  AWS default: 10,000
  Minimum required: 10,000
  Description: The maximum number of snapshots per Region.

IOPS for Provisioned IOPS SSD (io1) volumes
  Service code: ebs
  Quota code: L-B3A130E6
  AWS default: 300,000
  Minimum required: 300,000
  Description: The maximum aggregated number of IOPS that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region.

Application Load Balancers per Region
  Service code: elasticloadbalancing
  Quota code: L-53DA6B97
  AWS default: 50
  Minimum required: 50
  Description: The maximum number of Application Load Balancers that can exist in each Region.

Classic Load Balancers per Region
  Service code: elasticloadbalancing
  Quota code: L-E9E9831D
  AWS default: 20
  Minimum required: 20
  Description: The maximum number of Classic Load Balancers that can exist in each Region.

6.1.1. Additional resources

6.2. Next steps

Chapter 7. Setting up the environment for using STS

After you meet the AWS prerequisites, set up your environment and install Red Hat OpenShift Service on AWS (ROSA).

Tip

AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS (ROSA) because it provides enhanced security.

7.1. Setting up the environment for STS

Before you create a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS), complete the following steps to set up your environment.

Prerequisites

  • Review and complete the deployment prerequisites and policies.
  • Create a Red Hat account, if you do not already have one. Then, check your email for a verification link. You will need these credentials to install ROSA.

Procedure

  1. Log in to the Amazon Web Services (AWS) account that you want to use.

    It is recommended to use a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one.

    If you are using AWS Organizations and you need to have a service control policy (SCP) applied to the AWS account you plan to use, these policies must not be more restrictive than the roles and policies required by the cluster.

  2. Enable the ROSA service in the AWS Management Console.

    1. Sign in to your AWS account.
    2. To enable ROSA, go to the ROSA service and select Enable OpenShift.
  3. Install and configure the AWS CLI.

    1. Follow the AWS command-line interface documentation to install and configure the AWS CLI for your operating system.

      Specify the correct aws_access_key_id and aws_secret_access_key in the .aws/credentials file. See AWS Configuration basics in the AWS documentation.

    2. Set a default AWS region.

      Note

      You can use the AWS_DEFAULT_REGION environment variable to set the default AWS region.

      The ROSA service evaluates regions in the following priority order:

      1. The region specified when running the rosa command with the --region flag.
      2. The region set in the AWS_DEFAULT_REGION environment variable. See Environment variables to configure the AWS CLI in the AWS documentation.
      3. The default region set in your AWS configuration file. See Quick configuration with aws configure in the AWS documentation.
    3. Optional: Configure your AWS CLI settings and credentials by using an AWS named profile. rosa evaluates AWS named profiles in the following priority order:

      1. The profile specified when running the rosa command with the --profile flag.
      2. The profile set in the AWS_PROFILE environment variable. See Named profiles in the AWS documentation.
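
      For example, to select a named profile for your current shell session, or for a single rosa command:

      $ export AWS_PROFILE=<profile_name>
      $ rosa whoami --profile <profile_name>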
    4. Verify the AWS CLI is installed and configured correctly by running the following command to query the AWS API:

      $ aws sts get-caller-identity
  4. Install the latest version of the ROSA CLI (rosa).

    1. Download the latest release of the ROSA CLI for your operating system.
    2. Optional: Rename the file you downloaded to rosa and make the file executable. This documentation uses rosa to refer to the executable file.

      $ chmod +x rosa
    3. Optional: Add rosa to your path.

      $ mv rosa /usr/local/bin/rosa
    4. Enter the following command to verify your installation:

      $ rosa

      Example output

      Command line tool for Red Hat OpenShift Service on AWS.
      For further documentation visit https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws
      
      Usage:
        rosa [command]
      
      Available Commands:
        completion  Generates completion scripts
        create      Create a resource from stdin
        delete      Delete a specific resource
        describe    Show details of a specific resource
        download    Download necessary tools for using your cluster
        edit        Edit a specific resource
        grant       Grant role to a specific resource
        help        Help about any command
        init        Applies templates to support Red Hat OpenShift Service on AWS
        install     Installs a resource into a cluster
        link        Link a ocm/user role from stdin
        list        List all resources of a specific type
        login       Log in to your Red Hat account
        logout      Log out
        logs        Show installation or uninstallation logs for a cluster
        revoke      Revoke role from a specific resource
        uninstall   Uninstalls a resource from a cluster
        unlink      UnLink a ocm/user role from stdin
        upgrade     Upgrade a resource
        verify      Verify resources are configured correctly for cluster install
        version     Prints the version of the tool
        whoami      Displays user account information
      
      Flags:
            --color string   Surround certain characters with escape sequences to display them in color on the terminal. Allowed options are [auto never always] (default "auto")
            --debug          Enable debug mode.
        -h, --help           help for rosa
      
      Use "rosa [command] --help" for more information about a command.

    5. Generate the command completion scripts for the ROSA CLI. The following example generates the Bash completion scripts for a Linux machine:

      $ rosa completion bash | sudo tee /etc/bash_completion.d/rosa
    6. Source the scripts to enable rosa command completion from your existing terminal. The following example sources the Bash completion scripts for rosa on a Linux machine:

      $ source /etc/bash_completion.d/rosa
  5. Log in to your Red Hat account with the ROSA CLI.

    1. Enter the following command.

      $ rosa login
    2. When prompted, replace <my_offline_access_token> with your offline access token and press enter.

      Example output

      To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
      ? Copy the token and paste it here: <my-offline-access-token>

      Example output continued

      I: Logged in as '<rh-rosa-user>' on 'https://api.openshift.com'

  6. Verify that your AWS account has the necessary quota to deploy a ROSA cluster.

    $ rosa verify quota [--region=<aws_region>]

    Example output

    I: Validating AWS quota...
    I: AWS quota ok

    Note

    Your AWS quota can vary by region. If you receive an error, try a different region.

    If you need to increase your quota, go to the AWS Management Console and request a quota increase for the service that failed.
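
    Alternatively, you can request an increase from the AWS CLI. The following is a minimal sketch that uses placeholder values; substitute the service code and quota code for the quota that failed:

    $ aws service-quotas request-service-quota-increase \
        --service-code <service_code> \
        --quota-code <quota_code> \
        --desired-value <desired_value>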

    After the quota check succeeds, proceed to the next step.

  7. Prepare your AWS account for cluster deployment:

    1. Run the following command to verify that your Red Hat and AWS credentials are set up correctly. Check that your AWS Account ID, Default Region, and ARN match what you expect. You can safely ignore the rows that begin with OCM (OpenShift Cluster Manager) for now.

      $ rosa whoami

      Example output

      AWS Account ID:               000000000000
      AWS Default Region:           us-east-1
      AWS ARN:                      arn:aws:iam::000000000000:user/hello
      OCM API:                      https://api.openshift.com
      OCM Account ID:               1DzGIdIhqEWyt8UUXQhSoWaaaaa
      OCM Account Name:             Your Name
      OCM Account Username:         you@domain.com
      OCM Account Email:            you@domain.com
      OCM Organization ID:          1HopHfA2hcmhup5gCr2uH5aaaaa
      OCM Organization Name:        Red Hat
      OCM Organization External ID: 0000000

  8. Install the OpenShift CLI (oc), version 4.7.9 or greater, by using the ROSA CLI (rosa).

    1. Enter this command to download the latest version of the oc CLI:

      $ rosa download openshift-client
    2. After downloading the oc CLI, unzip it and add it to your path.
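
      For example, on a Linux machine, assuming the downloaded archive is named openshift-client-linux.tar.gz (the actual file name can differ by operating system and version):

      $ tar -xzf openshift-client-linux.tar.gz
      $ sudo mv oc /usr/local/bin/oc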
    3. Enter this command to verify that the oc CLI is installed correctly:

      $ rosa verify openshift-client

Create roles

After completing these steps, you are ready to set up IAM and OIDC access-based roles.

7.2. Next steps

7.3. Additional resources

Chapter 8. Preparing Terraform to install ROSA clusters

Terraform is an infrastructure-as-code tool that provides a way to configure your resources once and replicate those resources as desired. Terraform accomplishes the creation tasks by using declarative language. You declare what you want the final state of the infrastructure resource to be, and Terraform creates these resources to your specifications.

8.1. Prerequisites for Terraform

To use the Red Hat Cloud Services provider inside your Terraform configuration, you must meet the following prerequisites:

  • You have installed the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI) tool.

    See the Additional resources for further installation instructions.

  • You have your offline Red Hat OpenShift Cluster Manager token.

    This token is generated through the Red Hat Hybrid Cloud Console. It is unique to your account and should not be shared. The token is generated based on your account access and permissions.

  • You have installed Terraform version 1.4.6 or newer.

    You must have Terraform configured for your local system. The Terraform website contains installation options for macOS, Windows, and Linux.
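
    For example, you can confirm the installed Terraform version by running:

    $ terraform version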

  • You have an AWS account and associated credentials that allow you to create resources. The credentials are configured for the AWS provider. See the Authentication and Configuration section in AWS Terraform provider documentation.
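
    For example, one option that the AWS provider supports is exporting the credentials as environment variables before you run Terraform:

    $ export AWS_ACCESS_KEY_ID=<your_access_key_id>
    $ export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
    $ export AWS_REGION=<aws_region>
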
  • You have, at minimum, the following permissions in the AWS IAM policy attached to the role or user that runs Terraform. You can check for these permissions in the AWS console.

    Example 8.1. Minimum AWS permissions for Terraform

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "iam:GetPolicyVersion",
            "iam:DeletePolicyVersion",
            "iam:CreatePolicyVersion",
            "iam:UpdateAssumeRolePolicy",
            "secretsmanager:DescribeSecret",
            "iam:ListRoleTags",
            "secretsmanager:PutSecretValue",
            "secretsmanager:CreateSecret",
            "iam:TagRole",
            "secretsmanager:DeleteSecret",
            "iam:UpdateOpenIDConnectProviderThumbprint",
            "iam:DeletePolicy",
            "iam:CreateRole",
            "iam:AttachRolePolicy",
            "iam:ListInstanceProfilesForRole",
            "secretsmanager:GetSecretValue",
            "iam:DetachRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListPolicyTags",
            "iam:ListRolePolicies",
            "iam:DeleteOpenIDConnectProvider",
            "iam:DeleteInstanceProfile",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:ListEntitiesForPolicy",
            "iam:DeleteRole",
            "iam:TagPolicy",
            "iam:CreateOpenIDConnectProvider",
            "iam:CreatePolicy",
            "secretsmanager:GetResourcePolicy",
            "iam:ListPolicyVersions",
            "iam:UpdateRole",
            "iam:GetOpenIDConnectProvider",
            "iam:TagOpenIDConnectProvider",
            "secretsmanager:TagResource",
            "sts:AssumeRoleWithWebIdentity",
            "iam:ListRoles"
          ],
          "Resource": [
            "arn:aws:secretsmanager:*:<ACCOUNT_ID>:secret:*",
            "arn:aws:iam::<ACCOUNT_ID>:instance-profile/*",
            "arn:aws:iam::<ACCOUNT_ID>:role/*",
            "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/*",
            "arn:aws:iam::<ACCOUNT_ID>:policy/*"
          ]
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "s3:*"
            ],
          "Resource": "*"
        }
      ]
    }

8.2. Considerations when using Terraform

In general, using Terraform to manage cloud resources should be done with the expectation that any changes should be done using the Terraform methodology. Use caution when using tools outside of Terraform, such as the AWS console or Red Hat console, to modify cloud resources created by Terraform. Using tools outside Terraform to manage cloud resources that are already managed by Terraform introduces configuration drift from your declared Terraform configuration.

For example, if you upgrade your Terraform-created cluster by using the Red Hat Hybrid Cloud Console, you need to reconcile your Terraform state before applying any forthcoming configuration changes. For more information, see Manage resources in Terraform state in the HashiCorp Developer documentation.
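
For example, assuming a current Terraform installation, a common way to detect and reconcile drift is a refresh-only plan followed by a refresh-only apply:

$ terraform plan -refresh-only
$ terraform apply -refresh-only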

Additional resources

8.3. Account roles Terraform example

The following example shows how Terraform can be used to create your Amazon Web Services (AWS) Identity and Access Management (IAM) account roles for ROSA.

Important

Do not modify Terraform state files. For more information, see Considerations when using Terraform.

Procedure

  1. Check your AWS account for existing roles and policies by running the following command:

    $ rosa list account-roles
  2. In your terminal, run the following command to export your Red Hat OpenShift Cluster Manager token. This value must include the full OpenShift Cluster Manager token:

    $ export RHCS_TOKEN="<your_offline_token>"

    You can verify that your token is saved by running the following command:

    $ echo $RHCS_TOKEN

    You see your token in the command line.

  3. Optional: You can specify your own account-role prefix that prepends the roles you create by running the following command:

    Note

    If you do not specify an account-role prefix, a prefix is generated in the format of account-role- followed by a string of four random characters.

    $ export TF_VAR_account_role_prefix=<account_role_prefix>
  4. Create the Terraform files locally by using the following code templates:

    Note

    These files are created in your current directory. Ensure that you are in the directory where you want to run Terraform.

    1. The main.tf file calls the Red Hat Cloud Services Terraform provider, which allows you to use OpenShift services with Terraform. Run the following command to create the main.tf file:

      $ cat<<-EOF>main.tf
        #
        # Copyright (c) 2022 Red Hat, Inc.
        #
        # Licensed under the Apache License, Version 2.0 (the "License");
        # you may not use this file except in compliance with the License.
        # You may obtain a copy of the License at
        #
        #   http://www.apache.org/licenses/LICENSE-2.0
        #
        # Unless required by applicable law or agreed to in writing, software
        # distributed under the License is distributed on an "AS IS" BASIS,
        # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        # See the License for the specific language governing permissions and
        # limitations under the License.
        #
      
        terraform {
          required_providers {
            aws = {
              source  = "hashicorp/aws"
              version = ">= 4.20.0"
            }
            rhcs = {
              version = "1.4.0"
              source  = "terraform-redhat/rhcs"
            }
          }
        }
      
        data "rhcs_policies" "all_policies" {}
      
        data "rhcs_versions" "all" {}
      
        module "create_account_roles" {
          source  = "terraform-redhat/rosa-sts/aws"
          version = "0.0.15"
      
          create_operator_roles = false
          create_oidc_provider  = false
          create_account_roles  = true
      
          account_role_prefix    = var.account_role_prefix
          rosa_openshift_version = var.openshift_version
          account_role_policies  = data.rhcs_policies.all_policies.account_role_policies
          operator_role_policies = data.rhcs_policies.all_policies.operator_role_policies
          all_versions           = data.rhcs_versions.all
          tags                   = var.tags
        }
      EOF
    2. The output.tf file exposes the generated account role prefix as an output value so that you can reference it after the roles are created. Run the following command to create your output.tf file:

      $ cat<<-EOF>output.tf
        output "account_role_prefix" {
          value = module.create_account_roles.account_role_prefix
        }
      EOF
    3. The variables.tf file allows you to specify values for select variables. If you exported a variable for the account_role_prefix earlier, leave this variable’s default value blank. Setting the variable in both places with different values can produce unexpected results. Run the following command to create your variables.tf file:

      Important

      Do not include your OpenShift Cluster Manager token in this file if it is not stored in a safe location.

      $ cat<<-EOF>variables.tf
        variable "openshift_version" {
          type = string
          default = "4.13"
          description = "Enter the desired OpenShift version as X.Y. This version should match what you intend for your ROSA cluster. For example, if you plan to create a ROSA cluster using '4.13.10', then this version should be '4.13'. You can see the supported versions of OpenShift by running 'rosa list version'."
        }
      
        variable "account_role_prefix" {
          type    = string
          default = ""
          description = "Your account roles are prepended with whatever value you enter here. The default value in the ROSA CLI is 'ManagedOpenshift-' before all of your account roles."
        }
      
        variable "tags" { 1
          type        = map
          default     = null
          description = "(Optional) List of AWS resource tags to apply."
        }
      EOF
      1
      The tags parameter uses a map of strings variable. The format that it takes looks like the following example:
      variable "tags" {
        type    = "map"
        default = {
          "us-east-1" = "image-1234"
          "us-west-2" = "image-4567"
        }
      }
  5. In the directory where you saved these Terraform files, run the following command to set up Terraform to create these resources:

    $ terraform init
  6. Optional: Run the following command to confirm that the Terraform code you copied is correct:

    $ terraform validate

    Sample output

    Success! The configuration is valid.

  7. Optional: Test your Terraform template and create a reusable Terraform plan file by running the following command:

    $ terraform plan -out account-roles.tfplan
  8. Run the following command to build your account-wide IAM roles with Terraform:

    $ terraform apply "account-roles.tfplan"
    Note

    If you used the terraform plan -out command first, you can provide the created account-roles.tfplan plan file here. Otherwise, Terraform creates a new plan and prompts you for approval before applying your desired outcome.

Verification

  • Run the following command to verify that your account-roles have been created:

    $ rosa list account-roles

    Sample output

    I: Fetching account roles
    ROLE NAME                            ROLE TYPE      ROLE ARN                                                            OPENSHIFT VERSION  AWS Managed
    account-role-6kn4-ControlPlane-Role  Control plane  arn:aws:iam::269733383066:role/account-role-6kn4-ControlPlane-Role  4.13               No
    account-role-6kn4-Installer-Role     Installer      arn:aws:iam::269733383066:role/account-role-6kn4-Installer-Role     4.13               No
    account-role-6kn4-Support-Role       Support        arn:aws:iam::269733383066:role/account-role-6kn4-Support-Role       4.13               No
    account-role-6kn4-Worker-Role        Worker         arn:aws:iam::269733383066:role/account-role-6kn4-Worker-Role        4.13               No

Clean up

When you are finished with the resources that you created by using Terraform, remove them by running the following command:

$ terraform destroy

8.4. Next steps

8.5. Additional resources

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.