Chapter 18. Managing cloud provider credentials

18.1. About the Cloud Credential Operator

The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run.

You can configure the CCO to operate in several different modes by setting different values for the credentialsMode parameter in the install-config.yaml file. If no mode is specified, or if the credentialsMode parameter is set to an empty string (""), the CCO operates in its default mode.
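
For example, a minimal sketch of the relevant install-config.yaml stanza that explicitly sets mint mode; the base domain, cluster name, and region are illustrative:

apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1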

18.1.1. Modes

You can configure the CCO to operate in mint, passthrough, or manual mode by setting different values for the credentialsMode parameter in the install-config.yaml file. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow you to configure the CCO to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers.

  • Mint: In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required.

    Note

    Mint mode is the default and recommended best practice setting for the CCO to use.

  • Passthrough: In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials.
  • Manual: In manual mode, a user manages cloud credentials instead of the CCO.

    • Manual with AWS STS: In manual mode, you can configure an AWS cluster to use Amazon Web Services Secure Token Service (AWS STS). With this configuration, the CCO uses temporary credentials for different components.
Important

Support for Amazon Web Services Secure Token Service (AWS STS) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Table 18.1. CCO mode support matrix

Cloud provider                     | Mint | Passthrough | Manual
Amazon Web Services (AWS)          | X    | X           | X
Microsoft Azure                    | X    | X           | X
Google Cloud Platform (GCP)        | X    | X           | X
Red Hat OpenStack Platform (RHOSP) |      | X           |
Red Hat Virtualization (RHV)       |      | X           |
VMware vSphere                     |      | X           |

18.1.2. Default behavior

For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs.

By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs.

Note

The CCO cannot verify whether Azure credentials are sufficient for passthrough mode. If Azure credentials are insufficient for mint mode, the CCO operates with the assumption that the credentials are sufficient for passthrough mode.

If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installer fails early in the process and indicates which required permissions are missing. Other providers might not report the specific cause of the failure until an error is encountered.

If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials.
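
To see whether the CCO has flagged any CredentialsRequest CRs in this way, you can inspect their status conditions; a minimal sketch using the same oc and jq tooling shown later in this chapter:

$ oc -n openshift-cloud-credential-operator get CredentialsRequest \
  -o json | jq -r '.items[] | {name: .metadata.name, conditions: .status.conditions}'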

To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can manually create IAM for AWS, Azure, and GCP.

18.1.3. Additional resources

18.2. Using mint mode

Mint mode is supported for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required.

If the credential is not removed after installation, it is stored and used by the CCO to process CredentialsRequest CRs for components in the cluster and create new credentials for each with only the specific permissions that are required. The continuous reconciliation of cloud credentials in mint mode allows actions that require additional credentials or permissions, such as upgrading, to proceed.

If the requirement that mint mode stores the administrator-level credential in the cluster kube-system namespace does not suit the security requirements of your organization, see Alternatives to storing administrator-level secrets in the kube-system project for AWS, Azure, or GCP.

18.2.1. Mint mode permissions requirements

When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user.

18.2.1.1. Amazon Web Services (AWS) permissions

The credential you provide for mint mode in AWS must have the following permissions:

  • iam:CreateAccessKey
  • iam:CreateUser
  • iam:DeleteAccessKey
  • iam:DeleteUser
  • iam:DeleteUserPolicy
  • iam:GetUser
  • iam:GetUserPolicy
  • iam:ListAccessKeys
  • iam:PutUserPolicy
  • iam:TagUser
  • iam:SimulatePrincipalPolicy
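
One way to check whether an existing IAM user already has these permissions is the AWS IAM policy simulator; a sketch in which the user ARN and the subset of actions to test are illustrative:

$ aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/<iam_user> \
  --action-names iam:CreateUser iam:CreateAccessKey iam:TagUser \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
  --output table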

18.2.1.2. Microsoft Azure permissions

The credential you provide for mint mode in Azure must have a service principal with the permissions specified in Creating a service principal.

18.2.1.3. Google Cloud Platform (GCP) permissions

The credential you provide for mint mode in GCP must have the following permissions:

  • resourcemanager.projects.get
  • serviceusage.services.list
  • iam.serviceAccountKeys.create
  • iam.serviceAccountKeys.delete
  • iam.serviceAccounts.create
  • iam.serviceAccounts.delete
  • iam.serviceAccounts.get
  • iam.roles.get
  • resourcemanager.projects.getIamPolicy
  • resourcemanager.projects.setIamPolicy
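
These permissions come from the IAM roles that are granted to the service account. One way to review which roles are bound to the service account you plan to provide (a sketch; the project ID and service account email are placeholders):

$ gcloud projects get-iam-policy <project_id> \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:<service_account_email>" \
  --format="value(bindings.role)"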

18.2.2. Admin credentials root secret format

Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode, or by copying the credentials root secret with passthrough mode.

The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.

Amazon Web Services (AWS) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
data:
  aws_access_key_id: <base64-encoded_access_key_id>
  aws_secret_access_key: <base64-encoded_secret_access_key>
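
Equivalently, you can create this root secret from the CLI instead of applying a manifest; a sketch in which oc handles the base64 encoding of the literal values:

$ oc create secret generic aws-creds -n kube-system \
  --from-literal=aws_access_key_id=<access_key_id> \
  --from-literal=aws_secret_access_key=<secret_access_key>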

Google Cloud Platform (GCP) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: gcp-credentials
data:
  service_account.json: <base64-encoded_service_account>

18.2.3. Mint mode with removal or rotation of the administrator-level credential

Currently, this mode is only supported on AWS and GCP.

In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation.

The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify whether all CredentialsRequest objects have their required permissions. As a result, the administrator-level credential is not required unless something needs to be changed. After the credential secret is removed from the cluster, the associated credential can be deleted or deactivated on the underlying cloud, if desired.

Note

Prior to a non-z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

The administrator-level credential is not stored in the cluster permanently.

Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade.

18.2.3.1. Rotating cloud provider credentials manually

If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.

The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.

Prerequisites

  • Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:

    • For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.
  • You have changed the credentials that are used to interface with your cloud provider.
  • The new credentials have sufficient permissions for the mode the CCO is configured to use in your cluster.
Note

When rotating the credentials for an Azure cluster that is using mint mode, do not delete or replace the service principal that was used during installation. Instead, generate new Azure service principal client secrets and update the OpenShift Container Platform secrets accordingly.

Procedure

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform | Secret name
    AWS      | aws-creds
    GCP      | gcp-credentials

  3. Click the Options menu (⋮) in the same row as the secret and select Edit Secret.
  4. Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
  5. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
  6. If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects.

    1. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
    2. Get the names and namespaces of all referenced component secrets:

      $ oc -n openshift-cloud-credential-operator get CredentialsRequest \
        -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'

      where <provider_spec> is the corresponding value for your cloud provider:

      • AWS: AWSProviderSpec
      • GCP: GCPProviderSpec
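
      For example, on an AWS cluster, the command resolves to the following:

      $ oc -n openshift-cloud-credential-operator get CredentialsRequest \
        -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="AWSProviderSpec") | .spec.secretRef'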

      Partial example output for AWS

      {
        "name": "ebs-cloud-credentials",
        "namespace": "openshift-cluster-csi-drivers"
      }
      {
        "name": "cloud-credential-operator-iam-ro-creds",
        "namespace": "openshift-cloud-credential-operator"
      }
      ...

    3. Delete each of the referenced component secrets:

      $ oc delete secret <secret_name> \ 1
        -n <secret_namespace> 2
      1
      Specify the name of a secret.
      2
      Specify the namespace that contains the secret.

      Example deletion of an AWS secret

      $ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers

      You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.

  7. To verify that the credentials have changed:

    1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
    2. Verify that the contents of the Value field or fields are different than the previously recorded information.

18.2.3.2. Removing cloud provider credentials

After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades.

Note

Prior to a non-z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

Prerequisites

  • Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.

Procedure

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform | Secret name
    AWS      | aws-creds
    GCP      | gcp-credentials

  3. Click the Options menu (⋮) in the same row as the secret and select Delete Secret.
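
Alternatively, you can delete the root secret from the CLI; a minimal sketch for an AWS cluster (use gcp-credentials for GCP):

$ oc delete secret aws-creds -n kube-system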

18.2.4. Additional resources

18.3. Using passthrough mode

Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere.

In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode.

18.3.1. Passthrough mode permissions requirements

When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the credentials that the CCO passes to a component are not sufficient for its CredentialsRequest CR, that component reports an error when it tries to call an API for which it does not have permission.

18.3.1.1. Amazon Web Services (AWS) permissions

The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing.

To locate the CredentialsRequest CRs that are required, see Manually creating IAM for AWS.

18.3.1.2. Microsoft Azure permissions

The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing.

To locate the CredentialsRequest CRs that are required, see Manually creating IAM for Azure.

18.3.1.3. Google Cloud Platform (GCP) permissions

The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing.

To locate the CredentialsRequest CRs that are required, see Manually creating IAM for GCP.

18.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions

To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role.

18.3.1.5. Red Hat Virtualization (RHV) permissions

To install an OpenShift Container Platform cluster on RHV, the CCO requires a credential with the following privileges:

  • DiskOperator
  • DiskCreator
  • UserTemplateBasedVm
  • TemplateOwner
  • TemplateCreator
  • ClusterAdmin on the specific cluster that is targeted for OpenShift Container Platform deployment

18.3.1.6. VMware vSphere permissions

To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges:

Table 18.2. Required vSphere privileges

Category               | Privileges
Datastore              | Allocate space
Folder                 | Create folder, Delete folder
vSphere Tagging        | All privileges
Network                | Assign network
Resource               | Assign virtual machine to resource pool
Profile-driven storage | All privileges
vApp                   | All privileges
Virtual machine        | All privileges

18.3.2. Admin credentials root secret format

Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode, or by copying the credentials root secret with passthrough mode.

The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.

Amazon Web Services (AWS) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
stringData:
  aws_access_key_id: <base64-encoded_access_key_id>
  aws_secret_access_key: <base64-encoded_secret_access_key>

Microsoft Azure secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: azure-credentials
data:
  azure_subscription_id: <base64-encoded_subscription_id>
  azure_client_id: <base64-encoded_client_id>
  azure_client_secret: <base64-encoded_client_secret>
  azure_tenant_id: <base64-encoded_tenant_id>
  azure_resource_prefix: <base64-encoded_resource_prefix>
  azure_resourcegroup: <base64-encoded_resource_group>
  azure_region: <base64-encoded_region>

On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, which is generated randomly for each cluster installation. You can find this value after the installation program creates manifests:

$ cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r

Example output

mycluster-2mpcn

This value would be used in the secret data as follows:

azure_resource_prefix: mycluster-2mpcn
azure_resourcegroup: mycluster-2mpcn-rg
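
Because the data fields of the secret hold base64-encoded values, you would encode these strings before adding them to the secret, for example:

$ echo -n "mycluster-2mpcn" | base64

Example output

bXljbHVzdGVyLTJtcGNu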

Google Cloud Platform (GCP) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: gcp-credentials
data:
  service_account.json: <base64-encoded_service_account>

Red Hat OpenStack Platform (RHOSP) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: openstack-credentials
data:
  clouds.yaml: <base64-encoded_cloud_creds>
  clouds.conf: <base64-encoded_cloud_creds_init>

Red Hat Virtualization (RHV) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: ovirt-credentials
data:
  ovirt_url: <base64-encoded_url>
  ovirt_username: <base64-encoded_username>
  ovirt_password: <base64-encoded_password>
  ovirt_insecure: <base64-encoded_insecure>
  ovirt_ca_bundle: <base64-encoded_ca_bundle>

VMware vSphere secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: vsphere-creds
data:
  vsphere.openshift.example.com.username: <base64-encoded_username>
  vsphere.openshift.example.com.password: <base64-encoded_password>

18.3.3. Passthrough mode credential maintenance

If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating IAM for AWS, Azure, or GCP.
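
For example, you can extract the CredentialsRequest CRs from the release image for the version that you plan to upgrade to by using the same command that is shown later in this chapter; the cloud value here assumes an AWS cluster:

$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.<y>.<z>-x86_64 \
  --credentials-requests --cloud=aws

Where <y> and <z> are the numbers corresponding to the target version of OpenShift Container Platform.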

18.3.3.1. Rotating cloud provider credentials manually

If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.

The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.

Prerequisites

  • Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:

    • For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported.
  • You have changed the credentials that are used to interface with your cloud provider.
  • The new credentials have sufficient permissions for the mode the CCO is configured to use in your cluster.
Note

When rotating the credentials for an Azure cluster that is using mint mode, do not delete or replace the service principal that was used during installation. Instead, generate new Azure service principal client secrets and update the OpenShift Container Platform secrets accordingly.

Procedure

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform | Secret name
    AWS      | aws-creds
    Azure    | azure-credentials
    GCP      | gcp-credentials
    RHOSP    | openstack-credentials
    RHV      | ovirt-credentials
    vSphere  | vsphere-creds

  3. Click the Options menu (⋮) in the same row as the secret and select Edit Secret.
  4. Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
  5. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
  6. If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects.

    1. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
    2. Get the names and namespaces of all referenced component secrets:

      $ oc -n openshift-cloud-credential-operator get CredentialsRequest \
        -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'

      where <provider_spec> is the corresponding value for your cloud provider:

      • AWS: AWSProviderSpec
      • Azure: AzureProviderSpec
      • GCP: GCPProviderSpec
      • RHOSP: OpenStackProviderSpec
      • RHV: OvirtProviderSpec
      • vSphere: VSphereProviderSpec

      Partial example output for AWS

      {
        "name": "ebs-cloud-credentials",
        "namespace": "openshift-cluster-csi-drivers"
      }
      {
        "name": "cloud-credential-operator-iam-ro-creds",
        "namespace": "openshift-cloud-credential-operator"
      }
      ...

    3. Delete each of the referenced component secrets:

      $ oc delete secret <secret_name> \ 1
        -n <secret_namespace> 2
      1
      Specify the name of a secret.
      2
      Specify the namespace that contains the secret.

      Example deletion of an AWS secret

      $ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers

      You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.

  7. To verify that the credentials have changed:

    1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
    2. Verify that the contents of the Value field or fields are different than the previously recorded information.

18.3.4. Reducing permissions after installation

When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer.

After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using.

To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating IAM for AWS, Azure, or GCP.

18.3.5. Additional resources

18.4. Using manual mode

Manual mode is supported for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster’s cloud provider.

Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade.

For information about configuring your cloud provider to use manual mode, see Manually creating IAM for AWS, Azure, or GCP.

18.4.1. Upgrading clusters with manually maintained credentials

If credentials are added in a future release, the Cloud Credential Operator (CCO) Upgradeable status for a cluster with manually maintained credentials changes to False. For minor releases, for example, from 4.6 to 4.7, this status prevents you from upgrading until you have addressed any updated permissions. For z-stream releases, for example, from 4.6.10 to 4.6.11, the upgrade is not blocked, but the credentials must still be updated for the new release.

Use the Administrator perspective of the web console to determine if the CCO is upgradeable.

  1. Navigate to Administration → Cluster Settings.
  2. To view the CCO status details, click cloud-credential in the Cluster Operators list.
  3. If the Upgradeable status in the Conditions section is False, examine the CredentialsRequest custom resource for the new release and update the manually maintained credentials on your cluster to match before upgrading.
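
Alternatively, a minimal sketch of checking the same condition from the CLI:

$ oc get clusteroperator cloud-credential \
  -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'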

In addition to creating new credentials for the release image that you are upgrading to, you must review the required permissions for existing credentials and accommodate any new permissions requirements for existing components in the new release. The CCO cannot detect these mismatches and will not set Upgradeable to False in this case.

The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud.

18.4.2. Manual mode with AWS STS

You can configure an AWS cluster in manual mode to use Amazon Web Services Secure Token Service (AWS STS). With this configuration, the CCO uses temporary credentials for different components.

Important

Support for Amazon Web Services Secure Token Service (AWS STS) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

18.4.3. Additional resources

18.5. Using manual mode with STS

Manual mode with STS is available as a Technology Preview for Amazon Web Services (AWS).

Important

Support for AWS Secure Token Service (STS) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Note

This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature.

In manual mode with STS, the individual OpenShift Container Platform cluster components use AWS Secure Token Service (STS) to assume IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls.

Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider, combined with AWS IAM roles. OpenShift Container Platform signs service account tokens that are trusted by AWS IAM, and can be projected into a pod and used for authentication. Tokens are refreshed after one hour.

Figure 18.1. STS authentication flow

Using manual mode with STS changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components.

AWS secret format using long-lived credentials

apiVersion: v1
kind: Secret
metadata:
  namespace: <target-namespace> 1
  name: <target-secret-name> 2
data:
  aws_access_key_id: <base64-encoded-access-key-id>
  aws_secret_access_key: <base64-encoded-secret-access-key>

1
The namespace for the component.
2
The name of the component secret.

AWS secret format with STS

apiVersion: v1
kind: Secret
metadata:
  namespace: <target-namespace> 1
  name: <target-secret-name> 2
stringData:
  credentials: |-
    [default]
    role_arn = <operator-role-arn> 3
    web_identity_token_file = <path-to-token> 4

1
The namespace for the component.
2
The name of the component secret.
3
The ARN of the IAM role for the component.
4
The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components.

18.5.1. Installing an OpenShift Container Platform cluster configured for manual mode with STS

To install a cluster that is configured to use the CCO in manual mode with STS in OpenShift Container Platform version 4.7, complete the procedures in the following sections:

18.5.1.1. Creating AWS resources manually

To install an OpenShift Container Platform cluster that is configured to use the CCO in manual mode with STS, you must first manually create the required AWS resources.

Procedure

  1. Generate a private key to sign the ServiceAccount object:

    $ openssl genrsa -out sa-signer 4096
  2. Generate a ServiceAccount object public key:

    $ openssl rsa -in sa-signer -pubout -out sa-signer.pub
  3. Create an S3 bucket to hold the OIDC configuration:

    $ aws s3api create-bucket --bucket <oidc_bucket_name> --region <aws_region> --create-bucket-configuration LocationConstraint=<aws_region>
    Note

    If the value of <aws_region> is us-east-1, do not specify the LocationConstraint parameter.

  4. Retain the S3 bucket URL:

    OPENID_BUCKET_URL="https://<oidc_bucket_name>.s3.<aws_region>.amazonaws.com"
  5. Build an OIDC configuration:

    1. Create a file named keys.json that contains the following information:

      {
          "keys": [
              {
                  "use": "sig",
                  "kty": "RSA",
                  "kid": "<public_signing_key_id>",
                  "alg": "RS256",
                  "n": "<public_signing_key_modulus>",
                  "e": "<public_signing_key_exponent>"
              }
          ]
      }

      Where:

      • <public_signing_key_id> is generated from the public key with:

        $ openssl rsa -in sa-signer.pub -pubin --outform DER | openssl dgst -binary -sha256 | openssl base64 | tr '/+' '_-' | tr -d '='

        This command converts the public key to DER format, performs a SHA-256 checksum on the binary representation, encodes the data with base64 encoding, and then changes the base64-encoded output to base64URL encoding.

      • <public_signing_key_modulus> is generated from the public key with:

        $ openssl rsa -pubin -in sa-signer.pub -modulus -noout | sed  -e 's/Modulus=//' | xxd -r -p | base64 -w0 | tr '/+' '_-' | tr -d '='

        This command prints the modulus of the public key, extracts the hex representation of the modulus, converts the ASCII hex to binary, encodes the data with base64 encoding, and then changes the base64-encoded output to base64URL encoding.

      • <public_signing_key_exponent> is generated from the public key with:

        $ printf "%016x" $(openssl rsa -pubin -in sa-signer.pub -noout -text | grep Exponent | awk '{ print $2 }') |  awk '{ sub(/(00)+/, "", $1); print $1 }' | xxd -r -p | base64 -w0 | tr '/+' '_-' | tr -d '='

        This command extracts the decimal representation of the public key exponent, prints it as hex with a padded 0 if needed, removes leading 00 pairs, converts the ASCII hex to binary, encodes the data with base64 encoding, and then changes the base64-encoded output to use only characters that can be used in a URL.

    2. Create a file named openid-configuration that contains the following information:

      {
          "issuer": "${OPENID_BUCKET_URL}",
          "jwks_uri": "${OPENID_BUCKET_URL}/keys.json",
          "response_types_supported": [
              "id_token"
          ],
          "subject_types_supported": [
              "public"
          ],
          "id_token_signing_alg_values_supported": [
              "RS256"
          ],
          "claims_supported": [
              "aud",
              "exp",
              "sub",
              "iat",
              "iss"
          ]
      }
  6. Upload the OIDC configuration:

    $ aws s3api put-object --bucket <oidc_bucket_name> --key keys.json --body ./keys.json
    $ aws s3api put-object --bucket <oidc_bucket_name> --key '.well-known/openid-configuration' --body ./openid-configuration

    Where <oidc_bucket_name> is the S3 bucket that was created to hold the OIDC configuration.

  7. Allow the AWS IAM OpenID Connect (OIDC) identity provider to read these files:

    $ aws s3api put-object-acl --bucket <oidc_bucket_name> --key keys.json --acl public-read
    $ aws s3api put-object-acl --bucket <oidc_bucket_name> --key '.well-known/openid-configuration' --acl public-read
  8. Create an AWS IAM OIDC identity provider:

    1. Get the certificate chain from the server that hosts the OIDC configuration:

      $ echo | openssl s_client -servername <oidc_bucket_name>.s3.<aws_region>.amazonaws.com -connect <oidc_bucket_name>.s3.<aws_region>.amazonaws.com:443 -showcerts 2>/dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".pem"; print >out}'
    2. Calculate the fingerprint for the certificate at the root of the chain:

      $ export BUCKET_FINGERPRINT=$(openssl x509 -in cert<number>.pem -fingerprint -noout | sed -e 's/.*Fingerprint=//' -e 's/://g')

      Where <number> is the highest number in the files that were saved. For example, if 2 is the highest number in the files that were saved, use cert2.pem.

    3. Create the identity provider:

      $ aws iam create-open-id-connect-provider --url $OPENID_BUCKET_URL --thumbprint-list $BUCKET_FINGERPRINT --client-id-list openshift sts.amazonaws.com
    4. Retain the returned ARN of the newly created identity provider. This ARN is later referred to as <aws_iam_openid_arn>.
  9. Generate IAM roles:

    1. Locate all CredentialsRequest CRs in this release image that target the cloud you are deploying on:

      $ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.<y>.<z>-x86_64 --credentials-requests --cloud=aws

      Where <y> and <z> are the numbers corresponding to the version of OpenShift Container Platform you are installing.

    2. For each CredentialsRequest CR, create an IAM role of type Web identity that uses the previously created IAM identity provider, grants the necessary permissions, and establishes a trust relationship with that identity provider.

      For example, for the openshift-machine-api-operator CredentialsRequest CR in 0000_30_machine-api-operator_00_credentials-request.yaml, create an IAM role that allows an identity from the OIDC provider created for the cluster, similar to the following:

      {
          "Role": {
              "Path": "/",
              "RoleName": "openshift-machine-api-aws-cloud-credentials",
              "RoleId": "ARSOMEROLEID",
              "Arn": "arn:aws:iam::123456789012:role/openshift-machine-api-aws-cloud-credentials",
              "CreateDate": "2021-01-06T15:54:13Z",
              "AssumeRolePolicyDocument": {
                  "Version": "2012-10-17",
                  "Statement": [
                      {
                          "Effect": "Allow",
                          "Principal": {
                              "Federated": "<aws_iam_openid_arn>"
                          },
                          "Action": "sts:AssumeRoleWithWebIdentity",
                          "Condition": {
                              "StringEquals": {
                                  "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com/$BUCKET_NAME:aud": "openshift"
                              }
                          }
                      }
                  ]
              },
              "Description": "OpenShift role for openshift-machine-api/aws-cloud-credentials",
              "MaxSessionDuration": 3600,
              "RoleLastUsed": {
                  "LastUsedDate": "2021-02-03T02:51:24Z",
                  "Region": "<aws_region>"
              }
          }
      }

      Where <aws_iam_openid_arn> is the returned ARN of the newly created identity provider.
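
      One way to create such a role from the CLI (a sketch; trust-policy.json is an assumed local file that contains the AssumeRolePolicyDocument shown above):

      $ aws iam create-role \
        --role-name openshift-machine-api-aws-cloud-credentials \
        --assume-role-policy-document file://trust-policy.json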

    3. To further restrict the role such that only specific cluster ServiceAccount objects can assume the role, modify the trust relationship of each role by updating the .Role.AssumeRolePolicyDocument.Statement[].Condition field to the specific ServiceAccount objects for each component.

      • Modify the trust relationship of the cluster-image-registry-operator role to have the following condition:

        "Condition": {
          "StringEquals": {
            "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [
              "system:serviceaccount:openshift-image-registry:registry",
              "system:serviceaccount:openshift-image-registry:cluster-image-registry-operator"
            ]
          }
        }
      • Modify the trust relationship of the openshift-ingress-operator role to have the following condition:

        "Condition": {
          "StringEquals": {
            "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [
              "system:serviceaccount:openshift-ingress-operator:ingress-operator"
            ]
          }
        }
      • Modify the trust relationship of the openshift-cluster-csi-drivers role to have the following condition:

        "Condition": {
          "StringEquals": {
            "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [
              "system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator",
              "system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-controller-sa"
            ]
          }
        }
      • Modify the trust relationship of the openshift-machine-api role to have the following condition:

        "Condition": {
          "StringEquals": {
            "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [
              "system:serviceaccount:openshift-machine-api:machine-api-controllers"
            ]
          }
        }
  10. For each IAM role, attach an IAM policy to the role that reflects the required permissions from the corresponding CredentialsRequest objects.

    For example, for openshift-machine-api, attach an IAM policy similar to the following:

    {
        "RoleName": "openshift-machine-api-aws-cloud-credentials",
        "PolicyName": "openshift-machine-api-aws-cloud-credentials",
        "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": [
                        "ec2:CreateTags",
                        "ec2:DescribeAvailabilityZones",
                        "ec2:DescribeDhcpOptions",
                        "ec2:DescribeImages",
                        "ec2:DescribeInstances",
                        "ec2:DescribeSecurityGroups",
                        "ec2:DescribeSubnets",
                        "ec2:DescribeVpcs",
                        "ec2:RunInstances",
                        "ec2:TerminateInstances",
                        "elasticloadbalancing:DescribeLoadBalancers",
                        "elasticloadbalancing:DescribeTargetGroups",
                        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                        "elasticloadbalancing:RegisterTargets",
                        "iam:PassRole",
                        "iam:CreateServiceLinkedRole"
                    ],
                    "Resource": "*"
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "kms:Decrypt",
                        "kms:Encrypt",
                        "kms:GenerateDataKey",
                        "kms:GenerateDataKeyWithoutPlainText",
                        "kms:DescribeKey"
                    ],
                    "Resource": "*"
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "kms:RevokeGrant",
                        "kms:CreateGrant",
                        "kms:ListGrants"
                    ],
                    "Resource": "*",
                    "Condition": {
                        "Bool": {
                            "kms:GrantIsForAWSResource": true
                        }
                    }
                }
            ]
        }
    }
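
    One way to attach such an inline policy with the AWS CLI (a sketch; policy.json is an assumed local file that contains the PolicyDocument shown above):

    $ aws iam put-role-policy \
      --role-name openshift-machine-api-aws-cloud-credentials \
      --policy-name openshift-machine-api-aws-cloud-credentials \
      --policy-document file://policy.json
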
  11. Prepare to run the OpenShift Container Platform installer:

    1. Create the install-config.yaml file:

      $ ./openshift-install create install-config
    2. Configure the cluster to install with the CCO in manual mode:

      $ echo "credentialsMode: Manual" >> install-config.yaml
    3. Create install manifests:

      $ ./openshift-install create manifests
    4. Create a tls directory, and copy the private key generated previously there:

      Note

      The target file name must be ./tls/bound-service-account-signing-key.key.

      $ mkdir tls ; cp <path_to_service_account_signer> ./tls/bound-service-account-signing-key.key
    5. Create a custom Authentication CR with the file name cluster-authentication-02-config.yaml:

      $ cat << EOF > manifests/cluster-authentication-02-config.yaml
      apiVersion: config.openshift.io/v1
      kind: Authentication
      metadata:
        name: cluster
      spec:
        serviceAccountIssuer: $OPENID_BUCKET_URL
      EOF
    6. For each CredentialsRequest CR that is extracted from the release image, create a secret with the target namespace and target name that is indicated in each CredentialsRequest, substituting the AWS IAM role ARN created previously for each component:

      Example secret manifest for openshift-machine-api:

      $ cat manifests/openshift-machine-api-aws-cloud-credentials-credentials.yaml
      apiVersion: v1
      stringData:
        credentials: |-
          [default]
          role_arn = arn:aws:iam::123456789012:role/openshift-machine-api-aws-cloud-credentials
          web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
      kind: Secret
      metadata:
        name: aws-cloud-credentials
        namespace: openshift-machine-api
      type: Opaque

18.5.1.2. Running the installer

  • Run the OpenShift Container Platform installer:

    $ ./openshift-install create cluster

18.5.1.3. Verifying the installation

  1. Connect to the OpenShift Container Platform cluster.
  2. Verify that the cluster does not have root credentials:

    $ oc get secrets -n kube-system aws-creds

    The output should look similar to:

    Error from server (NotFound): secrets "aws-creds" not found
  3. Verify that the components are assuming the IAM roles that are specified in the secret manifests, instead of using credentials that are created by the CCO:

    Example command with the Image Registry Operator

    $ oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode

    The output should show the role and web identity token that are used by the component and look similar to:

    Example output with the Image Registry Operator

    [default]
    role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token