Installing on Azure

OpenShift Container Platform 4.6

Installing OpenShift Container Platform Azure clusters

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for installing and uninstalling OpenShift Container Platform clusters on Microsoft Azure.

Chapter 1. Installing on Azure

1.1. Configuring an Azure account

Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account.

Important

All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

1.1.1. Azure account limits

The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters.

Important

Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores.

Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure.
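
If you have the Azure CLI installed, you can review your current vCPU usage and limits for a region before you install. This is a quick check only, and the region name is a placeholder:

$ az vm list-usage --location <region> --output table

Compare the limits reported for the relevant VM families and for the total regional vCPUs against the requirements in the following table.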

The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters.

Component | Number of components required by default | Default Azure limit | Description

vCPU

40

20 per region

A default cluster requires 40 vCPUs, so you must increase the account limit.

By default, each cluster creates the following instances:

  • One bootstrap machine, which is removed after installation
  • Three control plane machines
  • Three compute machines

Because the bootstrap machine uses a Standard_D4s_v3 machine, which uses 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs (4 + 3 × 8 + 3 × 4). The bootstrap node VM, which uses 4 vCPUs, is used only during installation.

To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.

By default, the installation program distributes control plane and compute machines across all availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.

OS Disk

7

 

The VM OS disk must be able to sustain a minimum throughput of 5000 IOPS / 200 MBps. This throughput can be provided by using a minimum 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk size, so to achieve the throughput supported by Standard_D8s_v3, or other similar machine types, and the 5000 IOPS target, at least a P30 disk is required.

Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. The reads performed from the cache, which is present either in the VM memory or in the local SSD disk, are much faster than the reads from the data disk, which is in the blob storage.

VNet

1

1000 per region

Each default cluster requires one Virtual Network (VNet), which contains two subnets.

Network interfaces

6

65,536 per region

Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.

Network security groups

2

5000

Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and compute node subnets:

controlplane

Allows the control plane machines to be reached on port 6443 from anywhere

node

Allows worker nodes to be reached from the Internet on ports 80 and 443

Network load balancers

3

1000 per region

Each cluster creates the following load balancers:

default

Public IP address that load balances requests to ports 80 and 443 across worker machines

internal

Private IP address that load balances requests to ports 6443 and 22623 across control plane machines

external

Public IP address that load balances requests to port 6443 across control plane machines

If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers.

Public IP addresses

3

 

Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.

Private IP addresses

7

 

The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.

Spot VM vCPUs (optional)

0

If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node.

20 per region

This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster.

Note

Using spot VMs for control plane nodes is not recommended.

1.1.2. Configuring a public DNS zone in Azure

To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

  1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source.

    Note

    For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation.

  2. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation.
  3. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses.

    Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com. A CLI sketch of this step appears after this procedure.

  4. If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain.
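
If you prefer the CLI, a minimal Azure CLI sketch of creating the public zone and extracting its name servers (step 3) follows; the resource group and zone names are placeholders:

$ az network dns zone create --resource-group <resource_group> --name clusters.openshiftcorp.com
$ az network dns zone show --resource-group <resource_group> --name clusters.openshiftcorp.com --query nameServers

Update the registrar, or the parent domain if you delegate a subdomain, with the name servers that the second command returns.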

1.1.3. Increasing Azure account limits

To increase an account limit, file a support request on the Azure portal.

Note

You can increase only one type of quota per support request.

Procedure

  1. From the Azure portal, click Help + support in the lower left corner.
  2. Click New support request and then select the required values:

    1. From the Issue type list, select Service and subscription limits (quotas).
    2. From the Subscription list, select the subscription to modify.
    3. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster.
    4. Click Next: Solutions.
  3. On the Problem Details page, provide the required information for your quota increase:

    1. Click Provide details and provide the required details in the Quota details window.
    2. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details.
  4. Click Next: Review + create and then click Create.

1.1.4. Required Azure roles

OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles:

  • User Access Administrator
  • Owner

To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation.
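
If you have the Azure CLI installed, you can quickly check which roles are assigned to your account; the sign-in name is a placeholder:

$ az role assignment list --assignee <your_sign_in_name> --output table

Confirm that the User Access Administrator and Owner roles appear for the subscription that you plan to use.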

1.1.5. Creating a service principal

Because OpenShift Container Platform and its installation program must create Microsoft Azure resources through Azure Resource Manager, you must create a service principal to represent it.

Prerequisites

  • Install or update the Azure CLI.
  • Install the jq package.
  • Your Azure account has the required roles for the subscription that you use.

Procedure

  1. Log in to the Azure CLI:

    $ az login

    Log in to Azure in the web console by using your credentials.

  2. If your Azure account uses subscriptions, ensure that you are using the right subscription.

    1. View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster:

      $ az account list --refresh

      Example output

      [
        {
          "cloudName": "AzureCloud",
          "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
          "isDefault": true,
          "name": "Subscription Name",
          "state": "Enabled",
          "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
          "user": {
            "name": "you@example.com",
            "type": "user"
          }
        }
      ]

    2. View your active account details and confirm that the tenantId value matches the subscription you want to use:

      $ az account show

      Example output

      {
        "environmentName": "AzureCloud",
        "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
        "isDefault": true,
        "name": "Subscription Name",
        "state": "Enabled",
        "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1
        "user": {
          "name": "you@example.com",
          "type": "user"
        }
      }

      1
      Ensure that the value of the tenantId parameter is the UUID of the correct subscription.
    3. If you are not using the right subscription, change the active subscription:

      $ az account set -s <id> 1
      1
      Substitute the value of the id for the subscription that you want to use for <id>.
    4. If you changed the active subscription, display your account information again:

      $ az account show

      Example output

      {
        "environmentName": "AzureCloud",
        "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
        "isDefault": true,
        "name": "Subscription Name",
        "state": "Enabled",
        "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
        "user": {
          "name": "you@example.com",
          "type": "user"
        }
      }

  3. Record the values of the tenantId and id parameters from the previous output. You need these values during OpenShift Container Platform installation.
  4. Create the service principal for your account:

    $ az ad sp create-for-rbac --role Contributor --name <service_principal> 1
    1
    Replace <service_principal> with the name to assign to the service principal.

    Example output

    Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names
    Retrying role assignment creation: 1/36
    Retrying role assignment creation: 2/36
    Retrying role assignment creation: 3/36
    Retrying role assignment creation: 4/36
    {
      "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
      "displayName": "<service_principal>",
      "name": "http://<service_principal>",
      "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
      "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee"
    }

  5. Record the values of the appId and password parameters from the previous output. You need these values during OpenShift Container Platform installation.
  6. Grant additional permissions to the service principal.

    • You must always add the Contributor and User Access Administrator roles to the app registration service principal so the cluster can assign credentials for its components.
    • To operate the Cloud Credential Operator (CCO) in mint mode, the app registration service principal also requires the Azure Active Directory Graph/Application.ReadWrite.OwnedBy API permission.
    • To operate the CCO in passthrough mode, the app registration service principal does not require additional API permissions.

    For more information about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

    1. To assign the User Access Administrator role, run the following command:

      $ az role assignment create --role "User Access Administrator" \
          --assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \
             | jq '.[0].id' -r) 1
      1
      Replace <appId> with the appId parameter value for your service principal.
    2. To assign the Azure Active Directory Graph permission, run the following command:

      $ az ad app permission add --id <appId> \ 1
           --api 00000002-0000-0000-c000-000000000000 \
           --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role
      1
      Replace <appId> with the appId parameter value for your service principal.

      Example output

      Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective

      For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions.

    3. Approve the permissions request. If your account does not have the Azure Active Directory tenant administrator role, follow the guidelines for your organization to request that the tenant administrator approve your permissions request.

      $ az ad app permission grant --id <appId> \ 1
           --api 00000002-0000-0000-c000-000000000000
      1
      Replace <appId> with the appId parameter value for your service principal.

1.1.6. Supported Azure regions

The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. The following Azure regions were tested and validated in OpenShift Container Platform version 4.6.1:

Supported Azure public regions
  • australiacentral (Australia Central)
  • australiaeast (Australia East)
  • australiasoutheast (Australia South East)
  • brazilsouth (Brazil South)
  • canadacentral (Canada Central)
  • canadaeast (Canada East)
  • centralindia (Central India)
  • centralus (Central US)
  • eastasia (East Asia)
  • eastus (East US)
  • eastus2 (East US 2)
  • francecentral (France Central)
  • germanywestcentral (Germany West Central)
  • japaneast (Japan East)
  • japanwest (Japan West)
  • koreacentral (Korea Central)
  • koreasouth (Korea South)
  • northcentralus (North Central US)
  • northeurope (North Europe)
  • norwayeast (Norway East)
  • southafricanorth (South Africa North)
  • southcentralus (South Central US)
  • southeastasia (Southeast Asia)
  • southindia (South India)
  • switzerlandnorth (Switzerland North)
  • uaenorth (UAE North)
  • uksouth (UK South)
  • ukwest (UK West)
  • westcentralus (West Central US)
  • westeurope (West Europe)
  • westindia (West India)
  • westus (West US)
  • westus2 (West US 2)
Supported Azure Government regions

Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6:

  • usgovtexas (US Gov Texas)
  • usgovvirginia (US Gov Virginia)

You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested.

1.1.7. Next steps

1.2. Manually creating IAM for Azure

In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.

1.2.1. Alternatives to storing administrator-level secrets in the kube-system project

The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file.

If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually.
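
For example, the relevant excerpt of an install-config.yaml file that requests manual mode might look like the following sketch; the base domain and cluster name are placeholders:

apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
metadata:
  name: mycluster
# ...remaining installation configuration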

Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.

Additional resources

  • For a detailed description of all available CCO credential modes and their supported platforms, see the Cloud Credential Operator reference.

1.2.2. Manually create IAM

The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.

Procedure

  1. To generate the manifests, run the following command from the directory that contains the installation program:

    $ openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the directory name to store the files that the installation program creates.
  2. Insert a config map into the manifests directory so that the Cloud Credential Operator is placed in manual mode:

    $ cat <<EOF > mycluster/manifests/cco-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cloud-credential-operator-config
      namespace: openshift-cloud-credential-operator
      annotations:
        release.openshift.io/create-only: "true"
    data:
      disabled: "true"
    EOF
  3. Remove the admin credential secret created using your local cloud credentials. This removal prevents your admin credential from being stored in the cluster:

    $ rm mycluster/openshift/99_cloud-creds-secret.yaml
  4. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use:

    $ openshift-install version

    Example output

    release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

  5. Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on:

    $ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure

    This displays the details for each request.

    Sample CredentialsRequest object

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      labels:
        controller-tools.k8s.io: "1.0"
      name: openshift-image-registry-azure
      namespace: openshift-cloud-credential-operator
    spec:
      secretRef:
        name: installer-cloud-credentials
        namespace: openshift-image-registry
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AzureProviderSpec
        roleBindings:
        - role: Contributor

  6. Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider. A sketch of one such secret for Azure appears after this procedure.
  7. From the directory that contains the installation program, proceed with your cluster creation:

    $ openshift-install create cluster --dir <installation_directory>
    Important

    Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. For details, see the Upgrading clusters with manually maintained credentials section of the installation content for your cloud provider.
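
For example, a secret that satisfies the sample CredentialsRequest object shown above might look like the following sketch on Azure. It uses the Azure secret format that is described in the next section, and all credential values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-image-registry
  name: installer-cloud-credentials
stringData:
  azure_subscription_id: <SubscriptionID>
  azure_client_id: <ClientID>
  azure_client_secret: <ClientSecret>
  azure_tenant_id: <TenantID>
  azure_resource_prefix: <ResourcePrefix>
  azure_resourcegroup: <ResourceGroup>
  azure_region: <Region>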

1.2.3. Admin credentials root secret format

Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials, with mint mode, or by copying the credentials root secret, with passthrough mode.

The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.

Microsoft Azure secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: azure-credentials
stringData:
  azure_subscription_id: <SubscriptionID>
  azure_client_id: <ClientID>
  azure_client_secret: <ClientSecret>
  azure_tenant_id: <TenantID>
  azure_resource_prefix: <ResourcePrefix>
  azure_resourcegroup: <ResourceGroup>
  azure_region: <Region>

On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster’s infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests:

$ cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r

Example output

mycluster-2mpcn

This value would be used in the secret data as follows:

azure_resource_prefix: mycluster-2mpcn
azure_resourcegroup: mycluster-2mpcn-rg

1.2.4. Upgrading clusters with manually maintained credentials

If credentials are added in a future release, the Cloud Credential Operator (CCO) upgradable status for a cluster with manually maintained credentials changes to false. For minor releases, for example, from 4.5 to 4.6, this status prevents you from upgrading until you have addressed any updated permissions. For z-stream releases, for example, from 4.5.10 to 4.5.11, the upgrade is not blocked, but the credentials must still be updated for the new release.

Use the Administrator perspective of the web console to determine if the CCO is upgradeable.

  1. Navigate to Administration → Cluster Settings.
  2. To view the CCO status details, click cloud-credential in the Cluster Operators list.
  3. If the Upgradeable status in the Conditions section is False, examine the credentialsRequests for the new release and update the manually maintained credentials on your cluster to match before upgrading.
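
Alternatively, if you are logged in with the oc CLI and have jq installed, a sketch of an equivalent command-line check is:

$ oc get clusteroperator cloud-credential -o json | jq '.status.conditions[] | select(.type=="Upgradeable")'

If the condition reports "status": "False", update the manually maintained credentials before you upgrade.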

In addition to creating new credentials for the release image that you are upgrading to, you must review the required permissions for existing credentials and accommodate any new permissions requirements for existing components in the new release. The CCO cannot detect these mismatches and will not set upgradable to false in this case.

The Manually creating IAM section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud.

1.2.5. Mint mode

Mint mode is the default and recommended Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS, GCP, and Azure.

In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions.

The benefits of mint mode include:

  • Each cluster component has only the permissions it requires
  • Automatic, ongoing reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades

One drawback is that mint mode requires admin credential storage in a cluster kube-system secret.

1.2.6. Next steps

1.3. Installing a cluster quickly on Azure

In OpenShift Container Platform version 4.6, you can install a cluster on Microsoft Azure that uses the default configuration options.

1.3.1. Prerequisites

1.3.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.3.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.3.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.3.5. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the directory name to store the files that the installation program creates.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Important

    Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    Provide values at the prompts:

    1. Optional: Select an SSH key to use to access your cluster machines.

      Note

      For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    2. Select azure as the platform to target.
    3. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:

      • azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
      • azure tenant id: The tenant ID. Specify the tenantId value in your account output.
      • azure service principal client id: The value of the appId parameter for the service principal.
      • azure service principal client secret: The value of the password parameter for the service principal.
    4. Select the region to deploy the cluster to.
    5. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
    6. Enter a descriptive name for your cluster.

      Important

      All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

    7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.3.6. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.6.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.3.6.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.3.6.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.3.7. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.3.8. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.3.9. Next steps

1.4. Installing a cluster on Azure with customizations

In OpenShift Container Platform version 4.6, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.4.1. Prerequisites

1.4.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.4.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.4.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.4.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select azure as the platform to target.
      3. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:

        • azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
        • azure tenant id: The tenant ID. Specify the tenantId value in your account output.
        • azure service principal client id: The value of the appId parameter for the service principal.
        • azure service principal client secret: The value of the password parameter for the service principal.
      4. Select the region to deploy the cluster to.
      5. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
      6. Enter a descriptive name for your cluster.

        Important

        All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section. A minimal sample file appears after this procedure.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
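
For reference, a minimal sketch of a customized install-config.yaml file for Azure might look like the following; all values are placeholders, and the available parameters are described in the next section:

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024
  replicas: 3
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      osDisk:
        diskSizeGB: 128
  replicas: 3
platform:
  azure:
    baseDomainResourceGroupName: <resource_group_with_dns_zone>
    region: centralus
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...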

1.4.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.4.5.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.1. Required parameters

Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
1.4.5.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.2. Network parameters

Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

1.4.5.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.3. Optional parameters

Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images. For an illustrative stanza, see the sketch after this table.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
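
The imageContentSources stanza is most often used for restricted network installations. The following is a minimal sketch, assuming a hypothetical mirror registry at mirror.example.com:5000; substitute the repositories that your own mirror uses:

imageContentSources:
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev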
1.4.5.1.4. Additional Azure configuration parameters

Additional Azure configuration parameters are described in the following table:

Table 1.4. Additional Azure parameters

ParameterDescriptionValues

compute.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

compute.platform.azure.osDisk.diskType

Defines the type of disk.

standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS.

controlPlane.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 1024.

controlPlane.platform.azure.osDisk.diskType

Defines the type of disk.

premium_LRS or standardSSD_LRS. The default is premium_LRS.

platform.azure.baseDomainResourceGroupName

The name of the resource group that contains the DNS zone for your base domain.

String, for example production_cluster.

platform.azure.outboundType

The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.

LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region

The name of the Azure region that hosts your cluster.

Any valid region name, such as centralus.

platform.azure.zone

List of availability zones to place machines in. For high availability, specify at least two zones.

List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName

The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.

String.

platform.azure.virtualNetwork

The name of the existing VNet that you want to deploy your cluster to.

String.

platform.azure.controlPlaneSubnet

The name of the existing subnet in your VNet that you want to deploy your control plane machines to.

String.

platform.azure.computeSubnet

The name of the existing subnet in your VNet that you want to deploy your compute machines to.

String.

platform.azure.cloudName

The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.

Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

Note

You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.
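
For example, to deploy a cluster into an existing VNet, you might combine several of these parameters in the platform section of the install-config.yaml file, as in the following minimal sketch. The resource group, VNet, and subnet names shown are placeholders for your own pre-existing resources:

platform:
  azure:
    region: centralus
    baseDomainResourceGroupName: dns_resource_group
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: existing_vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet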

1.4.5.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: centralus 12
    resourceGroupName: existing_resource_group 13
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
1 10 12 14
Required. The installation program prompts you for this value.
2 6
If you do not provide these parameters and values, the installation program provides the default value.
3 7
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8
You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes (also known as master nodes) is 1024 GB.
9
Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
11
Specify the name of the resource group that contains the DNS zone for your base domain.
13
Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
15
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

16
You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.4.5.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.4.6. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.4.7. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.4.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
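
    For example, assuming that /usr/local/bin is listed in your PATH, you might place the binary there:

    $ sudo mv oc /usr/local/bin/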

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.4.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.4.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.4.8. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.4.9. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.4.10. Next steps

1.5. Installing a cluster on Azure with network customizations

In OpenShift Container Platform version 4.6, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

1.5.1. Prerequisites

1.5.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.5.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.5.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.5.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select azure as the platform to target.
      3. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:

        • azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
        • azure tenant id: The tenant ID. Specify the tenantId value in your account output.
        • azure service principal client id: The value of the appId parameter for the service principal.
        • azure service principal client secret: The value of the password parameter for the service principal.
      4. Select the region to deploy the cluster to.
      5. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
      6. Enter a descriptive name for your cluster.

        Important

        All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.5.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.5.5.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.5. Required parameters

ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
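
Taken together, the required parameters form the skeleton of an install-config.yaml file. The following is a minimal sketch with placeholder values; a complete file also contains additional platform-specific and optional parameters, as shown in the full sample later in this section:

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  azure:
    region: centralus
pullSecret: '{"auths": ...}'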
1.5.5.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.6. Network parameters

ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

1.5.5.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.7. Optional parameters

ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
1.5.5.1.4. Additional Azure configuration parameters

Additional Azure configuration parameters are described in the following table:

Table 1.8. Additional Azure parameters

ParameterDescriptionValues

compute.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

compute.platform.azure.osDisk.diskType

Defines the type of disk.

standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS.

controlPlane.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 1024.

controlPlane.platform.azure.osDisk.diskType

Defines the type of disk.

premium_LRS or standardSSD_LRS. The default is premium_LRS.

platform.azure.baseDomainResourceGroupName

The name of the resource group that contains the DNS zone for your base domain.

String, for example production_cluster.

platform.azure.outboundType

The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.

LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region

The name of the Azure region that hosts your cluster.

Any valid region name, such as centralus.

platform.azure.zone

List of availability zones to place machines in. For high availability, specify at least two zones.

List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName

The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.

String.

platform.azure.virtualNetwork

The name of the existing VNet that you want to deploy your cluster to.

String.

platform.azure.controlPlaneSubnet

The name of the existing subnet in your VNet that you want to deploy your control plane machines to.

String.

platform.azure.computeSubnet

The name of the existing subnet in your VNet that you want to deploy your compute machines to.

String.

platform.azure.cloudName

The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.

Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

Note

You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.

1.5.5.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 12
    region: centralus 13
    resourceGroupName: existing_resource_group 14
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
1 10 13 15
Required. The installation program prompts you for this value.
2 6 11
If you do not provide these parameters and values, the installation program provides the default value.
3 7
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8
You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes (also known as master nodes) is 1024 GB.
9
Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
12
Specify the name of the resource group that contains the DNS zone for your base domain.
14
Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
16
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

17
You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.5.5.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.5.6. Network configuration phases

When you specify a cluster configuration before installation, there are several phases in the installation procedure when you can modify the network configuration:

Phase 1

After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields:

  • networking.networkType
  • networking.clusterNetwork
  • networking.serviceNetwork
  • networking.machineNetwork

    For more information on these fields, refer to "Installation configuration parameters".

    Note

    Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

Phase 2
After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify.

You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.

1.5.7. Specifying advanced network configuration

You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster.

Important

Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.

Prerequisites

  • Create the install-config.yaml file and complete any modifications to it.

Procedure

  1. Change to the directory that contains the installation program and create the manifests:

    $ ./openshift-install create manifests --dir <installation_directory>

    where:

    <installation_directory>
    Specifies the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
    EOF

    where:

    <installation_directory>
    Specifies the directory name that contains the manifests/ directory for your cluster.
  3. Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following example:

    Specify a different VXLAN port for the OpenShift SDN network provider

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        openshiftSDNConfig:
          vxlanPort: 4800

  4. Save the cluster-network-03-config.yml file and quit the text editor.
  5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

1.5.8. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group, and these fields cannot be changed:

clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.

You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.

1.5.8.1. Cluster Network Operator configuration object

The fields for the Cluster Network Operator (CNO) are described in the following table:

Table 1.9. Cluster Network Operator configuration object

FieldTypeDescription

metadata.name

string

The name of the CNO object. This name is always cluster.

spec.clusterNetwork

array

A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

This value is read-only and specified in the install-config.yaml file.

spec.serviceNetwork

array

A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

This value is read-only and specified in the install-config.yaml file.

spec.defaultNetwork

object

Configures the Container Network Interface (CNI) cluster network provider for the cluster network.

spec.kubeProxyConfig

object

The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect.

defaultNetwork object configuration

The values for the defaultNetwork object are defined in the following table:

Table 1.10. defaultNetwork object

FieldTypeDescription

type

string

Either OpenShiftSDN or OVNKubernetes. The cluster network provider is selected during installation. This value cannot be changed after cluster installation.

Note

OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default.

openshiftSDNConfig

object

This object is only valid for the OpenShift SDN cluster network provider.

ovnKubernetesConfig

object

This object is only valid for the OVN-Kubernetes cluster network provider.

Configuration for the OpenShift SDN CNI cluster network provider

The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.

Table 1.11. openshiftSDNConfig object

FieldTypeDescription

mode

string

Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy.

The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation.

mtu

integer

The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

This value cannot be changed after cluster installation.

vxlanPort

integer

The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation.

If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.

Example OpenShift SDN configuration

defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789

Configuration for the OVN-Kubernetes CNI cluster network provider

The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.

Table 1.12. ovnKubernetesConfig object

FieldTypeDescription

mtu

integer

The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

This value cannot be changed after cluster installation.

genevePort

integer

The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation.

Example OVN-Kubernetes configuration

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081

kubeProxyConfig object configuration

The values for the kubeProxyConfig object are defined in the following table:

Table 1.13. kubeProxyConfig object

FieldTypeDescription

iptablesSyncPeriod

string

The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

Note

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

proxyArguments.iptables-min-sync-period

array

The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is:

kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
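
For reference, the following stanza is only a sketch that sets both fields explicitly to the default values described in this table; you do not normally need to set either field:

kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s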

1.5.9. Configuring hybrid networking with OVN-Kubernetes

You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.

Important

You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

  • You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.

Procedure

  1. Change to the directory that contains the installation program and create the manifests:

    $ ./openshift-install create manifests --dir <installation_directory>

    where:

    <installation_directory>
    Specifies the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
    EOF

    where:

    <installation_directory>
    Specifies the directory name that contains the manifests/ directory for your cluster.
  3. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

    Specify a hybrid networking configuration

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        ovnKubernetesConfig:
          hybridOverlayConfig:
            hybridClusterNetwork: 1
            - cidr: 10.132.0.0/14
              hostPrefix: 23
            hybridOverlayVXLANPort: 9898 2

    1
    Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.
    2
    Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
  4. Save the cluster-network-03-config.yml file and quit the text editor.
  5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
Note

For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.

1.5.10. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.5.11. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.5.11.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
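
    For example, assuming that /usr/local/bin is already included in your PATH, you could place the binary there. The destination directory is illustrative only; adjust it for your system:

    $ sudo mv oc /usr/local/bin/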

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.5.11.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.5.11.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.5.12. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.5.13. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.5.14. Next steps

1.6. Installing a cluster on Azure into an existing VNet

In OpenShift Container Platform version 4.6, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.6.1. Prerequisites

1.6.2. About reusing a VNet for your OpenShift Container Platform cluster

In OpenShift Container Platform 4.6, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

1.6.2.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

  • Subnets
  • Route tables
  • VNets
  • Network Security Groups
Note

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

  • The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
  • The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

  • All the specified subnets exist.
  • There are two private subnets, one for the control plane machines and one for the compute machines.
  • The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.
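
The following install-config.yaml fragment is only a sketch of how an existing VNet is described to the installation program. The resource group, VNet, and subnet names are placeholders, and the machineNetwork CIDR must fall within the VNet's address space:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  azure:
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
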
Note

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

1.6.2.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

Important

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.

Table 1.14. Required ports

Port      Description                                            Control plane    Compute
80        Allows HTTP traffic                                                     x
443       Allows HTTPS traffic                                                    x
6443      Allows communication to the control plane machines     x
22623     Allows communication to the machine config server      x

Note

Because cluster components do not modify the user-provided network security groups, a pseudo network security group is created for the Kubernetes controllers to modify without impacting the rest of the environment.
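
To create the required rules, you can use the Azure CLI. The following command is only a sketch with placeholder resource names; it adds the inbound rule for the Kubernetes API port from Table 1.14, and you should confirm the options against the version of the Azure CLI that you have installed:

$ az network nsg rule create \
    --resource-group <resource_group> \
    --nsg-name <nsg_name> \
    --name allow-openshift-api \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 6443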

1.6.2.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

1.6.2.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.

1.6.3. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.6.4. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.6.5. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.6.6. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select azure as the platform to target.
      3. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:

        • azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
        • azure tenant id: The tenant ID. Specify the tenantId value in your account output.
        • azure service principal client id: The value of the appId parameter for the service principal.
        • azure service principal client secret: The value of the password parameter for the service principal.
      4. Select the region to deploy the cluster to.
      5. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
      6. Enter a descriptive name for your cluster.

        Important

        All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.6.6.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.6.6.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.15. Required parameters

Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
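
For reference only, the following sketch reduces an install-config.yaml file to the required parameters in this table plus a minimal Azure platform stanza. All values are placeholders, and a fully customized sample appears later in this section:

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
platform:
  azure:
    baseDomainResourceGroupName: resource_group
    region: centralus
pullSecret: '{"auths": ...}'
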
1.6.6.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.16. Network parameters

Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

1.6.6.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.17. Optional parameters

Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
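
The imageContentSources parameter described in the preceding table takes the following structure. This is only a sketch in which the mirror registry host name and repository name are placeholders; the source shown is the repository that release images are typically pulled from:

imageContentSources:
- mirrors:
  - <mirror_registry>/<repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
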
1.6.6.1.4. Additional Azure configuration parameters

Additional Azure configuration parameters are described in the following table:

Table 1.18. Additional Azure parameters

Parameter | Description | Values

compute.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

compute.platform.azure.osDisk.diskType

Defines the type of disk.

standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS.

controlPlane.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 1024.

controlPlane.platform.azure.osDisk.diskType

Defines the type of disk.

premium_LRS or standardSSD_LRS. The default is premium_LRS.

platform.azure.baseDomainResourceGroupName

The name of the resource group that contains the DNS zone for your base domain.

String, for example production_cluster.

platform.azure.outboundType

The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.

LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region

The name of the Azure region that hosts your cluster.

Any valid region name, such as centralus.

platform.azure.zone

List of availability zones to place machines in. For high availability, specify at least two zones.

List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName

The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.

String.

platform.azure.virtualNetwork

The name of the existing VNet that you want to deploy your cluster to.

String.

platform.azure.controlPlaneSubnet

The name of the existing subnet in your VNet that you want to deploy your control plane machines to.

String, for example control_plane_subnet.

platform.azure.computeSubnet

The name of the existing subnet in your VNet that you want to deploy your compute machines to.

String, for example compute_subnet.

platform.azure.cloudName

The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.

Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

Note

You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.

1.6.6.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: centralus 12
    resourceGroupName: existing_resource_group 13
    networkResourceGroupName: vnet_resource_group 14
    virtualNetwork: vnet 15
    controlPlaneSubnet: control_plane_subnet 16
    computeSubnet: compute_subnet 17
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 18
fips: false 19
sshKey: ssh-ed25519 AAAA... 20
1 10 12 18
Required. The installation program prompts you for this value.
2 6
If you do not provide these parameters and values, the installation program provides the default value.
3 7
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8
You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
9
Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
11
Specify the name of the resource group that contains the DNS zone for your base domain.
13
Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
14
If you use an existing VNet, specify the name of the resource group that contains it.
15
If you use an existing VNet, specify its name.
16
If you use an existing VNet, specify the name of the subnet to host the control plane machines.
17
If you use an existing VNet, specify the name of the subnet to host the compute machines.
19
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

20
You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.6.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.6.7. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.6.8. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.6.8.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.6.8.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.6.8.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.6.9. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.6.10. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.6.11. Next steps

1.7. Installing a private cluster on Azure

In OpenShift Container Platform version 4.6, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.7.1. Prerequisites

1.7.2. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, to the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.7.2.1. Private clusters in Azure

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.

Depending on how your network connects to the private VNet, you might need to use a DNS forwarder in order to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.

The cluster still requires access to the internet to access the Azure APIs.

The following items are not required or created when you install a private cluster:

  • A BaseDomainResourceGroup, since the cluster does not create public records
  • Public IP addresses
  • Public DNS records
  • Public endpoints

    The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
1.7.2.1.1. Limitations

Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.

1.7.2.2. User-defined outbound routing

In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the Internet. This allows you to skip the creation of public IP addresses and the public load balancer.

You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
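
As a sketch only, the relevant install-config.yaml settings for a private cluster that uses user-defined routing might look like the following; the resource group, VNet, and subnet names are placeholders:

platform:
  azure:
    region: centralus
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    outboundType: UserDefinedRouting
publish: Internal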

When configuring a cluster to use user-defined routing, the installation program does not create the following resources:

  • Outbound rules for access to the Internet.
  • Public IPs for the public load balancer.
  • Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.

You must ensure the following items are available before setting user-defined routing:

  • Egress to the Internet is possible to pull container images, unless using an internal registry mirror.
  • The cluster can access Azure APIs.
  • Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.

There are several pre-existing networking setups that are supported for Internet access using user-defined routing.

Private cluster with network address translation

You can use Azure VNET network address translation (NAT) to provide outbound Internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.

When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with Azure Firewall

You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.

When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with a proxy configuration

You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.

When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.

Private cluster with no Internet access

You can install a private cluster on a network that restricts all access to the internet, except for access to the Azure APIs. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:

  • An internal registry mirror that allows for pulling container images
  • Access to Azure APIs

With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.
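
For example, you point the cluster at a local mirror by adding imageContentSources entries to the install-config.yaml file. The mirror host and repository names below are placeholders; the source entries are the repositories that serve OpenShift Container Platform release content:

imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev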

1.7.3. About reusing a VNet for your OpenShift Container Platform cluster

In OpenShift Container Platform 4.6, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

1.7.3.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

  • Subnets
  • Route tables
  • VNets
  • Network Security Groups
Note

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

  • The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
  • The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
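
You reference the VNet and its subnets by name in the install-config.yaml file. As an illustrative excerpt only, with placeholder names, the relevant parameters look like the following:

platform:
  azure:
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet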

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

  • All the specified subnets exist.
  • There are two private subnets, one for the control plane machines and one for the compute machines.
  • The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for.
Note

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

1.7.3.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

Important

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
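
You can create the rules with any tooling that you prefer. As an illustrative sketch only, with placeholder names, priorities, and source CIDR that you must adapt to your environment, the Azure CLI can add inbound rules for two of the required ports listed in the following table:

$ az network nsg rule create -g <resource_group> --nsg-name <nsg_name> \
    -n allow-openshift-api --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes <machine_cidr> \
    --destination-port-ranges 6443
$ az network nsg rule create -g <resource_group> --nsg-name <nsg_name> \
    -n allow-machine-config-server --priority 101 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes <machine_cidr> \
    --destination-port-ranges 22623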

Table 1.19. Required ports

PortDescriptionControl planeCompute

80

Allows HTTP traffic

 

x

443

Allows HTTPS traffic

 

x

6443

Allows communication to the control plane machines

x

 

22623

Allows communication to the machine config server

x

 
Note

Because cluster components do not modify the user-provided network security groups, a separate pseudo-network security group is created for the Kubernetes controllers to update without impacting the rest of the environment.

1.7.3.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

1.7.3.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.

1.7.4. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.7.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.
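
For example, assuming that the corresponding private key is loaded in your ssh-agent, you can connect to a control plane node with a command like the following, where <master_node> is a placeholder for the node’s host name or IP address:

$ ssh core@<master_node>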

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.7.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.7.7. Manually creating the installation configuration file

For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>
    Important

    You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

    Note

    You must name this configuration file install-config.yaml.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
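
    For example, one simple way to keep a copy, with an arbitrarily chosen backup file name, is:

    $ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup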

    Important

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.7.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.7.7.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.20. Required parameters

ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
1.7.7.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.21. Network parameters

ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

1.7.7.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.22. Optional parameters

ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
1.7.7.1.4. Additional Azure configuration parameters

Additional Azure configuration parameters are described in the following table:

Table 1.23. Additional Azure parameters

ParameterDescriptionValues

compute.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

compute.platform.azure.osDisk.diskType

Defines the type of disk.

standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS.

controlPlane.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 1024.

controlPlane.platform.azure.osDisk.diskType

Defines the type of disk.

premium_LRS or standardSSD_LRS. The default is premium_LRS.

platform.azure.baseDomainResourceGroupName

The name of the resource group that contains the DNS zone for your base domain.

String, for example production_cluster.

platform.azure.outboundType

The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.

LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region

The name of the Azure region that hosts your cluster.

Any valid region name, such as centralus.

platform.azure.zone

List of availability zones to place machines in. For high availability, specify at least two zones.

List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName

The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.

String.

platform.azure.virtualNetwork

The name of the existing VNet that you want to deploy your cluster to.

String.

platform.azure.controlPlaneSubnet

The name of the existing subnet in your VNet that you want to deploy your control plane machines to.

String, for example control_plane_subnet.

platform.azure.computeSubnet

The name of the existing subnet in your VNet that you want to deploy your compute machines to.

String, for example compute_subnet.

platform.azure.cloudName

The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.

Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

Note

You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.

1.7.7.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: centralus 12
    resourceGroupName: existing_resource_group 13
    networkResourceGroupName: vnet_resource_group 14
    virtualNetwork: vnet 15
    controlPlaneSubnet: control_plane_subnet 16
    computeSubnet: compute_subnet 17
    outboundType: UserDefinedRouting 18
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 19
fips: false 20
sshKey: ssh-ed25519 AAAA... 21
publish: Internal 22
1 10 12 19
Required. The installation program prompts you for this value.
2 6
If you do not provide these parameters and values, the installation program provides the default value.
3 7
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8
You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
9
Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
11
Specify the name of the resource group that contains the DNS zone for your base domain.
13
Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
14
If you use an existing VNet, specify the name of the resource group that contains it.
15
If you use an existing VNet, specify its name.
16
If you use an existing VNet, specify the name of the subnet to host the control plane machines.
17
If you use an existing VNet, specify the name of the subnet to host the compute machines.
18
You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet.
20
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

21
You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

22
How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.

1.7.7.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.7.8. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.7.9. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.7.9.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.
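
    For example, assuming that /usr/local/bin is on your PATH and that you have the required permissions, you can move the binary with a command like the following:

    $ sudo mv oc /usr/local/bin/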

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.7.9.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.7.9.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.7.10. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.7.11. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.7.12. Next steps

1.8. Installing a cluster on Azure into a government region

In OpenShift Container Platform version 4.6, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster.

1.8.1. Prerequisites

1.8.2. Azure government regions

OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization.

Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment.

Note

The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud, based on the region specified.
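
For example, a minimal excerpt of the platform stanza for a MAG deployment might look like the following; the region name shown is only illustrative:

platform:
  azure:
    cloudName: AzureUSGovernmentCloud
    region: usgovvirginia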

1.8.3. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared with other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, the hosts on the network that you provision, and the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.8.3.1. Private clusters in Azure

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.

Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.

The cluster still requires access to the internet to access the Azure APIs.

The following items are not required or created when you install a private cluster:

  • A BaseDomainResourceGroup, since the cluster does not create public records
  • Public IP addresses
  • Public DNS records
  • Public endpoints

    The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
1.8.3.1.1. Limitations

Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.

1.8.3.2. User-defined outbound routing

In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the Internet. This allows you to skip the creation of public IP addresses and the public load balancer.

You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.

When configuring a cluster to use user-defined routing, the installation program does not create the following resources:

  • Outbound rules for access to the Internet.
  • Public IPs for the public load balancer.
  • Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.

You must ensure the following items are available before setting user-defined routing:

  • Egress to the Internet is possible to pull container images, unless using an internal registry mirror.
  • The cluster can access Azure APIs.
  • Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.

There are several pre-existing networking setups that are supported for Internet access using user-defined routing.

Private cluster with network address translation

You can use Azure VNET network address translation (NAT) to provide outbound Internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.

When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with Azure Firewall

You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.

When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with a proxy configuration

You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.

When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.

Private cluster with no Internet access

You can install a private cluster on a network that restricts all access to the internet, except for access to the Azure APIs. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:

  • An internal registry mirror that allows for pulling container images
  • Access to Azure APIs

With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.

1.8.4. About reusing a VNet for your OpenShift Container Platform cluster

In OpenShift Container Platform 4.6, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

1.8.4.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

  • Subnets
  • Route tables
  • VNets
  • Network Security Groups
Note

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

  • The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
  • The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

  • All the specified subnets exist.
  • There are two private subnets, one for the control plane machines and one for the compute machines.
  • The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.
Note

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

1.8.4.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

Important

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.

Table 1.24. Required ports

PortDescriptionControl planeCompute

80

Allows HTTP traffic

 

x

443

Allows HTTPS traffic

 

x

6443

Allows communication to the control plane machines

x

 

22623

Allows communication to the machine config server

x

 
Note

Because cluster components do not modify the user-provided network security groups, a separate pseudo-network security group is created for the Kubernetes controllers to update without impacting the rest of the environment.

1.8.4.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

1.8.4.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.

1.8.5. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.8.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.8.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.8.8. Manually creating the installation configuration file

When installing OpenShift Container Platform on Microsoft Azure into a government region, you must manually generate your installation configuration file.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>
    Important

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

    Note

    You must name this configuration file install-config.yaml.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.8.8.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

Important

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

1.8.8.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1.25. Required parameters

Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
1.8.8.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 1.26. Network parameters

Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plug-in to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

1.8.8.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 1.27. Optional parameters

Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
1.8.8.1.4. Additional Azure configuration parameters

Additional Azure configuration parameters are described in the following table:

Table 1.28. Additional Azure parameters

Parameter | Description | Values

compute.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

compute.platform.azure.osDisk.diskType

Defines the type of disk.

standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS.

controlPlane.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 1024.

controlPlane.platform.azure.osDisk.diskType

Defines the type of disk.

premium_LRS or standardSSD_LRS. The default is premium_LRS.

platform.azure.baseDomainResourceGroupName

The name of the resource group that contains the DNS zone for your base domain.

String, for example production_cluster.

platform.azure.outboundType

The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.

LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.region

The name of the Azure region that hosts your cluster.

Any valid region name, such as centralus.

platform.azure.zone

List of availability zones to place machines in. For high availability, specify at least two zones.

List of zones, for example ["1", "2", "3"].

platform.azure.networkResourceGroupName

The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName.

String.

platform.azure.virtualNetwork

The name of the existing VNet that you want to deploy your cluster to.

String.

platform.azure.controlPlaneSubnet

The name of the existing subnet in your VNet that you want to deploy your control plane machines to.

Valid CIDR, for example 10.0.0.0/16.

platform.azure.computeSubnet

The name of the existing subnet in your VNet that you want to deploy your compute machines to.

Valid CIDR, for example 10.0.0.0/16.

platform.azure.cloudName

The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used.

Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud.

Note

You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.

1.8.8.2. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: usgovvirginia
    resourceGroupName: existing_resource_group 12
    networkResourceGroupName: vnet_resource_group 13
    virtualNetwork: vnet 14
    controlPlaneSubnet: control_plane_subnet 15
    computeSubnet: compute_subnet 16
    outboundType: UserDefinedRouting 17
    cloudName: AzureUSGovernmentCloud 18
pullSecret: '{"auths": ...}' 19
fips: false 20
sshKey: ssh-ed25519 AAAA... 21
publish: Internal 22
1 10 19
Required.
2 6
If you do not provide these parameters and values, the installation program provides the default value.
3 7
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 8
You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
9
Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
11
Specify the name of the resource group that contains the DNS zone for your base domain.
12
Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
13
If you use an existing VNet, specify the name of the resource group that contains it.
14
If you use an existing VNet, specify its name.
15
If you use an existing VNet, specify the name of the subnet to host the control plane machines.
16
If you use an existing VNet, specify the name of the subnet to host the compute machines.
17
You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet.
18
Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud.
20
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

21
You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

22
How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.

1.8.8.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
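
After the cluster is running, you can inspect the resulting configuration; a minimal check, assuming that the oc CLI is installed and your cluster kubeconfig is exported:

$ oc get proxy/cluster -o yaml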

1.8.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
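
    For reference, a sketch of the command that later uses those retained files to remove the cluster; <installation_directory> is the same directory placeholder used above:

    $ ./openshift-install destroy cluster --dir <installation_directory> --log-level info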

1.8.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.8.10.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
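
    If the directory that holds oc is not on your PATH, move the binary into one that is; a sketch, assuming that /usr/local/bin exists, is on your PATH, and is writable with sudo:

    $ sudo mv oc /usr/local/bin/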

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.8.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.8.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.8.11. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.8.12. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

1.8.13. Next steps

1.9. Installing a cluster on Azure using ARM templates

In OpenShift Container Platform version 4.6, you can install a cluster on Microsoft Azure by using infrastructure that you provide.

Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own.

Important

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.9.1. Prerequisites

1.9.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster.

You must have Internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

1.9.3. Configuring your Azure project

Before you can install OpenShift Container Platform, you must configure an Azure project to host it.

Important

All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

1.9.3.1. Azure account limits

The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters.

Important

Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores.

Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure.

The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters.

Component | Number of components required by default | Default Azure limit | Description

vCPU

40

20 per region

A default cluster requires 40 vCPUs, so you must increase the account limit.

By default, each cluster creates the following instances:

  • One bootstrap machine, which is removed after installation
  • Three control plane machines
  • Three compute machines

Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation.

To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.

By default, the installation program distributes control plane and compute machines across all availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.

OS Disk

7

 

VM OS disk must be able to sustain a minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by Standard_D8s_v3, or other similar machine types available, and the target of 5000 IOPS, at least a P30 disk is required.

Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. The reads performed from the cache, which is present either in the VM memory or in the local SSD disk, are much faster than the reads from the data disk, which is in the blob storage.

VNet

1

1000 per region

Each default cluster requires one Virtual Network (VNet), which contains two subnets.

Network interfaces

6

65,536 per region

Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.

Network security groups

2

5000

Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets:

controlplane

Allows the control plane machines to be reached on port 6443 from anywhere

node

Allows worker nodes to be reached from the Internet on ports 80 and 443

Network load balancers

3

1000 per region

Each cluster creates the following load balancers:

default

Public IP address that load balances requests to ports 80 and 443 across worker machines

internal

Private IP address that load balances requests to ports 6443 and 22623 across control plane machines

external

Public IP address that load balances requests to port 6443 across control plane machines

If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers.

Public IP addresses

3

 

Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.

Private IP addresses

7

 

The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.

Spot VM vCPUs (optional)

0

If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node.

20 per region

This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster.

Note

Using spot VMs for control plane nodes is not recommended.

1.9.3.2. Configuring a public DNS zone in Azure

To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

  1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source.

    Note

    For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation.

  2. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation.
  3. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses.

    Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

  4. If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain.

You can view Azure’s DNS solution by visiting this example for creating DNS zones.
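
If you prefer the Azure CLI to the portal tutorial, the following sketch shows one way to create the public zone and list its authoritative name servers; the resource group name, region, and domain are placeholders that you substitute:

$ az group create --name <dns_resource_group> --location <region>
$ az network dns zone create -g <dns_resource_group> -n <base_domain>
$ az network dns zone show -g <dns_resource_group> -n <base_domain> --query nameServers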

1.9.3.3. Increasing Azure account limits

To increase an account limit, file a support request on the Azure portal.

Note

You can increase only one type of quota per support request.

Procedure

  1. From the Azure portal, click Help + support in the lower left corner.
  2. Click New support request and then select the required values:

    1. From the Issue type list, select Service and subscription limits (quotas).
    2. From the Subscription list, select the subscription to modify.
    3. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster.
    4. Click Next: Solutions.
  3. On the Problem Details page, provide the required information for your quota increase:

    1. Click Provide details and provide the required details in the Quota details window.
    2. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details.
  4. Click Next: Review + create and then click Create.

1.9.3.4. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
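
As a sketch of what that post-installation approval can look like, assuming that the oc CLI is configured against the new cluster and that you have verified each request:

$ oc get csr
$ oc adm certificate approve <csr_name>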

1.9.3.5. Required Azure roles

OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles:

  • User Access Administrator
  • Owner

To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation.
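
If you prefer the Azure CLI to the portal, the following sketch assigns one of the required roles at the subscription scope; the user principal name and subscription ID are placeholders:

$ az role assignment create --assignee "<user_principal_name>" \
    --role "User Access Administrator" \
    --scope "/subscriptions/<subscription_id>"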

1.9.3.6. Creating a service principal

Because OpenShift Container Platform and its installation program must create Microsoft Azure resources through Azure Resource Manager, you must create a service principal to represent it.

Prerequisites

  • Install or update the Azure CLI.
  • Install the jq package.
  • Your Azure account has the required roles for the subscription that you use.

Procedure

  1. Log in to the Azure CLI:

    $ az login

    Log in to Azure in the web console by using your credentials.

  2. If your Azure account uses subscriptions, ensure that you are using the right subscription.

    1. View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster:

      $ az account list --refresh

      Example output

      [
        {
          "cloudName": "AzureCloud",
          "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
          "isDefault": true,
          "name": "Subscription Name",
          "state": "Enabled",
          "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
          "user": {
            "name": "you@example.com",
            "type": "user"
          }
        }
      ]

    2. View your active account details and confirm that the tenantId value matches the subscription you want to use:

      $ az account show

      Example output

      {
        "environmentName": "AzureCloud",
        "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
        "isDefault": true,
        "name": "Subscription Name",
        "state": "Enabled",
        "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1
        "user": {
          "name": "you@example.com",
          "type": "user"
        }
      }

      1
      Ensure that the value of the tenantId parameter is the UUID of the correct subscription.
    3. If you are not using the right subscription, change the active subscription:

      $ az account set -s <id> 1
      1
      Substitute the value of the id for the subscription that you want to use for <id>.
    4. If you changed the active subscription, display your account information again:

      $ az account show

      Example output

      {
        "environmentName": "AzureCloud",
        "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
        "isDefault": true,
        "name": "Subscription Name",
        "state": "Enabled",
        "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
        "user": {
          "name": "you@example.com",
          "type": "user"
        }
      }

  3. Record the values of the tenantId and id parameters from the previous output. You need these values during OpenShift Container Platform installation.
  4. Create the service principal for your account:

    $ az ad sp create-for-rbac --role Contributor --name <service_principal> 1
    1
    Replace <service_principal> with the name to assign to the service principal.

    Example output

    Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names
    Retrying role assignment creation: 1/36
    Retrying role assignment creation: 2/36
    Retrying role assignment creation: 3/36
    Retrying role assignment creation: 4/36
    {
      "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
      "displayName": "<service_principal>",
      "name": "http://<service_principal>",
      "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
      "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee"
    }

  5. Record the values of the appId and password parameters from the previous output. You need these values during OpenShift Container Platform installation.
  6. Grant additional permissions to the service principal.

    • You must always add the Contributor and User Access Administrator roles to the app registration service principal so the cluster can assign credentials for its components.
    • To operate the Cloud Credential Operator (CCO) in mint mode, the app registration service principal also requires the Azure Active Directory Graph/Application.ReadWrite.OwnedBy API permission.
    • To operate the CCO in passthrough mode, the app registration service principal does not require additional API permissions.

    For more information about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

    1. To assign the User Access Administrator role, run the following command:

      $ az role assignment create --role "User Access Administrator" \
          --assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \
             | jq '.[0].id' -r) 1
      1
      Replace <appId> with the appId parameter value for your service principal.
    2. To assign the Azure Active Directory Graph permission, run the following command:

      $ az ad app permission add --id <appId> \ 1
           --api 00000002-0000-0000-c000-000000000000 \
           --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role
      1
      Replace <appId> with the appId parameter value for your service principal.

      Example output

      Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective

      For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions.

    3. Approve the permissions request. If your account does not have the Azure Active Directory tenant administrator role, follow the guidelines for your organization to request that the tenant administrator approve your permissions request.

      $ az ad app permission grant --id <appId> \ 1
           --api 00000002-0000-0000-c000-000000000000
      1
      Replace <appId> with the appId parameter value for your service principal.

1.9.3.7. Supported Azure regions

The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. The following Azure regions were tested and validated in OpenShift Container Platform version 4.6.1:

Supported Azure public regions
  • australiacentral (Australia Central)
  • australiaeast (Australia East)
  • australiasoutheast (Australia South East)
  • brazilsouth (Brazil South)
  • canadacentral (Canada Central)
  • canadaeast (Canada East)
  • centralindia (Central India)
  • centralus (Central US)
  • eastasia (East Asia)
  • eastus (East US)
  • eastus2 (East US 2)
  • francecentral (France Central)
  • germanywestcentral (Germany West Central)
  • japaneast (Japan East)
  • japanwest (Japan West)
  • koreacentral (Korea Central)
  • koreasouth (Korea South)
  • northcentralus (North Central US)
  • northeurope (North Europe)
  • norwayeast (Norway East)
  • southafricanorth (South Africa North)
  • southcentralus (South Central US)
  • southeastasia (Southeast Asia)
  • southindia (South India)
  • switzerlandnorth (Switzerland North)
  • uaenorth (UAE North)
  • uksouth (UK South)
  • ukwest (UK West)
  • westcentralus (West Central US)
  • westeurope (West Europe)
  • westindia (West India)
  • westus (West US)
  • westus2 (West US 2)
Supported Azure Government regions

Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6:

  • usgovtexas (US Gov Texas)
  • usgovvirginia (US Gov Virginia)

You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested.

1.9.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.9.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

Note

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' \
        -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    Running this command generates an SSH key that does not require a password in the location that you specified.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"

    Example output

    Agent pid 31874

    Note

    If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.

1.9.6. Creating the installation files for Azure

To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.9.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

  • /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
  • /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
  • /var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

Important

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

  1. Create a directory to hold the OpenShift Container Platform installation files:

    $ mkdir $HOME/clusterconfig
  2. Run openshift-install to create a set of files in the manifests and openshift subdirectories. Answer the system questions as you are prompted:

    $ openshift-install create manifests --dir $HOME/clusterconfig

    Example output

    ? SSH Public Key ...
    INFO Consuming Install Config from target directory
    INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift

  3. Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory:

    $ ls $HOME/clusterconfig/openshift/

    Example output

    99_kubeadmin-password-secret.yaml
    99_openshift-cluster-api_master-machines-0.yaml
    99_openshift-cluster-api_master-machines-1.yaml
    99_openshift-cluster-api_master-machines-2.yaml
    ...

  4. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 98-var-partition
    spec:
      config:
        ignition:
          version: 3.1.0
        storage:
          disks:
          - device: /dev/<device_name> 1
            partitions:
            - label: var
              startMiB: <partition_start_offset> 2
              sizeMiB: <partition_size> 3
          filesystems:
            - device: /dev/disk/by-partlabel/var
              path: /var
              format: xfs
        systemd:
          units:
            - name: var.mount 4
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                What=/dev/disk/by-partlabel/var
                Where=/var
                Options=defaults,prjquota 5
                [Install]
                WantedBy=local-fs.target
    1
    The storage device name of the disk that you want to partition.
    2
    When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
    3
    The size of the data partition in mebibytes.
    4
    The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers, the unit must be named var-lib-containers.mount.
    5
    The prjquota mount option must be enabled for filesystems used for container storage.
    Note

    When creating a separate /var partition, you cannot use different instance types for worker nodes if those instance types do not have the same device name.

  5. Run openshift-install again to create Ignition configs from a set of files in the manifests and openshift subdirectories:

    $ openshift-install create ignition-configs --dir $HOME/clusterconfig
    $ ls $HOME/clusterconfig/
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

1.9.6.2. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select azure as the platform to target.
      3. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal (a lookup sketch follows this procedure):

        • azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
        • azure tenant id: The tenant ID. Specify the tenantId value in your account output.
        • azure service principal client id: The value of the appId parameter for the service principal.
        • azure service principal client secret: The value of the password parameter for the service principal.
      4. Select the region to deploy the cluster to.
      5. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
      6. Enter a descriptive name for your cluster.

        Important

        All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
    3. Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool:

      compute:
      - hyperthreading: Enabled
        name: worker
        platform: {}
        replicas: 0 1
      1
      Set to 0.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
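
If you need to look up the subscription and tenant IDs requested at the prompts, the following is a minimal sketch that assumes you are logged in with the Azure CLI. The service principal appId and password values come from the output that was displayed when the service principal was created and cannot be retrieved with this command:

$ az account show --query '{subscription_id: id, tenant_id: tenantId}'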

1.9.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster and that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it has a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
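
After the cluster is running, you can optionally inspect the resulting Proxy object. This is a minimal sketch that assumes the oc client is installed and your KUBECONFIG points at the cluster:

$ oc get proxy cluster -o yaml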

1.9.6.4. Exporting common variables for ARM templates

You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates to complete a user-provisioned infrastructure installation on Microsoft Azure.

Note

Specific ARM templates can also require additional exported variables, which are detailed in their related procedures.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Export common variables found in the install-config.yaml to be used by the provided ARM templates:

    $ export CLUSTER_NAME=<cluster_name> 1
    $ export AZURE_REGION=<azure_region> 2
    $ export SSH_KEY=<ssh_key> 3
    $ export BASE_DOMAIN=<base_domain> 4
    $ export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5
    1
    The value of the .metadata.name attribute from the install-config.yaml file.
    2
    The region to deploy the cluster into, for example centralus. This is the value of the .platform.azure.region attribute from the install-config.yaml file.
    3
    The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file.
    4
    The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file.
    5
    The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file.

    For example:

    $ export CLUSTER_NAME=test-cluster
    $ export AZURE_REGION=centralus
    $ export SSH_KEY="ssh-rsa xxx/xxx/xxx= user@email.com"
    $ export BASE_DOMAIN=example.com
    $ export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster
  2. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

1.9.6.5. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure its machines.

The installation program transforms the installation configuration file into Kubernetes manifests. The manifests are wrapped into the Ignition configuration files, which are later used to create the cluster.

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • You obtained the OpenShift Container Platform installation program.
  • You created the install-config.yaml installation configuration file.

Procedure

  1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

    By removing these files, you prevent the cluster from automatically generating control plane machines.

  3. Remove the Kubernetes manifest files that define the worker machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage the worker machines yourself, you do not need to initialize these machines.

  4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and ensure that it is set to false.
    3. Save and exit the file.
  5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      baseDomain: example.openshift.com
      privateZone: 1
        id: mycluster-100419-private-zone
      publicZone: 2
        id: example.openshift.com
    status: {}
    1 2
    Remove this section completely.

    If you do so, you must add ingress DNS records manually in a later step.

  6. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates:

    1. Export the infrastructure ID by using the following command:

      $ export INFRA_ID=<infra_id> 1
      1
      The OpenShift Container Platform cluster has been assigned an identifier (INFRA_ID) in the form of <cluster_name>-<random_string>. This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file.
    2. Export the resource group by using the following command:

      $ export RESOURCE_GROUP=<resource_group> 1
      1
      All resources created in this Azure deployment exist as part of a resource group. The resource group name is also based on the INFRA_ID, in the form of <cluster_name>-<random_string>-rg. This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. A sketch of one way to extract both values follows this procedure.
  7. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    The following files are generated in the directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
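
The following is a minimal sketch of one way to extract the INFRA_ID and RESOURCE_GROUP values described in step 6 directly from the manifest file. Run these commands before you create the Ignition config files, because that step consumes the manifest files. The awk patterns assume that each attribute name appears exactly once in the file; a YAML-aware tool such as yq is a more robust choice if it is available:

$ export INFRA_ID=$(awk '$1 == "infrastructureName:" {print $2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)
$ export RESOURCE_GROUP=$(awk '$1 == "resourceGroupName:" {print $2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)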

1.9.7. Creating the Azure resource group and identity

You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.

Procedure

  1. Create the resource group in a supported Azure region:

    $ az group create --name ${RESOURCE_GROUP} --location ${AZURE_REGION}
  2. Create an Azure identity for the resource group:

    $ az identity create -g ${RESOURCE_GROUP} -n ${INFRA_ID}-identity

    This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role.

  3. Grant the Contributor role to the Azure identity:

    1. Export the following variables required by the Azure role assignment:

      $ export PRINCIPAL_ID=`az identity show -g ${RESOURCE_GROUP} -n ${INFRA_ID}-identity --query principalId --out tsv`
      $ export RESOURCE_GROUP_ID=`az group show -g ${RESOURCE_GROUP} --query id --out tsv`
    2. Assign the Contributor role to the identity:

      $ az role assignment create --assignee "${PRINCIPAL_ID}" --role 'Contributor' --scope "${RESOURCE_GROUP_ID}"
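
Optionally, confirm that the role was assigned. This is a minimal sketch that reuses the variables exported above; the output should include Contributor:

$ az role assignment list --assignee "${PRINCIPAL_ID}" --scope "${RESOURCE_GROUP_ID}" --query "[].roleDefinitionName" -o tsv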

1.9.8. Uploading the RHCOS cluster image and bootstrap Ignition config file

The Azure client does not support deployments based on files existing locally; therefore, you must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.

Procedure

  1. Create an Azure storage account to store the VHD cluster image:

    $ az storage account create -g ${RESOURCE_GROUP} --location ${AZURE_REGION} --name ${CLUSTER_NAME}sa --kind Storage --sku Standard_LRS
    Warning

    The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. A sketch of one way to derive a compliant name follows this procedure.

  2. Export the storage account key as an environment variable:

    $ export ACCOUNT_KEY=`az storage account keys list -g ${RESOURCE_GROUP} --account-name ${CLUSTER_NAME}sa --query "[0].value" -o tsv`
  3. Choose the RHCOS version to use and export the URL of its VHD to an environment variable:

    $ export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.6/data/data/rhcos.json | jq -r .azure.url`
    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.

  4. Copy the chosen VHD to a blob:

    $ az storage container create --name vhd --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY}
    $ az storage blob copy start --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "${VHD_URL}"

    To track the progress of the VHD copy task, run this script:

    status="unknown"
    while [ "$status" != "success" ]
    do
      status=`az storage blob show --container-name vhd --name "rhcos.vhd" --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -o tsv --query properties.copy.status`
      echo $status
    done
  5. Create a blob storage container and upload the generated bootstrap.ign file:

    $ az storage container create --name files --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --public-access blob
    $ az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign"
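
If your cluster name would produce an invalid storage account name, the following is a minimal sketch of one way to derive a compliant name. SA_NAME is a hypothetical variable that is not used elsewhere in this document; if you use it, substitute it for ${CLUSTER_NAME}sa in the storage commands in this procedure:

$ export SA_NAME=$(echo "${CLUSTER_NAME}sa" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9' | cut -c1-24)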

1.9.9. Example for creating DNS zones

DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario.

For this example, Azure’s DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution.

Note

The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.

Procedure

  1. Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable:

    $ az network dns zone create -g ${BASE_DOMAIN_RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}

    You can skip this step if you are using a public DNS zone that already exists.

  2. Create the private DNS zone in the same resource group as the rest of this deployment:

    $ az network private-dns zone create -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}

You can learn more about configuring a public DNS zone in Azure by visiting that section.

1.9.10. Creating a VNet in Azure

You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template.

Note

If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.

Procedure

  1. Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster’s installation directory. This template describes the VNet that your cluster requires.
  2. Create the deployment by using the az CLI:

    $ az deployment group create -g ${RESOURCE_GROUP} \
      --template-file "<installation_directory>/01_vnet.json" \
      --parameters baseName="${INFRA_ID}" 1
    1
    The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
  3. Link the VNet template to the private DNS zone:

    $ az network private-dns link vnet create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n ${INFRA_ID}-network-link -v "${INFRA_ID}-vnet" -e false
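
Optionally, verify that the VNet and its subnets were created. This is a minimal sketch using the az CLI; the output should list the ${INFRA_ID}-master-subnet and ${INFRA_ID}-worker-subnet subnets:

$ az network vnet show -g ${RESOURCE_GROUP} -n "${INFRA_ID}-vnet" --query "subnets[].name" -o tsv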

1.9.10.1. ARM template for the VNet

You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster:

Example 1.1. 01_vnet.json ARM template

{
  "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion" : "1.0.0.0",
  "parameters" : {
    "baseName" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
      }
    }
  },
  "variables" : {
    "location" : "[resourceGroup().location]",
    "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
    "addressPrefix" : "10.0.0.0/16",
    "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
    "masterSubnetPrefix" : "10.0.0.0/24",
    "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
    "nodeSubnetPrefix" : "10.0.1.0/24",
    "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]"
  },
  "resources" : [
    {
      "apiVersion" : "2018-12-01",
      "type" : "Microsoft.Network/virtualNetworks",
      "name" : "[variables('virtualNetworkName')]",
      "location" : "[variables('location')]",
      "dependsOn" : [
        "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]"
      ],
      "properties" : {
        "addressSpace" : {
          "addressPrefixes" : [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets" : [
          {
            "name" : "[variables('masterSubnetName')]",
            "properties" : {
              "addressPrefix" : "[variables('masterSubnetPrefix')]",
              "serviceEndpoints": [],
              "networkSecurityGroup" : {
                "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]"
              }
            }
          },
          {
            "name" : "[variables('nodeSubnetName')]",
            "properties" : {
              "addressPrefix" : "[variables('nodeSubnetPrefix')]",
              "serviceEndpoints": [],
              "networkSecurityGroup" : {
                "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]"
              }
            }
          }
        ]
      }
    },
    {
      "type" : "Microsoft.Network/networkSecurityGroups",
      "name" : "[variables('clusterNsgName')]",
      "apiVersion" : "2018-10-01",
      "location" : "[variables('location')]",
      "properties" : {
        "securityRules" : [
          {
            "name" : "apiserver_in",
            "properties" : {
              "protocol" : "Tcp",
              "sourcePortRange" : "*",
              "destinationPortRange" : "6443",
              "sourceAddressPrefix" : "*",
              "destinationAddressPrefix" : "*",
              "access" : "Allow",
              "priority" : 101,
              "direction" : "Inbound"
            }
          }
        ]
      }
    }
  ]
}

1.9.11. Deploying the RHCOS cluster image for the Azure infrastructure

You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.
  • Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container.
  • Store the bootstrap Ignition config file in an Azure storage container.

Procedure

  1. Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster’s installation directory. This template describes the image storage that your cluster requires.
  2. Export the RHCOS VHD blob URL as a variable:

    $ export VHD_BLOB_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv`
  3. Deploy the cluster image:

    $ az deployment group create -g ${RESOURCE_GROUP} \
      --template-file "<installation_directory>/02_storage.json" \
      --parameters vhdBlobURL="${VHD_BLOB_URL}" \ 1
      --parameters baseName="${INFRA_ID}" 2
    1
    The blob URL of the RHCOS VHD to be used to create master and worker machines.
    2
    The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

1.9.11.1. ARM template for image storage

You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster:

Example 1.2. 02_storage.json ARM template

{
  "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion" : "1.0.0.0",
  "parameters" : {
    "baseName" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
      }
    },
    "vhdBlobURL" : {
      "type" : "string",
      "metadata" : {
        "description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located"
      }
    }
  },
  "variables" : {
    "location" : "[resourceGroup().location]",
    "imageName" : "[concat(parameters('baseName'), '-image')]"
  },
  "resources" : [
    {
      "apiVersion" : "2018-06-01",
      "type": "Microsoft.Compute/images",
      "name": "[variables('imageName')]",
      "location" : "[variables('location')]",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "osType": "Linux",
            "osState": "Generalized",
            "blobUri": "[parameters('vhdBlobURL')]",
            "storageAccountType": "Standard_LRS"
          }
        }
      }
    }
  ]
}

1.9.12. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files from the machine config server.

You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

Table 1.29. All machines to all machines

Protocol    Port           Description
ICMP        N/A            Network reachability tests
TCP         1936           Metrics
            9000-9999      Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
            10250-10259    The default ports that Kubernetes reserves
            10256          openshift-sdn
UDP         4789           VXLAN and Geneve
            6081           VXLAN and Geneve
            9000-9999      Host level services, including the node exporter on ports 9100-9101.
TCP/UDP     30000-32767    Kubernetes node port

Table 1.30. All machines to control plane

Protocol    Port    Description
TCP         6443    Kubernetes API

Table 1.31. Control plane machines to control plane machines

Protocol    Port         Description
TCP         2379-2380    etcd server and peer ports

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

Important

OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Load balancers

Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.
    Important

    Do not configure session persistence for an API load balancer.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.32. API load balancer

    Port: 6443
    Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.
    Internal: X
    External: X
    Description: Kubernetes API server

    Port: 22623
    Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.
    Internal: X
    External:
    Description: Machine config server

    Note

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within that time frame after /readyz starts returning an error or becomes healthy again, the endpoint must be removed from or added to the pool. Probe intervals of 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. A sketch of a manual /readyz check appears after this list.

  2. Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.
    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.33. Application Ingress load balancer

    Port: 443
    Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.
    Internal: X
    External: X
    Description: HTTPS traffic

    Port: 80
    Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.
    Internal: X
    External: X
    Description: HTTP traffic

Tip

If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Note

A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
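
After the API load balancer is in place and the control plane machines begin to come up, you can manually check the health endpoint that the probes use. This is a minimal sketch; the -k option is used because the API server certificate might not yet be trusted by your local system, and a response of ok indicates that the instance is ready to serve traffic:

$ curl -k "https://api.${CLUSTER_NAME}.${BASE_DOMAIN}:6443/readyz"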

1.9.13. Creating networking and load balancing components in Azure

You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template.

Note

If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VNet and associated subnets in Azure.

Procedure

  1. Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster’s installation directory. This template describes the networking and load balancing objects that your cluster requires.
  2. Create the deployment by using the az CLI:

    $ az deployment group create -g ${RESOURCE_GROUP} \
      --template-file "<installation_directory>/03_infra.json" \
      --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 1
      --parameters baseName="${INFRA_ID}" 2
    1
    The name of the private DNS zone.
    2
    The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
  3. Create an api DNS record in the public zone for the API public load balancer. The ${BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists.

    1. Export the following variable:

      $ export PUBLIC_IP=`az network public-ip list -g ${RESOURCE_GROUP} --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv`
    2. Create the DNS record in a new public zone:

      $ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n api -a ${PUBLIC_IP} --ttl 60
    3. If you are adding the cluster to an existing public zone, you can create the DNS record in it instead:

      $ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n api.${CLUSTER_NAME} -a ${PUBLIC_IP} --ttl 60

1.9.13.1. ARM template for the network and load balancers

You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster:

Example 1.3. 03_infra.json ARM template

{
  "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion" : "1.0.0.0",
  "parameters" : {
    "baseName" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
      }
    },
    "privateDNSZoneName" : {
      "type" : "string",
      "metadata" : {
        "description" : "Name of the private DNS zone"
      }
    }
  },
  "variables" : {
    "location" : "[resourceGroup().location]",
    "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
    "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
    "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
    "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
    "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]",
    "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]",
    "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
    "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]",
    "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
    "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]",
    "skuName": "Standard"
  },
  "resources" : [
    {
      "apiVersion" : "2018-12-01",
      "type" : "Microsoft.Network/publicIPAddresses",
      "name" : "[variables('masterPublicIpAddressName')]",
      "location" : "[variables('location')]",
      "sku": {
        "name": "[variables('skuName')]"
      },
      "properties" : {
        "publicIPAllocationMethod" : "Static",
        "dnsSettings" : {
          "domainNameLabel" : "[variables('masterPublicIpAddressName')]"
        }
      }
    },
    {
      "apiVersion" : "2018-12-01",
      "type" : "Microsoft.Network/loadBalancers",
      "name" : "[variables('masterLoadBalancerName')]",
      "location" : "[variables('location')]",
      "sku": {
        "name": "[variables('skuName')]"
      },
      "dependsOn" : [
        "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]"
      ],
      "properties" : {
        "frontendIPConfigurations" : [
          {
            "name" : "public-lb-ip",
            "properties" : {
              "publicIPAddress" : {
                "id" : "[variables('masterPublicIpAddressID')]"
              }
            }
          }
        ],
        "backendAddressPools" : [
          {
            "name" : "public-lb-backend"
          }
        ],
        "loadBalancingRules" : [
          {
            "name" : "api-internal",
            "properties" : {
              "frontendIPConfiguration" : {
                "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]"
              },
              "backendAddressPool" : {
                "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]"
              },
              "protocol" : "Tcp",
              "loadDistribution" : "Default",
              "idleTimeoutInMinutes" : 30,
              "frontendPort" : 6443,
              "backendPort" : 6443,
              "probe" : {
                "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]"
              }
            }
          }
        ],
        "probes" : [
          {
            "name" : "api-internal-probe",
            "properties" : {
              "protocol" : "Https",
              "port" : 6443,
              "requestPath": "/readyz",
              "intervalInSeconds" : 10,
              "numberOfProbes" : 3
            }
          }
        ]
      }
    },
    {
      "apiVersion" : "2018-12-01",
      "type" : "Microsoft.Network/loadBalancers",
      "name" : "[variables('internalLoadBalancerName')]",
      "location" : "[variables('location')]",
      "sku": {
        "name": "[variables('skuName')]"
      },
      "properties" : {
        "frontendIPConfigurations" : [
          {
            "name" : "internal-lb-ip",
            "properties" : {
              "privateIPAllocationMethod" : "Dynamic",
              "subnet" : {
                "id" : "[variables('masterSubnetRef')]"
              },
              "privateIPAddressVersion" : "IPv4"
            }
          }
        ],
        "backendAddressPools" : [
          {
            "name" : "internal-lb-backend"
          }
        ],
        "loadBalancingRules" : [
          {
            "name" : "api-internal",
            "properties" : {
              "frontendIPConfiguration" : {
                "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]"
              },
              "frontendPort" : 6443,
              "backendPort" : 6443,
              "enableFloatingIP" : false,
              "idleTimeoutInMinutes" : 30,
              "protocol" : "Tcp",
              "enableTcpReset" : false,
              "loadDistribution" : "Default",
              "backendAddressPool" : {
                "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]"
              },
              "probe" : {
                "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]"
              }
            }
          },
          {
            "name" : "sint",
            "properties" : {
              "frontendIPConfiguration" : {
                "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]"
              },
              "frontendPort" : 22623,
              "backendPort" : 22623,
              "enableFloatingIP" : false,
              "idleTimeoutInMinutes" : 30,
              "protocol" : "Tcp",
              "enableTcpReset" : false,
              "loadDistribution" : "Default",
              "backendAddressPool" : {
                "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]"
              },
              "probe" : {
                "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]"
              }
            }
          }
        ],
        "probes" : [
          {
            "name" : "api-internal-probe",
            "properties" : {
              "protocol" : "Https",
              "port" : 6443,
              "requestPath": "/readyz",
              "intervalInSeconds" : 10,
              "numberOfProbes" : 3
            }
          },
          {
            "name" : "sint-probe",
            "properties" : {
              "protocol" : "Https",
              "port" : 22623,
              "requestPath": "/healthz",
              "intervalInSeconds" : 10,
              "numberOfProbes" : 3
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2018-09-01",
      "type": "Microsoft.Network/privateDnsZones/A",
      "name": "[concat(parameters('privateDNSZoneName'), '/api')]",
      "location" : "[variables('location')]",
      "dependsOn" : [
        "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]"
      ],
      "properties": {
        "ttl": 60,
        "aRecords": [
          {
            "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]"
          }
        ]
      }
    },
    {
      "apiVersion": "2018-09-01",
      "type": "Microsoft.Network/privateDnsZones/A",
      "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]",
      "location" : "[variables('location')]",
      "dependsOn" : [
        "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]"
      ],
      "properties": {
        "ttl": 60,
        "aRecords": [
          {
            "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]"
          }
        ]
      }
    }
  ]
}

1.9.14. Creating the bootstrap machine in Azure

You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template.

Note

If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VNet and associated subnets in Azure.
  • Create and configure networking and load balancers in Azure.
  • Create control plane and compute roles.

Procedure

  1. Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster’s installation directory. This template describes the bootstrap machine that your cluster requires.
  2. Export the following variables required by the bootstrap machine deployment:

    $ export BOOTSTRAP_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -n "bootstrap.ign" -o tsv`
    $ export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.1.0" --arg url ${BOOTSTRAP_URL} '{ignition:{version:$v,config:{replace:{source:$url}}}}' | base64 | tr -d '\n'`
  3. Create the deployment by using the az CLI:

    $ az deployment group create -g ${RESOURCE_GROUP} \
      --template-file "<installation_directory>/04_bootstrap.json" \
      --parameters bootstrapIgnition="${BOOTSTRAP_IGNITION}" \ 1
      --parameters sshKeyData="${SSH_KEY}" \ 2
      --parameters baseName="${INFRA_ID}" 3
    1
    The bootstrap Ignition content for the bootstrap cluster.
    2
    The SSH RSA public key file as a string.
    3
    The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
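
Optionally, confirm that the bootstrap virtual machine was provisioned successfully. This is a minimal sketch using the az CLI; the expected output is Succeeded:

$ az vm show -g ${RESOURCE_GROUP} -n "${INFRA_ID}-bootstrap" --query provisioningState -o tsv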

1.9.14.1. ARM template for the bootstrap machine

You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster:

Example 1.4. 04_bootstrap.json ARM template

{
  "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion" : "1.0.0.0",
  "parameters" : {
    "baseName" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
      }
    },
    "bootstrapIgnition" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Bootstrap ignition content for the bootstrap cluster"
      }
    },
    "sshKeyData" : {
      "type" : "securestring",
      "metadata" : {
        "description" : "SSH RSA public key file as a string."
      }
    },
    "bootstrapVMSize" : {
      "type" : "string",
      "defaultValue" : "Standard_D4s_v3",
      "allowedValues" : [
        "Standard_A2",
        "Standard_A3",
        "Standard_A4",
        "Standard_A5",
        "Standard_A6",
        "Standard_A7",
        "Standard_A8",
        "Standard_A9",
        "Standard_A10",
        "Standard_A11",
        "Standard_D2",
        "Standard_D3",
        "Standard_D4",
        "Standard_D11",
        "Standard_D12",
        "Standard_D13",
        "Standard_D14",
        "Standard_D2_v2",
        "Standard_D3_v2",
        "Standard_D4_v2",
        "Standard_D5_v2",
        "Standard_D8_v3",
        "Standard_D11_v2",
        "Standard_D12_v2",
        "Standard_D13_v2",
        "Standard_D14_v2",
        "Standard_E2_v3",
        "Standard_E4_v3",
        "Standard_E8_v3",
        "Standard_E16_v3",
        "Standard_E32_v3",
        "Standard_E64_v3",
        "Standard_E2s_v3",
        "Standard_E4s_v3",
        "Standard_E8s_v3",
        "Standard_E16s_v3",
        "Standard_E32s_v3",
        "Standard_E64s_v3",
        "Standard_G1",
        "Standard_G2",
        "Standard_G3",
        "Standard_G4",
        "Standard_G5",
        "Standard_DS2",
        "Standard_DS3",
        "Standard_DS4",
        "Standard_DS11",
        "Standard_DS12",
        "Standard_DS13",
        "Standard_DS14",
        "Standard_DS2_v2",
        "Standard_DS3_v2",
        "Standard_DS4_v2",
        "Standard_DS5_v2",
        "Standard_DS11_v2",
        "Standard_DS12_v2",
        "Standard_DS13_v2",
        "Standard_DS14_v2",
        "Standard_GS1",
        "Standard_GS2",
        "Standard_GS3",
        "Standard_GS4",
        "Standard_GS5",
        "Standard_D2s_v3",
        "Standard_D4s_v3",
        "Standard_D8s_v3"
      ],
      "metadata" : {
        "description" : "The size of the Bootstrap Virtual Machine"
      }
    }
  },
  "variables" : {
    "location" : "[resourceGroup().location]",
    "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
    "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
    "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
    "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
    "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
    "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
    "sshKeyPath" : "/home/core/.ssh/authorized_keys",
    "identityName" : "[concat(parameters('baseName'), '-identity')]",
    "vmName" : "[concat(parameters('baseName'), '-bootstrap')]",
    "nicName" : "[concat(variables('vmName'), '-nic')]",
    "imageName" : "[concat(parameters('baseName'), '-image')]",
    "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]",
    "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]"
  },
  "resources" : [
    {
      "apiVersion" : "2018-12-01",
      "type" : "Microsoft.Network/publicIPAddresses",
      "name" : "[variables('sshPublicIpAddressName')]",
      "location" : "[variables('location')]",
      "sku": {
        "name": "Standard"
      },
      "properties" : {
        "publicIPAllocationMethod" : "Static",
        "dnsSettings" : {
          "domainNameLabel" : "[variables('sshPublicIpAddressName')]"
        }
      }
    },
    {
      "apiVersion" : "2018-06-01",
      "type" : "Microsoft.Network/networkInterfaces",
      "name" : "[variables('nicName')]",
      "location" : "[variables('location')]",
      "dependsOn" : [
        "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
      ],
      "properties" : {
        "ipConfigurations" : [
          {
            "name" : "pipConfig",
            "properties" : {
              "privateIPAllocationMethod" : "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
              },
              "subnet" : {
                "id" : "[variables('masterSubnetRef')]"
              },
              "loadBalancerBackendAddressPools" : [
                {
                  "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]"
                },
                {
                  "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]"
                }
              ]
            }
          }
        ]
      }
    },
    {
      "apiVersion" : "2018-06-01",
      "type" : "Microsoft.Compute/virtualMachines",
      "name" : "[variables('vmName')]",
      "location" : "[variables('location')]",
      "identity" : {
        "type" : "userAssigned",
        "userAssignedIdentities" : {
          "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {}
        }
      },
      "dependsOn" : [
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
      ],
      "properties" : {
        "hardwareProfile" : {
          "vmSize" : "[parameters('bootstrapVMSize')]"
        },
        "osProfile" : {
          "computerName" : "[variables('vmName')]",
          "adminUsername" : "core",
          "customData" : "[parameters('bootstrapIgnition')]",
          "linuxConfiguration" : {
            "disablePasswordAuthentication" : true,
            "ssh" : {
              "publicKeys" : [
                {
                  "path" : "[variables('sshKeyPath')]",
                  "keyData" : "[parameters('sshKeyData')]"
                }
              ]
            }
          }
        },
        "storageProfile" : {
          "imageReference": {
            "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
          },
          "osDisk" : {
            "name": "[concat(variables('vmName'),'_OSDisk')]",
            "osType" : "Linux",
            "createOption" : "FromImage",
            "managedDisk": {
              "storageAccountType": "Premium_LRS"
            },
            "diskSizeGB" : 100
          }
        },
        "networkProfile" : {
          "networkInterfaces" : [
            {
              "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
            }
          ]
        }
      }
    },
    {
      "apiVersion" : "2018-06-01",
      "type": "Microsoft.Network/networkSecurityGroups/securityRules",
      "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]",
      "location" : "[variables('location')]",
      "dependsOn" : [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
      ],
      "properties": {
        "protocol" : "Tcp",
        "sourcePortRange" : "*",
        "destinationPortRange" : "22",
        "sourceAddressPrefix" : "*",
        "destinationAddressPrefix" : "*",
        "access" : "Allow",
        "priority" : 100,
        "direction" : "Inbound"
      }
    }
  ]
}

1.9.15. Creating the control plane machines in Azure

You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template.

Note

If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VNet and associated subnets in Azure.
  • Create and configure networking and load balancers in Azure.
  • Create control plane and compute roles.
  • Create the bootstrap machine.

Procedure

  1. Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster’s installation directory. This template describes the control plane machines that your cluster requires.
  2. Export the following variable needed by the control plane machine deployment:

    $ export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'`
  3. Create the deployment by using the az CLI:

    $ az deployment group create -g ${RESOURCE_GROUP} \
      --template-file "<installation_directory>/05_masters.json" \
      --parameters masterIgnition="${MASTER_IGNITION}" \ 1
      --parameters sshKeyData="${SSH_KEY}" \ 2
      --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 3
      --parameters baseName="${INFRA_ID}" 4
    1
    The Ignition content for the control plane nodes (also known as the master nodes).
    2
    The SSH RSA public key file as a string.
    3
    The name of the private DNS zone to which the control plane nodes are attached.
    4
    The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
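
After the deployment completes, you can optionally confirm that the control plane virtual machines were provisioned. This check is not required; it only lists the VMs in the resource group whose names contain the -master- suffix that the template generates:

$ az vm list -g ${RESOURCE_GROUP} --query "[?contains(name, 'master')].name" -o table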

1.9.15.1. ARM template for control plane machines

You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster:

Example 1.5. 05_masters.json ARM template

{
  "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion" : "1.0.0.0",
  "parameters" : {
    "baseName" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
      }
    },
    "masterIgnition" : {
      "type" : "string",
      "metadata" : {
        "description" : "Ignition content for the master nodes"
      }
    },
    "numberOfMasters" : {
      "type" : "int",
      "defaultValue" : 3,
      "minValue" : 2,
      "maxValue" : 30,
      "metadata" : {
        "description" : "Number of OpenShift masters to deploy"
      }
    },
    "sshKeyData" : {
      "type" : "securestring",
      "metadata" : {
        "description" : "SSH RSA public key file as a string"
      }
    },
    "privateDNSZoneName" : {
      "type" : "string",
      "metadata" : {
        "description" : "Name of the private DNS zone the master nodes are going to be attached to"
      }
    },
    "masterVMSize" : {
      "type" : "string",
      "defaultValue" : "Standard_D8s_v3",
      "allowedValues" : [
        "Standard_A2",
        "Standard_A3",
        "Standard_A4",
        "Standard_A5",
        "Standard_A6",
        "Standard_A7",
        "Standard_A8",
        "Standard_A9",
        "Standard_A10",
        "Standard_A11",
        "Standard_D2",
        "Standard_D3",
        "Standard_D4",
        "Standard_D11",
        "Standard_D12",
        "Standard_D13",
        "Standard_D14",
        "Standard_D2_v2",
        "Standard_D3_v2",
        "Standard_D4_v2",
        "Standard_D5_v2",
        "Standard_D8_v3",
        "Standard_D11_v2",
        "Standard_D12_v2",
        "Standard_D13_v2",
        "Standard_D14_v2",
        "Standard_E2_v3",
        "Standard_E4_v3",
        "Standard_E8_v3",
        "Standard_E16_v3",
        "Standard_E32_v3",
        "Standard_E64_v3",
        "Standard_E2s_v3",
        "Standard_E4s_v3",
        "Standard_E8s_v3",
        "Standard_E16s_v3",
        "Standard_E32s_v3",
        "Standard_E64s_v3",
        "Standard_G1",
        "Standard_G2",
        "Standard_G3",
        "Standard_G4",
        "Standard_G5",
        "Standard_DS2",
        "Standard_DS3",
        "Standard_DS4",
        "Standard_DS11",
        "Standard_DS12",
        "Standard_DS13",
        "Standard_DS14",
        "Standard_DS2_v2",
        "Standard_DS3_v2",
        "Standard_DS4_v2",
        "Standard_DS5_v2",
        "Standard_DS11_v2",
        "Standard_DS12_v2",
        "Standard_DS13_v2",
        "Standard_DS14_v2",
        "Standard_GS1",
        "Standard_GS2",
        "Standard_GS3",
        "Standard_GS4",
        "Standard_GS5",
        "Standard_D2s_v3",
        "Standard_D4s_v3",
        "Standard_D8s_v3"
      ],
      "metadata" : {
        "description" : "The size of the Master Virtual Machines"
      }
    },
    "diskSizeGB" : {
      "type" : "int",
      "defaultValue" : 1024,
      "metadata" : {
        "description" : "Size of the Master VM OS disk, in GB"
      }
    }
  },
  "variables" : {
    "location" : "[resourceGroup().location]",
    "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
    "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
    "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
    "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
    "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
    "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
    "sshKeyPath" : "/home/core/.ssh/authorized_keys",
    "identityName" : "[concat(parameters('baseName'), '-identity')]",
    "imageName" : "[concat(parameters('baseName'), '-image')]",
    "copy" : [
      {
        "name" : "vmNames",
        "count" :  "[parameters('numberOfMasters')]",
        "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]"
      }
    ]
  },
  "resources" : [
    {
      "apiVersion" : "2018-06-01",
      "type" : "Microsoft.Network/networkInterfaces",
      "copy" : {
        "name" : "nicCopy",
        "count" : "[length(variables('vmNames'))]"
      },
      "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]",
      "location" : "[variables('location')]",
      "properties" : {
        "ipConfigurations" : [
          {
            "name" : "pipConfig",
            "properties" : {
              "privateIPAllocationMethod" : "Dynamic",
              "subnet" : {
                "id" : "[variables('masterSubnetRef')]"
              },
              "loadBalancerBackendAddressPools" : [
                {
                  "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]"
                },
                {
                  "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]"
                }
              ]
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2018-09-01",
      "type": "Microsoft.Network/privateDnsZones/SRV",
      "name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]",
      "location" : "[variables('location')]",
      "properties": {
        "ttl": 60,
        "copy": [{
          "name": "srvRecords",
          "count": "[length(variables('vmNames'))]",
          "input": {
            "priority": 0,
            "weight" : 10,
            "port" : 2380,
            "target" : "[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]"
          }
        }]
      }
    },
    {
      "apiVersion": "2018-09-01",
      "type": "Microsoft.Network/privateDnsZones/A",
      "copy" : {
        "name" : "dnsCopy",
        "count" : "[length(variables('vmNames'))]"
      },
      "name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]",
      "location" : "[variables('location')]",
      "dependsOn" : [
        "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]"
      ],
      "properties": {
        "ttl": 60,
        "aRecords": [
          {
            "ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]"
          }
        ]
      }
    },
    {
      "apiVersion" : "2018-06-01",
      "type" : "Microsoft.Compute/virtualMachines",
      "copy" : {
        "name" : "vmCopy",
        "count" : "[length(variables('vmNames'))]"
      },
      "name" : "[variables('vmNames')[copyIndex()]]",
      "location" : "[variables('location')]",
      "identity" : {
        "type" : "userAssigned",
        "userAssignedIdentities" : {
          "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {}
        }
      },
      "dependsOn" : [
        "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]",
        "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]",
        "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]"
      ],
      "properties" : {
        "hardwareProfile" : {
          "vmSize" : "[parameters('masterVMSize')]"
        },
        "osProfile" : {
          "computerName" : "[variables('vmNames')[copyIndex()]]",
          "adminUsername" : "core",
          "customData" : "[parameters('masterIgnition')]",
          "linuxConfiguration" : {
            "disablePasswordAuthentication" : true,
            "ssh" : {
              "publicKeys" : [
                {
                  "path" : "[variables('sshKeyPath')]",
                  "keyData" : "[parameters('sshKeyData')]"
                }
              ]
            }
          }
        },
        "storageProfile" : {
          "imageReference": {
            "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
          },
          "osDisk" : {
            "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]",
            "osType" : "Linux",
            "createOption" : "FromImage",
            "caching": "ReadOnly",
            "writeAcceleratorEnabled": false,
            "managedDisk": {
              "storageAccountType": "Premium_LRS"
            },
            "diskSizeGB" : "[parameters('diskSizeGB')]"
          }
        },
        "networkProfile" : {
          "networkInterfaces" : [
            {
              "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]",
              "properties": {
                "primary": false
              }
            }
          ]
        }
      }
    }
  ]
}
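
In addition to the virtual machines, this template creates the _etcd-server-ssl._tcp SRV record and an etcd-<index> A record for each control plane node in the private DNS zone. If you want to confirm that those records exist after the deployment, you can optionally list the record sets in the zone. The following command only reads data and assumes the same CLUSTER_NAME and BASE_DOMAIN variables that are used elsewhere in this section:

$ az network private-dns record-set list -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -o table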

1.9.16. Wait for bootstrap completion and remove bootstrap resources in Azure

After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VNet and associated subnets in Azure.
  • Create and configure networking and load balancers in Azure.
  • Create control plane and compute roles.
  • Create the bootstrap machine.
  • Create the control plane machines.

Procedure

  1. Change to the directory that contains the installation program and run the following command:

    $ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1
        --log-level info 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    If the command exits without a FATAL warning, your production control plane has initialized.

  2. Delete the bootstrap resources:

    $ az network nsg rule delete -g ${RESOURCE_GROUP} --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in
    $ az vm stop -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
    $ az vm deallocate -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
    $ az vm delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap --yes
    $ az disk delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes
    $ az network nic delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-nic --no-wait
    $ az storage blob delete --account-key ${ACCOUNT_KEY} --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign
    $ az network public-ip delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-ssh-pip
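
Optionally, after the delete commands finish, you can confirm that no bootstrap resources remain in the resource group. The following query is only a convenience check; it lists any resources whose names still contain the bootstrap suffix:

$ az resource list -g ${RESOURCE_GROUP} --query "[?contains(name, 'bootstrap')].{name:name, type:type}" -o table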

1.9.17. Creating additional worker machines in Azure

You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform.

In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources in the 06_workers.json file or by increasing the value of the numberOfNodes parameter when you create the deployment, as shown in the sketch after the following procedure.

Note

If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure an Azure account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VNet and associated subnets in Azure.
  • Create and configure networking and load balancers in Azure.
  • Create control plane and compute roles.
  • Create the bootstrap machine.
  • Create the control plane machines.

Procedure

  1. Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster’s installation directory. This template describes the worker machines that your cluster requires.
  2. Export the following variable needed by the worker machine deployment:

    $ export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'`
  3. Create the deployment by using the az CLI:

    $ az deployment group create -g ${RESOURCE_GROUP} \
      --template-file "<installation_directory>/06_workers.json" \
      --parameters workerIgnition="${WORKER_IGNITION}" \ 1
      --parameters sshKeyData="${SSH_KEY}" \ 2
      --parameters baseName="${INFRA_ID}" 3
    1
    The Ignition content for the worker nodes.
    2
    The SSH RSA public key file as a string.
    3
    The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
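
The template deploys the number of compute machines that the numberOfNodes parameter specifies, which defaults to three. If you want to provision more workers from the start, you can pass a different value when you create the deployment. For example, the following variant of the command, with the value 5 used only as an illustration, deploys five workers:

$ az deployment group create -g ${RESOURCE_GROUP} \
    --template-file "<installation_directory>/06_workers.json" \
    --parameters workerIgnition="${WORKER_IGNITION}" \
    --parameters sshKeyData="${SSH_KEY}" \
    --parameters baseName="${INFRA_ID}" \
    --parameters numberOfNodes=5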

1.9.17.1. ARM template for worker machines

You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster:

Example 1.6. 06_workers.json ARM template

{
  "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion" : "1.0.0.0",
  "parameters" : {
    "baseName" : {
      "type" : "string",
      "minLength" : 1,
      "metadata" : {
        "description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
      }
    },
    "workerIgnition" : {
      "type" : "string",
      "metadata" : {
        "description" : "Ignition content for the worker nodes"
      }
    },
    "numberOfNodes" : {
      "type" : "int",
      "defaultValue" : 3,
      "minValue" : 2,
      "maxValue" : 30,
      "metadata" : {
        "description" : "Number of OpenShift compute nodes to deploy"
      }
    },
    "sshKeyData" : {
      "type" : "securestring",
      "metadata" : {
        "description" : "SSH RSA public key file as a string"
      }
    },
    "nodeVMSize" : {
      "type" : "string",
      "defaultValue" : "Standard_D4s_v3",
      "allowedValues" : [
        "Standard_A2",
        "Standard_A3",
        "Standard_A4",
        "Standard_A5",
        "Standard_A6",
        "Standard_A7",
        "Standard_A8",
        "Standard_A9",
        "Standard_A10",
        "Standard_A11",
        "Standard_D2",
        "Standard_D3",
        "Standard_D4",
        "Standard_D11",
        "Standard_D12",
        "Standard_D13",
        "Standard_D14",
        "Standard_D2_v2",
        "Standard_D3_v2",
        "Standard_D4_v2",
        "Standard_D5_v2",
        "Standard_D8_v3",
        "Standard_D11_v2",
        "Standard_D12_v2",
        "Standard_D13_v2",
        "Standard_D14_v2",
        "Standard_E2_v3",
        "Standard_E4_v3",
        "Standard_E8_v3",
        "Standard_E16_v3",
        "Standard_E32_v3",
        "Standard_E64_v3",
        "Standard_E2s_v3",
        "Standard_E4s_v3",
        "Standard_E8s_v3",
        "Standard_E16s_v3",
        "Standard_E32s_v3",
        "Standard_E64s_v3",
        "Standard_G1",
        "Standard_G2",
        "Standard_G3",
        "Standard_G4",
        "Standard_G5",
        "Standard_DS2",
        "Standard_DS3",
        "Standard_DS4",
        "Standard_DS11",
        "Standard_DS12",
        "Standard_DS13",
        "Standard_DS14",
        "Standard_DS2_v2",
        "Standard_DS3_v2",
        "Standard_DS4_v2",
        "Standard_DS5_v2",
        "Standard_DS11_v2",
        "Standard_DS12_v2",
        "Standard_DS13_v2",
        "Standard_DS14_v2",
        "Standard_GS1",
        "Standard_GS2",
        "Standard_GS3",
        "Standard_GS4",
        "Standard_GS5",
        "Standard_D2s_v3",
        "Standard_D4s_v3",
        "Standard_D8s_v3"
      ],
      "metadata" : {
        "description" : "The size of the each Node Virtual Machine"
      }
    }
  },
  "variables" : {
    "location" : "[resourceGroup().location]",
    "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
    "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
    "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
    "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]",
    "infraLoadBalancerName" : "[parameters('baseName')]",
    "sshKeyPath" : "/home/capi/.ssh/authorized_keys",
    "identityName" : "[concat(parameters('baseName'), '-identity')]",
    "imageName" : "[concat(parameters('baseName'), '-image')]",
    "copy" : [
      {
        "name" : "vmNames",
        "count" :  "[parameters('numberOfNodes')]",
        "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]"
      }
    ]
  },
  "resources" : [
    {
      "apiVersion" : "2019-05-01",
      "name" : "[concat('node', copyIndex())]",
      "type" : "Microsoft.Resources/deployments",
      "copy" : {
        "name" : "nodeCopy",
        "count" : "[length(variables('vmNames'))]"
      },
      "properties" : {
        "mode" : "Incremental",
        "template" : {
          "$schema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
          "contentVersion" : "1.0.0.0",
          "resources" : [
            {
              "apiVersion" : "2018-06-01",
              "type" : "Microsoft.Network/networkInterfaces",
              "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]",
              "location" : "[variables('location')]",
              "properties" : {
                "ipConfigurations" : [
                  {
                    "name" : "pipConfig",
                    "properties" : {
                      "privateIPAllocationMethod" : "Dynamic",
                      "subnet" : {
                        "id" : "[variables('nodeSubnetRef')]"
                      }
                    }
                  }
                ]
              }
            },
            {
              "apiVersion" : "2018-06-01",
              "type" : "Microsoft.Compute/virtualMachines",
              "name" : "[variables('vmNames')[copyIndex()]]",
              "location" : "[variables('location')]",
              "tags" : {
                "kubernetes.io-cluster-ffranzupi": "owned"
              },
              "identity" : {
                "type" : "userAssigned",
                "userAssignedIdentities" : {
                  "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {}
                }
              },
              "dependsOn" : [
                "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]"
              ],
              "properties" : {
                "hardwareProfile" : {
                  "vmSize" : "[parameters('nodeVMSize')]"
                },
                "osProfile" : {
                  "computerName" : "[variables('vmNames')[copyIndex()]]",
                  "adminUsername" : "capi",
                  "customData" : "[parameters('workerIgnition')]",
                  "linuxConfiguration" : {
                    "disablePasswordAuthentication" : true,
                    "ssh" : {
                      "publicKeys" : [
                        {
                          "path" : "[variables('sshKeyPath')]",
                          "keyData" : "[parameters('sshKeyData')]"
                        }
                      ]
                    }
                  }
                },
                "storageProfile" : {
                  "imageReference": {
                    "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
                  },
                  "osDisk" : {
                    "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]",
                    "osType" : "Linux",
                    "createOption" : "FromImage",
                    "managedDisk": {
                      "storageAccountType": "Premium_LRS"
                    },
                    "diskSizeGB": 128
                  }
                },
                "networkProfile" : {
                  "networkInterfaces" : [
                    {
                      "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]",
                      "properties": {
                        "primary": true
                      }
                    }
                  ]
                }
              }
            }
          ]
        }
      }
    }
  ]
}

1.9.18. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.9.18.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Linux Client entry and save the file.
  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>
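
For reference, the following is a minimal sketch of steps 4 and 5 on a Linux host. It assumes that the downloaded archive is named openshift-client-linux.tar.gz and that /usr/local/bin is on your PATH; adjust the names for your environment:

$ tar xvzf openshift-client-linux.tar.gz
$ sudo mv oc /usr/local/bin/
$ oc version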

1.9.18.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

1.9.18.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version in the Version drop-down menu.
  3. Click Download Now next to the OpenShift v4.6 MacOSX Client entry and save the file.
  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

1.9.19. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin
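
Optionally, you can run one more check to confirm that the exported kubeconfig points at the expected cluster. This is only a convenience step:

$ oc cluster-info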

1.9.20. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.19.0
    master-1  Ready     master  63m  v1.19.0
    master-2  Ready     master  64m  v1.19.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved, then after all of the pending CSRs for the machines that you added are in Pending status, approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.19.0
    master-1  Ready     master  73m  v1.19.0
    master-2  Ready     master  74m  v1.19.0
    worker-0  Ready     worker  11m  v1.19.0
    worker-1  Ready     worker  11m  v1.19.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
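
If your cluster runs on a platform that is not machine API enabled and you need the automated approval method that the note in step 3 describes, the following loop is only a minimal sketch: it reruns the bulk-approval command from this procedure once a minute and does not verify the requester or the node identity, which a production method must do before approving a CSR:

$ while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
    sleep 60
  done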

1.9.21. Adding the Ingress DNS records

If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements.

Prerequisites

  • You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned.
  • Install the OpenShift CLI (oc).
  • Install the jq package.
  • Install or update the Azure CLI.

Procedure

  1. Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field:

    $ oc -n openshift-ingress get service router-default

    Example output

    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20

  2. Export the Ingress router IP as a variable:

    $ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
  3. Add a *.apps record to the public DNS zone.

    1. If you are adding this cluster to a new public zone, run:

      $ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER} --ttl 300
    2. If you are adding this cluster to an already existing public zone, run:

      $ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300
  4. Add a *.apps record to the private DNS zone:

    1. Create a *.apps record by using the following command:

      $ az network private-dns record-set a create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps --ttl 300
    2. Add the *.apps record to the private DNS zone by using the following command:

      $ az network private-dns record-set a add-record -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER}

If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster’s current routes:

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

Example output

oauth-openshift.apps.cluster.basedomain.com
console-openshift-console.apps.cluster.basedomain.com
downloads-openshift-console.apps.cluster.basedomain.com
alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com
grafana-openshift-monitoring.apps.cluster.basedomain.com
prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com
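
If you create explicit records instead of the wildcard, the following sketch loops over the route hosts listed above and creates one A record per host in an existing public zone. It assumes the same BASE_DOMAIN_RESOURCE_GROUP, BASE_DOMAIN, and PUBLIC_IP_ROUTER variables that are used earlier in this section, and it strips the base domain from each host to form the relative record name:

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes | while read -r host; do
    az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n "${host%.${BASE_DOMAIN}}" -a ${PUBLIC_IP_ROUTER} --ttl 300
  done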

1.9.22. Completing an Azure installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.

Prerequisites

  • Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure.
  • Install the oc CLI and log in.

Procedure

  • Complete the cluster installation:

    $ ./openshift-install --dir <installation_directory> wait-for install-complete 1

    Example output

    INFO Waiting up to 30m0s for the cluster to initialize...

    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
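
While the installation program waits, you can optionally follow the progress of the cluster Operators from a second terminal. This is only a convenience check and assumes that the watch utility is available on your system:

$ watch -n5 oc get clusteroperators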

1.9.23. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

1.10. Uninstalling a cluster on Azure

You can remove a cluster that you deployed to Microsoft Azure.

1.10.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Note

After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.

Prerequisites

  • Have a copy of the installation program that you used to deploy the cluster.
  • Have the files that the installation program generated when you created your cluster.

Procedure

  1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

    $ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info 1 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different details, specify warn, debug, or error instead of info.
    Note

    You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

  2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
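
After the destroy command finishes, you can optionally confirm that no resource groups that reference your cluster remain in your subscription. The <cluster_name> placeholder in the following example is the name that you gave your cluster; the query only lists matching resource group names:

$ az group list --query "[?contains(name, '<cluster_name>')].name" -o table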

Legal Notice

Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.