Chapter 1. Preparing to deploy OpenShift Data Foundation

Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which makes additional storage classes available to applications.

Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See Planning your deployment.

  1. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription and that you are using signed certificates on your Vault servers. A hedged Vault connectivity sketch follows this list.

  2. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps (a TLS connectivity sketch for the KMIP interface appears after this list):

    1. Create a KMIP client if one does not exist. From the user interface, select KMIP → Client Profile → Add Profile.

      1. Add the CipherTrust username to the Common Name field during profile creation.
    2. Create a token by navigating to KMIP → Registration Token → New Registration Token. Copy the token for the next step.
    3. To register the client, navigate to KMIP → Registered Clients → Add Client. Specify the Name. Paste the Registration Token from the previous step, then click Save.
    4. Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively.
    5. To create a new KMIP interface, navigate to Admin Settings → Interfaces → Add Interface.

      1. Select KMIP (Key Management Interoperability Protocol) and click Next.
      2. Select a free Port.
      3. Select Network Interface as all.
      4. Select Interface Mode as "TLS, verify client cert, user name taken from client cert, auth request is optional".
      5. (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default.
      6. Select the CA to be used, and click Save.
    6. To get the server CA certificate, click on the Action menu (⋮) on the right of the newly created interface, and click Download Certificate.
    7. Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK):

      1. Navigate to Keys → Add Key.
      2. Enter Key Name.
      3. Set the Algorithm and Size to AES and 256 respectively.
      4. Enable Create a key in Pre-Active state and set the date and time for activation.
      5. Ensure that Encrypt and Decrypt are enabled under Key Usage.
      6. Copy the ID of the newly created Key to be used as the Unique Identifier during deployment.
  3. Minimum starting node requirements

    An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. See the Resource requirements section in the Planning guide.

  4. Disaster recovery requirements [Technology Preview]

    Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:

      • A valid Red Hat OpenShift Data Foundation Advanced subscription.
      • A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription.

    For detailed requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.

  5. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices. These are not applicable for deployment using dynamic storage devices.
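
As a hedged illustration of the Vault prerequisite above, the following is a minimal pre-deployment sanity check, not an official procedure. It assumes the hvac Python client, and the address, token, and CA bundle path are placeholder values to substitute with your own.

  # Minimal HashiCorp Vault connectivity and token check (placeholder values).
  # Requires the hvac client: pip install hvac
  import hvac

  VAULT_ADDR = "https://vault.example.com:8200"  # hypothetical Vault address
  VAULT_TOKEN = "s.exampletoken"                 # hypothetical token
  CA_BUNDLE = "/etc/ssl/certs/vault-ca.pem"      # CA for the signed server cert

  client = hvac.Client(url=VAULT_ADDR, token=VAULT_TOKEN, verify=CA_BUNDLE)

  # Confirm the token authenticates before wiring Vault into cluster-wide encryption.
  if not client.is_authenticated():
      raise SystemExit("Vault token is not valid; fix credentials before deploying.")

  # List mounted secrets engines to confirm the backend you plan to use exists.
  mounts = client.sys.list_mounted_secrets_engines()
  engines = mounts.get("data", mounts)
  print("Mounted secrets engines:", ", ".join(k for k in sorted(engines) if k.endswith("/")))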
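
Likewise, a stdlib-only sketch for checking the KMIP interface created above: it performs a TLS handshake using the client certificate and private key you saved and the server CA you downloaded. The host, port, and file paths are assumptions; 5696 is merely a common KMIP port, so use the free port you selected for the interface.

  # TLS handshake check against the CipherTrust Manager KMIP interface.
  import socket
  import ssl

  KMIP_HOST = "ciphertrust.example.com"  # hypothetical CipherTrust Manager host
  KMIP_PORT = 5696                       # use the port selected for the interface
  CLIENT_CERT = "client-cert.pem"        # from "Save Certificate"
  CLIENT_KEY = "client-key.pem"          # from "Save Private Key"
  SERVER_CA = "kmip-interface-ca.pem"    # from "Download Certificate"

  context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=SERVER_CA)
  context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)

  with socket.create_connection((KMIP_HOST, KMIP_PORT), timeout=10) as sock:
      with context.wrap_socket(sock, server_hostname=KMIP_HOST) as tls:
          print("TLS established:", tls.version())
          print("Peer certificate subject:", tls.getpeercert().get("subject"))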

1.1. Requirements for installing OpenShift Data Foundation using local storage devices

Node requirements

The cluster must consist of at least three OpenShift Container Platform worker nodes, each with locally attached storage devices.

  • Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices.
  • The devices you use must be empty; that is, the disks must not include any Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. A device check sketch follows below.

For more information, see the Resource requirements section in the Planning guide.
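
As a hedged illustration of checking that candidate disks are empty, the following sketch parses lsblk JSON output and flags devices that still carry filesystem or LVM signatures (for example, LVM2_member indicates leftover PVs, VGs, or LVs). Relying on lsblk alone is an assumption; wipefs or pvs can provide a second opinion.

  # Flag block devices that are not empty. Illustrative only; run on each node.
  import json
  import subprocess

  out = subprocess.run(
      ["lsblk", "--json", "-o", "NAME,TYPE,FSTYPE"],
      capture_output=True, text=True, check=True,
  ).stdout

  def walk(devices):
      for dev in devices:
          if dev.get("fstype"):  # any signature means the device is not empty
              print(f"NOT empty: /dev/{dev['name']} (fstype={dev['fstype']})")
          elif dev.get("type") == "disk" and not dev.get("children"):
              print(f"Looks empty: /dev/{dev['name']}")
          walk(dev.get("children", []))

  walk(json.loads(out)["blockdevices"])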

Disaster recovery requirements [Technology Preview]

Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:

  • A valid Red Hat OpenShift Data Foundation Advanced subscription.
  • A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription.

To know in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.

For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation.

Arbiter stretch cluster requirements [Technology Preview]

In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a Technology Preview feature that is currently intended for deployments where OpenShift Container Platform runs on premises within a single data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for a no-data-loss disaster recovery solution deployed over multiple data centers with low-latency networks.

For detailed requirements and instructions, see the Knowledgebase article on Configuring OpenShift Data Foundation for stretch cluster.

To know in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.

Note

You cannot enable Flexible scaling and Arbiter at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster, whereas in an Arbiter cluster you must add at least one node in each of the two data zones. A validation sketch follows this note.
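
The following is a minimal sketch of that mutual exclusion, assuming the spec.flexibleScaling and spec.arbiter.enable fields of the StorageCluster custom resource; verify the field names against the CRD installed in your cluster.

  # Guard against a StorageCluster spec enabling both flexible scaling and
  # arbiter mode (field names assumed from the StorageCluster CRD).
  def validate_storage_cluster(spec: dict) -> None:
      flexible = bool(spec.get("flexibleScaling", False))
      arbiter = bool(spec.get("arbiter", {}).get("enable", False))
      if flexible and arbiter:
          raise ValueError(
              "flexibleScaling and arbiter.enable are mutually exclusive: "
              "flexible scaling grows the cluster one node at a time, while "
              "an arbiter cluster adds nodes in each of the two data zones."
          )

  # Example: this spec is rejected.
  try:
      validate_storage_cluster({"flexibleScaling": True, "arbiter": {"enable": True}})
  except ValueError as err:
      print("Rejected:", err)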

Minimum starting node requirements

An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met.

For more information, see the Resource requirements section in the Planning guide.