Chapter 1. Preparing to deploy OpenShift Data Foundation

Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services and makes additional storage classes available to applications.
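For example, after the deployment completes, an application can request storage through a persistent volume claim (PVC) that references one of the newly available storage classes. The following is a minimal sketch: the storage class name ocs-storagecluster-ceph-rbd is the typical default for internal-mode block storage but may differ in your cluster, and my-app-pvc is a hypothetical claim name.

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-pvc                                # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd   # assumed default; verify with 'oc get storageclass'
    EOF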

Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps:

  1. Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS), ensure that a policy with a token exists and that the key value backend path in Vault is enabled. See Section 1.1, Enabling key value backend path and policy in Vault.

  2. Minimum starting node requirements [Technology Preview]

    An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide.

  3. Regional-DR requirements [Developer Preview]

    Disaster Recovery features supported by Red Hat OpenShift Data Foundation require a number of prerequisites to be in place before you can successfully implement a Disaster Recovery solution.

    For detailed requirements, see Regional-DR requirements and RHACM requirements.

1.1. Enabling key value backend path and policy in Vault

Prerequisites

  • Administrator access to Vault.
  • Carefully choose a unique path name as the backend path that follows the naming convention, since it cannot be changed later.

Procedure

  1. Enable the Key/Value (KV) backend path in Vault.

    For Vault KV secret engine API, version 1:

    $ vault secrets enable -path=odf kv

    For Vault KV secret engine API, version 2:

    $ vault secrets enable -path=odf kv-v2
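
    Optionally, verify that the backend path is now enabled by listing the mounted secret engines. This is a quick check, assuming the odf path used above:

    $ vault secrets list | grep odf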
  2. Create a policy to restrict users to performing write or delete operations on the secret, using the following command.

    echo '
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }' | vault policy write odf -
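
    Optionally, confirm that the policy was written as intended. This check assumes the odf policy name from the previous command:

    $ vault policy read odf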
  3. Create a token matching the above policy.

    $ vault token create -policy=odf -format json
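
    The token is returned in the auth.client_token field of the JSON output. As an illustrative convenience, assuming the jq utility is installed, you can extract the token directly:

    $ vault token create -policy=odf -format json | jq -r ".auth.client_token"

    Save this token securely; you provide it to OpenShift Data Foundation when you configure cluster-wide encryption with Vault as the KMS.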