Chapter 2. Deploy OpenShift Data Foundation using local storage devices

Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed.

You can also deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.

Perform the following steps to deploy OpenShift Data Foundation:

2.1. Installing Local Storage Operator

Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it.
  4. Set the following options on the Install Operator page:

    1. Update Channel as stable.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-local-storage.
    4. Approval Strategy as Automatic.
  5. Click Install.

Verification steps

  • Verify that the Local Storage Operator shows a green tick indicating successful installation.
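
    You can also confirm the installation from the command line by checking the operator's ClusterServiceVersion (CSV). This is a sketch; the exact CSV name and version vary by release:

    $ oc get csv -n openshift-local-storage

    The PHASE column of the local-storage-operator CSV should report Succeeded.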

2.2. Installing Red Hat OpenShift Data Foundation Operator

You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.

For information about the hardware and software requirements, see Planning your deployment.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
  • You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first if it does not already exist):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
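
If the openshift-storage namespace does not exist yet, a minimal sketch of the full sequence is:

$ oc create namespace openshift-storage
$ oc annotate namespace openshift-storage openshift.io/node-selector=
$ oc describe namespace openshift-storage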

Procedure

  1. In the left pane of the OpenShift Web Console, click Operators → OperatorHub.
  2. Scroll or type a keyword into the Filter by keyword box to search for OpenShift Data Foundation Operator.
  3. Click Install on the OpenShift Data Foundation operator page.
  4. On the Install Operator page, the following required options are selected by default:

    1. Update Channel as stable-4.9.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
  5. Select Approval Strategy as Automatic or Manual.

    If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

    If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

  6. Ensure that the Enable option is selected for the Console plugin.
  7. Click Install.

Verification steps

  • Verify that OpenShift Data Foundation Operator shows a green tick indicating successful installation.
  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up for the console changes to take effect.

    • In the Web Console, navigate to Storage and verify if OpenShift Data Foundation is available.
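  • Optionally, verify the installation from the CLI as well (a sketch; CSV names and versions vary by release). The OpenShift Data Foundation CSV should report Succeeded, and the operator pods in the openshift-storage namespace should be Running:

    $ oc get csv -n openshift-storage
    $ oc get pods -n openshift-storage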

2.3. Finding available storage devices

Use this procedure to identify the device names on each of the three or more worker nodes that you labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating persistent volumes (PVs) for IBM Power.
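
If the worker nodes are not yet labeled, a command such as the following applies the label (a sketch; substitute your own node names):

$ oc label nodes worker-0 worker-1 worker-2 cluster.ocs.openshift.io/openshift-storage=''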

Procedure

  1. List and verify the name of the worker nodes with the OpenShift Data Foundation label.

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

    Example output:

    NAME       STATUS   ROLES    AGE     VERSION
    worker-0   Ready    worker   2d11h   v1.21.1+f36aa36
    worker-1   Ready    worker   2d11h   v1.21.1+f36aa36
    worker-2   Ready    worker   2d11h   v1.21.1+f36aa36
  2. Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you attached while deploying OpenShift Container Platform.

    $ oc debug node/<node name>

    Example output:

    $ oc debug node/worker-0
    Starting pod/worker-0-debug ...
    To use host binaries, run `chroot /host`
    Pod IP: 192.168.0.63
    If you don't see a command prompt, try pressing enter.
    sh-4.4#
    sh-4.4# chroot /host
    sh-4.4# lsblk
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    loop1    7:1    0   500G  0 loop
    sda      8:0    0   500G  0 disk
    sdb      8:16   0   120G  0 disk
    |-sdb1   8:17   0     4M  0 part
    |-sdb3   8:19   0   384M  0 part
    `-sdb4   8:20   0 119.6G  0 part
    sdc      8:32   0   500G  0 disk
    sdd      8:48   0   120G  0 disk
    |-sdd1   8:49   0     4M  0 part
    |-sdd3   8:51   0   384M  0 part
    `-sdd4   8:52   0 119.6G  0 part
    sde      8:64   0   500G  0 disk
    sdf      8:80   0   120G  0 disk
    |-sdf1   8:81   0     4M  0 part
    |-sdf3   8:83   0   384M  0 part
    `-sdf4   8:84   0 119.6G  0 part
    sdg      8:96   0   500G  0 disk
    sdh      8:112  0   120G  0 disk
    |-sdh1   8:113  0     4M  0 part
    |-sdh3   8:115  0   384M  0 part
    `-sdh4   8:116  0 119.6G  0 part
    sdi      8:128  0   500G  0 disk
    sdj      8:144  0   120G  0 disk
    |-sdj1   8:145  0     4M  0 part
    |-sdj3   8:147  0   384M  0 part
    `-sdj4   8:148  0 119.6G  0 part
    sdk      8:160  0   500G  0 disk
    sdl      8:176  0   120G  0 disk
    |-sdl1   8:177  0     4M  0 part
    |-sdl3   8:179  0   384M  0 part
    `-sdl4   8:180  0 119.6G  0 part /sysroot
    sdm      8:192  0   500G  0 disk
    sdn      8:208  0   120G  0 disk
    |-sdn1   8:209  0     4M  0 part
    |-sdn3   8:211  0   384M  0 part /boot
    `-sdn4   8:212  0 119.6G  0 part
    sdo      8:224  0   500G  0 disk
    sdp      8:240  0   120G  0 disk
    |-sdp1   8:241  0     4M  0 part
    |-sdp3   8:243  0   384M  0 part
    `-sdp4   8:244  0 119.6G  0 part

    In this example, for worker-0, the available local devices of 500G are sda, sdc, sde, sdg, sdi, sdk, sdm, and sdo.

  3. Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details.
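
    You can also script this check; the following sketch runs lsblk on every labeled node (adjust the lsblk options to your needs):

    $ for node in $(oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do
        echo "=== ${node} ==="
        oc debug node/${node} -- chroot /host lsblk -d -o NAME,SIZE,TYPE
      done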

2.4. Creating OpenShift Data Foundation cluster on IBM Power

Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.

Prerequisites

  • Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met.
  • You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power.
  • Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation:

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

To identify storage devices on each node, refer to Finding available storage devices.

Procedure

  1. Log into the OpenShift Web Console.
  2. In the openshift-local-storage namespace, click Operators → Installed Operators to view the installed operators.
  3. Click the Local Storage installed operator.
  4. On the Operator Details page, click the Local Volume link.
  5. Click Create Local Volume.
  6. Click YAML view to configure the Local Volume.
  7. Define a LocalVolume custom resource for block PVs using the following YAML.

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: localblock
      namespace: openshift-local-storage
    spec:
      logLevel: Normal
      managementState: Managed
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker-0
                  - worker-1
                  - worker-2
      storageClassDevices:
        - devicePaths:
            - /dev/sda
          storageClassName: localblock
          volumeMode: Block

    The above definition selects the sda local device from the worker-0, worker-1, and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda.

    Important

    Specify appropriate values for nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one device path under devicePaths, as shown in the example below.
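
    For instance, a storageClassDevices entry that consumes two disks on each selected node might look like the following (the device names are illustrative; verify them with lsblk on your nodes first):

    storageClassDevices:
      - devicePaths:
          - /dev/sda
          - /dev/sdc
        storageClassName: localblock
        volumeMode: Block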

  8. Click Create.
  9. Confirm that the diskmaker-manager pods and Persistent Volumes are created. You can check this in the web console, as described in the following sub-steps, or with the CLI commands shown at the end of this step.

    1. For Pods

      1. Click Workloads → Pods from the left pane of the OpenShift Web Console.
      2. Select openshift-local-storage from the Project drop-down list.
      3. Check if there are diskmaker-manager pods for each of the worker nodes that you used while creating the LocalVolume CR.
    2. For Persistent Volumes

      1. Click Storage → PersistentVolumes from the left pane of the OpenShift Web Console.
      2. Check for Persistent Volumes with the name local-pv-*. The number of Persistent Volumes is equal to the number of worker nodes multiplied by the number of storage devices provisioned while creating the LocalVolume CR.

        Important
        • The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones.

          For information about flexible scaling, see the Add capacity using YAML section in the Scaling Storage guide.
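
    As a CLI alternative to the console checks in this step, a sketch like the following confirms the diskmaker-manager pods and the local-pv-* Persistent Volumes:

    $ oc get pods -n openshift-local-storage
    $ oc get pv | grep local-pv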

  10. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.

    Ensure that the Project selected is openshift-storage.

  11. Click on the OpenShift Data Foundation operator and then click Create StorageSystem.
  12. In the Backing storage page, select the following:

    1. Select the Use an existing StorageClass option.
    2. Select the required Storage Class that you used while installing LocalVolume.

      By default, it is set to none.

    3. Click Next.
  13. In the Capacity and nodes page, provide the necessary information:

    1. Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up.
    2. The Selected nodes list shows the nodes based on the storage class.
    3. Click Next.
  14. Optional: In the Security and network page, configure the following based on your requirements:

    1. To enable encryption, select Enable data encryption for block and file storage.
    2. Choose one or both of the following Encryption level options:

      • Cluster-wide encryption

        Encrypts the entire cluster (block and file).

      • StorageClass encryption

        Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.

    3. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

      1. Key Management Service Provider is set to Vault by default.
      2. Enter the Vault Service Name, the host Address of the Vault server ('https://<hostname or ip>'), the Port number, and the Token.
      3. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

        1. Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
        2. Optional: Enter TLS Server Name and Vault Enterprise Namespace.
        3. Provide CA Certificate, Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file.
        4. Click Save.
    4. Select Default (SDN) as Multus is not yet supported on OpenShift Data Foundation on IBM Power.
    5. Click Next.
  15. In the Review and create page:

    1. Review the configuration details. To modify any configuration settings, click Back to go back to the previous configuration page.
    2. Click Create StorageSystem.

Verification steps

  • To verify the final Status of the installed storage cluster:

    1. In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
    2. Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. You can also confirm this from the CLI, as shown below.
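
      A minimal CLI sketch, assuming the default storage cluster name ocs-storagecluster:

      $ oc get storagecluster ocs-storagecluster -n openshift-storage

      The reported phase should be Ready.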
  • To verify if flexible scaling is enabled on your storage cluster, perform the following steps:

    1. In the Web Console, click Home → Search.
    2. Select the Resource as StorageCluster from the drop-down list.
    3. Click ocs-storagecluster.
    4. In the YAML tab, search for the key flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is set to true and failureDomain is set to host, the flexible scaling feature is enabled, as in the following example.

      spec:
        flexibleScaling: true
      […]
      status:
        failureDomain: host
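
      You can also check the same fields from the CLI; this is a sketch, assuming the default resource name ocs-storagecluster in the openshift-storage namespace:

      $ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.flexibleScaling}{" "}{.status.failureDomain}{"\n"}'

      The expected output is true host.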
  • To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.

Additional resources

  • To expand the capacity of the initial cluster, see the Scaling Storage guide.