Chapter 5. Scaling storage nodes

To scale the storage capacity of OpenShift Container Storage in internal mode, you can do either of the following:

  • Scale up storage nodes - Add storage capacity to the existing Red Hat OpenShift Container Storage worker nodes
  • Scale out storage nodes - Add new worker nodes containing storage capacity

5.1. Requirements for scaling storage nodes

Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Container Storage instance:

Important

Always ensure that you have plenty of storage capacity.

If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.

Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.

If you do run out of storage space completely, contact Red Hat Customer Support.
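If the Rook-Ceph toolbox is deployed (an assumption; it is not enabled by default and can be turned on through the OCSInitialization resource), you can check the current raw capacity usage from the command line. A minimal sketch:

  $ oc -n openshift-storage rsh deploy/rook-ceph-tools
  sh-4.4# ceph df       # raw and per-pool usage
  sh-4.4# ceph osd df   # per-OSD utilization (%USE column)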

5.2. Scaling up storage by adding capacity to your OpenShift Container Storage nodes using local storage devices

Use this procedure to add storage capacity (additional storage devices) to your configured local storage based OpenShift Container Storage worker nodes on IBM Power Systems infrastructures.

Prerequisites

  • You must be logged in to the OpenShift Container Platform (RHOCP) cluster.
  • You must have installed the Local Storage Operator.

  • You must have three OpenShift Container Platform worker nodes, each with storage devices of the same type and size (for example, 0.5TB SSD) as those used when the original OpenShift Container Storage StorageCluster was created.

Procedure

  1. To add storage capacity to OpenShift Container Platform nodes with OpenShift Container Storage installed, do the following:

    1. Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide.

      Note

      Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage.

    2. Check that the new disk has been added to the node by running lsblk inside the node.

      $ oc debug node/worker-0
      $ lsblk

      Example output:

      Creating debug namespace/openshift-debug-node-ggrqr ...
      Starting pod/worker-0-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: 192.168.88.23
      If you don't see a command prompt, try pressing enter.
      sh-4.4# chroot /host
      sh-4.4# lsblk
      NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      loop0                          7:0    0  256G  0 loop
      vda                          252:0    0   40G  0 disk
      |-vda1                       252:1    0    4M  0 part
      |-vda2                       252:2    0  384M  0 part /boot
      `-vda4                       252:4    0 39.6G  0 part
        `-coreos-luks-root-nocrypt 253:0    0 39.6G  0 dm   /sysroot
      vdb                          252:16   0  512B  1 disk
      vdc                          252:32   0  256G  0 disk
      vdd                          252:48   0  256G  0 disk
      sh-4.4#
      sh-4.4#
      Removing debug pod ...
      Removing debug namespace/openshift-debug-node-ggrqr ...
    3. The newly added disks are automatically discovered by the LocalVolumeSet.
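      You can confirm the discovery from the command line by checking the device count reported in the LocalVolumeSet status. This sketch assumes the LocalVolumeSet is named localblock and that the Local Storage Operator runs in the openshift-local-storage namespace:

      $ oc get localvolumeset localblock -n openshift-local-storage \
          -o jsonpath='{.status.totalProvisionedDeviceCount}{"\n"}'

      The count should increase by one for each device added on each node.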
  2. Display the newly created PVs with the storage class name used in the LocalVolumeSet CR.

    $ oc get pv | grep localblock | grep Available

    Example output:

    local-pv-290020c2   256Gi   RWO     Delete  Available   localblock      2m35s
    local-pv-7702952c   256Gi   RWO     Delete  Available   localblock      2m27s
    local-pv-a7a567d    256Gi   RWO     Delete  Available   localblock      2m22s
    ...

    There are three more available PVs of the same size, which will be used for the new OSDs.

  3. Navigate to the OpenShift Web Console.
  4. Click on Operators on the left navigation bar.
  5. Select Installed Operators.
  6. In the window, click OpenShift Container Storage Operator.

  7. In the top navigation bar, scroll right and click the Storage Cluster tab.

  8. The visible list should have only one item. Click the Action menu (⋮) on the far right to open the options menu.
  9. Select Add Capacity from the options menu.


    From this dialog box, set the Storage Class name to the name used in the LocalVolumeSet CR. The Available Capacity displayed is based on the local disks available in the storage class.

  10. When you are done with your settings, click Add. You might need to wait a couple of minutes for the storage cluster to reach the Ready state.
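    If you prefer the CLI, the equivalent of Add Capacity is to raise the count field of the storage device set in the StorageCluster CR. This is a sketch, assuming the default StorageCluster name ocs-storagecluster and a single device set at index 0; note that count is the new total, not an increment:

    $ oc patch storagecluster ocs-storagecluster -n openshift-storage \
        --type json \
        -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/count", "value": 2}]'

    Each unit of count typically corresponds to one set of 3 OSDs, matching the 3 new PVs shown above.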
  11. Verify that the new OSDs and their corresponding new PVCs are created.

    $ oc get -n openshift-storage pods -l app=rook-ceph-osd

    Example output:

    NAME                               READY   STATUS    RESTARTS   AGE
    rook-ceph-osd-0-6f8655ff7b-gj226   1/1     Running   0          1h
    rook-ceph-osd-1-6c66d77f65-cfgfq   1/1     Running   0          1h
    rook-ceph-osd-2-69f6b4c597-mtsdv   1/1     Running   0          1h
    rook-ceph-osd-3-c784bdbd4-w4cmj    1/1     Running   0          5m
    rook-ceph-osd-4-6d99845f5b-k7f8n   1/1     Running   0          5m
    rook-ceph-osd-5-fdd9897c9-r9mgb    1/1     Running   0          5m

    In the above example, osd-3, osd-4, and osd-5 are the pods newly added to the OpenShift Container Storage cluster.

    $ oc get pvc -n openshift-storage | grep localblock

    Example output:

    ocs-deviceset-localblock-0-data-0-sfsgf   Bound    local-pv-8137c873      256Gi     RWO    localblock  1h
    ocs-deviceset-localblock-0-data-1-qhs9m   Bound    local-pv-290020c2      256Gi     RWO    localblock  10m
    ocs-deviceset-localblock-1-data-0-499r2   Bound    local-pv-ec7f2b80      256Gi     RWO    localblock  1h
    ocs-deviceset-localblock-1-data-1-p9rth   Bound    local-pv-a7a567d       256Gi     RWO    localblock  10m
    ocs-deviceset-localblock-2-data-0-8pzjr   Bound    local-pv-1e31f771      256Gi     RWO    localblock  1h
    ocs-deviceset-localblock-2-data-1-7zwwn   Bound    local-pv-7702952c      256Gi     RWO    localblock  10m

    In the above example, three new PVCs have been created.
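    You can also verify the expansion from Ceph's perspective by listing the OSD tree. As before, this assumes the rook-ceph-tools pod is deployed:

    $ oc -n openshift-storage rsh deploy/rook-ceph-tools ceph osd tree

    The new OSDs should be reported as up under their respective hosts.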

Verification steps

  1. Navigate to the Overview → Persistent Storage tab, then check the Capacity breakdown card.


    Note that the capacity increases based on your selections.

    Important

    OpenShift Container Storage does not support cluster reduction either by reducing OSDs or reducing nodes.

5.3. Scaling out storage capacity

To scale out storage capacity, you need to perform the following steps:

  • Add a new node
  • Verify that the new node is added successfully
  • Scale up the storage capacity

5.3.1. Adding a node

You can add nodes to increase the storage capacity when the existing worker nodes are already running at their maximum supported OSDs. Capacity is added in increments of 3 OSDs of the capacity selected during initial configuration.

To add a storage node for IBM Power Systems, see Section 5.3.1.1, “Adding a node using a local storage device”

5.3.1.1. Adding a node using a local storage device

Prerequisites

  • You must be logged in to the OpenShift Container Platform (RHOCP) cluster.
  • You must have three OpenShift Container Platform worker nodes, each with storage devices of the same type and size (for example, 2TB SSD) as those used when the original OpenShift Container Storage StorageCluster was created.

Procedure

  1. Perform the following steps:

    1. Get a new IBM Power machine with the required infrastructure. See Platform requirements.
    2. Create a new OpenShift Container Platform node using the new IBM Power machine.
  2. Check for certificate signing requests (CSRs) related to OpenShift Container Storage that are in Pending state:

    $ oc get csr
  3. Approve all required OpenShift Container Storage CSRs for the new node:

    $ oc adm certificate approve <Certificate_Name>
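    If several CSRs are pending and you have confirmed that they all belong to the new node, you can approve them in one pass. A sketch using the standard go-template filter for unapproved CSRs:

    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

    Approve CSRs in bulk only when you are certain that every pending request is expected.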
  4. Click Compute → Nodes and confirm that the new node is in Ready state.
  5. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click the Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
  6. Click Operators → Installed Operators from the OpenShift Web Console.

    From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.

  7. Click on Local Storage.
  8. Click the Local Volume Sets tab.
  9. Beside the LocalVolumeSet, click the Action menu (⋮) → Edit Local Volume Set.
  10. In the YAML, add the hostname of the new node in the values field under the node selector.

    Figure 5.1. YAML showing the addition of new hostnames
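    The relevant part of the LocalVolumeSet CR looks like the following sketch. The name localblock and the worker hostnames are assumptions; replace worker-3 with the hostname of your new node:

    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeSet
    metadata:
      name: localblock
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker-0
                  - worker-1
                  - worker-2
                  - worker-3   # newly added node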
  11. Click Save.
Note

It is recommended to add three nodes, each in a different zone. You must add three nodes and perform this procedure for all of them.

Verification steps

To verify that the new node is added, see Section 5.3.2, “Verifying the addition of a new node”.

5.3.2. Verifying the addition of a new node

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods and confirm that at least the following pods on the new node are in Running state (you can also verify from the command line, as shown after this list):

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
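    Alternatively, check from the command line. This sketch assumes you substitute the name of your new node for <new_node_name>:

    $ oc get pods -n openshift-storage -o wide \
        --field-selector spec.nodeName=<new_node_name> | grep csi

    Both plugin pods for the node should be listed with a Running status.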

5.4. Scaling up storage capacity

To scale up storage capacity, see Scaling up storage by adding capacity.