Chapter 5. Scaling storage nodes
To scale the storage capacity of OpenShift Container Storage in internal mode, you can do either of the following:
- Scale up storage nodes - Add storage capacity to the existing Red Hat OpenShift Container Storage worker nodes
- Scale out storage nodes - Add new worker nodes containing storage capacity
5.1. Requirements for scaling storage nodes
Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Container Storage instance:
- Platform requirements
- Storage device requirements
Always ensure that you have plenty of storage capacity.
If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.
Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.
If you do run out of storage space completely, contact Red Hat Customer Support.
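The 75% and 85% thresholds above translate into a simple percentage check. The following is a minimal sketch of that arithmetic; the used and total figures are hypothetical stand-ins for values you would read from the cluster (for example, from the storage dashboard or `ceph df`):

```shell
#!/bin/sh
# Hypothetical capacity figures in GiB; on a live cluster these would come
# from the storage dashboard or `ceph df`, not from hard-coded values.
used=820
total=1024

# Integer percentage of consumed capacity
pct=$(( used * 100 / total ))
echo "cluster usage: ${pct}%"

# Compare against the 75% (near-full) and 85% (full) alert thresholds
if [ "$pct" -ge 85 ]; then
    echo "full: add capacity immediately"
elif [ "$pct" -ge 75 ]; then
    echo "near-full: plan to add capacity"
else
    echo "capacity OK"
fi
```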
5.2. Scaling up storage by adding capacity to your OpenShift Container Storage nodes using local storage devices
Use this procedure to add storage capacity (additional storage devices) to your configured local storage based OpenShift Container Storage worker nodes on IBM Power Systems infrastructures.
Prerequisites
- You must be logged in to the OpenShift Container Platform (RHOCP) cluster.
- You must have installed the Local Storage Operator. For the installation procedure, see the respective deployment guide.
- You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 0.5TB SSD) as the original OpenShift Container Storage StorageCluster was created with.
Procedure
To add storage capacity to OpenShift Container Platform nodes with OpenShift Container Storage installed, perform the following steps:
Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide.
Note: Make sure that you perform this process for all the existing nodes (minimum of 3) for which you want to add storage.
Check that the new disk is added to the node by running lsblk inside the node.
$ oc debug node/worker-0
$ lsblk
Example output:
Creating debug namespace/openshift-debug-node-ggrqr ...
Starting pod/worker-2-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.88.23
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                          7:0    0  256G  0 loop
vda                          252:0    0   40G  0 disk
|-vda1                       252:1    0    4M  0 part
|-vda2                       252:2    0  384M  0 part /boot
`-vda4                       252:4    0 39.6G  0 part
  `-coreos-luks-root-nocrypt 253:0    0 39.6G  0 dm   /sysroot
vdb                          252:16   0  512B  1 disk
vdc                          252:32   0  256G  0 disk
vdd                          252:48   0  256G  0 disk
sh-4.4#
Removing debug pod ...
Removing debug namespace/openshift-debug-node-ggrqr ...
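Candidate devices are whole disks with no partitions and no mountpoint, such as vdc and vdd above. That filtering can be scripted; the sketch below runs awk over a captured, abbreviated sample of the lsblk output rather than against a live node:

```shell
#!/bin/sh
# Abbreviated lsblk sample, taken from the example output above
sample='NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 40G 0 disk
vda1 252:1 0 4M 0 part /boot
vdc 252:32 0 256G 0 disk
vdd 252:48 0 256G 0 disk'

# Collect TYPE=disk rows, then drop any disk that has a partition
# (a "part" row whose name is the disk name plus a trailing digit)
echo "$sample" | awk '
  NR > 1 {
    if ($6 == "disk") disk[$1] = 1
    if ($6 == "part") { sub(/[0-9]+$/, "", $1); delete disk[$1] }
  }
  END { for (d in disk) print d }' | sort
```

This prints only vdc and vdd; vda is excluded because it carries a partition.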
- The newly added disks are automatically discovered by the LocalVolumeSet.
Display the newly created PVs with the storageclass name used in the LocalVolumeSet CR.
$ oc get pv | grep localblock | grep Available
Example output:
local-pv-290020c2   256Gi   RWO   Delete   Available   localblock   2m35s
local-pv-7702952c   256Gi   RWO   Delete   Available   localblock   2m27s
local-pv-a7a567d    256Gi   RWO   Delete   Available   localblock   2m22s
...
There are three more available PVs of the same size, which will be used for the new OSDs.
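When adding one device per node on a 3-node cluster, you should see at least three Available PVs before proceeding. A minimal sketch of that check, run here against the sample rows above instead of live `oc` output:

```shell
#!/bin/sh
# Sample rows from `oc get pv | grep localblock | grep Available` above
pvs='local-pv-290020c2 256Gi RWO Delete Available localblock 2m35s
local-pv-7702952c 256Gi RWO Delete Available localblock 2m27s
local-pv-a7a567d 256Gi RWO Delete Available localblock 2m22s'

# Count PVs that are Available in the localblock storage class
count=$(echo "$pvs" | awk '$5 == "Available" && $6 == "localblock"' | wc -l | tr -d ' ')
echo "available PVs: $count"

# One new device per worker node means at least 3 PVs are expected
if [ "$count" -ge 3 ]; then
    echo "ready to add capacity"
fi
```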
- Navigate to the OpenShift Web Console.
- Click on Operators on the left navigation bar.
- Select Installed Operators.
- In the window, click OpenShift Container Storage Operator.
- In the top navigation bar, scroll right and click the Storage Cluster tab.
- The visible list should have only one item. Click (⋮) on the far right to open the options menu.
- Select Add Capacity from the options menu.
- In the Add Capacity dialog box, set the Storage Class name to the name used in the LocalVolumeSet CR. The Available Capacity displayed is based on the local disks available in the storage class.
- Once you are done with your settings, click Add. You might need to wait a couple of minutes for the storage cluster to reach the Ready state.
Verify that the new OSDs and their corresponding new PVCs are created.
$ oc get -n openshift-storage pods -l app=rook-ceph-osd
Example output:
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-6f8655ff7b-gj226   1/1     Running   0          1h
rook-ceph-osd-1-6c66d77f65-cfgfq   1/1     Running   0          1h
rook-ceph-osd-2-69f6b4c597-mtsdv   1/1     Running   0          1h
rook-ceph-osd-3-c784bdbd4-w4cmj    1/1     Running   0          5m
rook-ceph-osd-4-6d99845f5b-k7f8n   1/1     Running   0          5m
rook-ceph-osd-5-fdd9897c9-r9mgb    1/1     Running   0          5m
In the above example, osd-3, osd-4, and osd-5 are the newly added pods to the OpenShift Container Storage cluster.
$ oc get pvc -n openshift-storage | grep localblock
Example output:
ocs-deviceset-localblock-0-data-0-sfsgf   Bound   local-pv-8137c873   256Gi   RWO   localblock   1h
ocs-deviceset-localblock-0-data-1-qhs9m   Bound   local-pv-290020c2   256Gi   RWO   localblock   10m
ocs-deviceset-localblock-1-data-0-499r2   Bound   local-pv-ec7f2b80   256Gi   RWO   localblock   1h
ocs-deviceset-localblock-1-data-1-p9rth   Bound   local-pv-a7a567d    256Gi   RWO   localblock   10m
ocs-deviceset-localblock-2-data-0-8pzjr   Bound   local-pv-1e31f771   256Gi   RWO   localblock   1h
ocs-deviceset-localblock-2-data-1-7zwwn   Bound   local-pv-7702952c   256Gi   RWO   localblock   10m
In the above example, three new PVCs are created.
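The OSD check above can be folded into one scripted assertion: every rook-ceph-osd pod must report READY 1/1 and STATUS Running. The sketch below runs against abbreviated sample rows from the example output rather than a live cluster:

```shell
#!/bin/sh
# Sample rows from `oc get pods -l app=rook-ceph-osd` above (abbreviated)
pods='rook-ceph-osd-0-6f8655ff7b-gj226 1/1 Running 0 1h
rook-ceph-osd-3-c784bdbd4-w4cmj 1/1 Running 0 5m
rook-ceph-osd-4-6d99845f5b-k7f8n 1/1 Running 0 5m
rook-ceph-osd-5-fdd9897c9-r9mgb 1/1 Running 0 5m'

# Count pods that are not fully ready or not Running; zero means healthy
not_ready=$(echo "$pods" | awk '$2 != "1/1" || $3 != "Running"' | wc -l | tr -d ' ')
echo "OSD pods not ready: $not_ready"
```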
Verification steps
Navigate to Overview → Persistent Storage tab, then check the Capacity breakdown card.
Note that the capacity increases based on your selections.
Important: OpenShift Container Storage does not support cluster reduction, either by reducing OSDs or by removing nodes.
5.3. Scaling out storage capacity
To scale out storage capacity, you need to perform the following steps:
- Add a new node
- Verify that the new node is added successfully
- Scale up the storage capacity
5.3.1. Adding a node
You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, that is, in increments of 3 OSDs of the capacity selected during initial configuration.
To add a storage node for IBM Power Systems, see Section 5.3.1.1, “Adding a node using a local storage device”
5.3.1.1. Adding a node using a local storage device
Prerequisites
- You must be logged in to the OpenShift Container Platform (RHOCP) cluster.
- You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 2TB SSD) as the original OpenShift Container Storage StorageCluster was created with.
Procedure
Perform the following steps:
- Get a new IBM Power machine with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform node using the new IBM Power machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Storage that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Storage CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
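Approving CSRs one by one is tedious when several nodes join at once. Pending requests can be selected by their CONDITION column; the sketch below filters a captured sample of `oc get csr` output (the CSR names are hypothetical), and the commented pipeline shows how the same filter could feed `oc adm certificate approve` on a live cluster:

```shell
#!/bin/sh
# Hypothetical sample of `oc get csr` output
csrs='NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-8b2mt 2m kubernetes.io/kubelet-serving system:node:worker-3 Pending
csr-xk5q9 40m kubernetes.io/kubelet-serving system:node:worker-0 Approved,Issued'

# On a live cluster, the equivalent one-liner would be:
#   oc get csr | awk '$NF == "Pending" { print $1 }' \
#       | xargs -r oc adm certificate approve

# Print only the CSRs still in Pending state
echo "$csrs" | awk '$NF == "Pending" { print $1 }'
```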
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using one of the following methods:
- From the user interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command-line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
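Since the label must be applied to every new node, the command can be wrapped in a loop. This sketch only echoes the commands as a dry run; the worker names are hypothetical, and on a live cluster you would remove the echo:

```shell
#!/bin/sh
# Hypothetical names of the three new worker nodes
for node in worker-3 worker-4 worker-5; do
    # echo makes this a dry run; drop it to actually apply the label
    echo oc label node "$node" cluster.ocs.openshift.io/openshift-storage=""
done
```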
Click Operators → Installed Operators from the OpenShift Web Console.
From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.
- Click on Local Storage.
- Click the Local Volume Sets tab.
- Beside the LocalVolumeSet, click Action menu (⋮) → Edit Local Volume Set. In the YAML, add the hostname of the new node in the values field under the node selector.
Figure 5.1. YAML showing the addition of new hostnames
- Click Save.
It is recommended to add 3 nodes, each in a different zone. You must add 3 nodes and perform this procedure for all of them.
Verification steps
- To verify that the new node is added, see Verifying the addition of a new node.
5.3.2. Verifying the addition of a new node
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
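The grep/cut pipeline above simply extracts the first column of every labeled node row. The sketch below applies the same pipeline to captured sample output (node names are hypothetical) so the filtering can be seen in isolation:

```shell
#!/bin/sh
# Hypothetical sample of `oc get nodes --show-labels` rows
nodes='worker-0 Ready worker 47d cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/hostname=worker-0
worker-3 Ready worker 5m cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/hostname=worker-3
infra-1 Ready infra 47d kubernetes.io/hostname=infra-1'

# Keep only rows carrying the storage label, then print the node name
echo "$nodes" | grep 'cluster.ocs.openshift.io/openshift-storage=' | cut -d' ' -f1
```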
Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
5.4. Scaling up storage capacity
To scale up storage capacity, see Scaling up storage by adding capacity.