Deploying and managing OpenShift Container Storage using IBM Power Systems

Red Hat OpenShift Container Storage 4.6

How to install and manage

Red Hat Storage Documentation Team

Abstract

Read this document for instructions on installing and managing Red Hat OpenShift Container Storage on IBM Power Systems.
Important
Deploying and managing OpenShift Container Storage on IBM Power Systems is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

Preface

Red Hat OpenShift Container Storage 4.6 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Power clusters in connected environments.

Note

Only internal OpenShift Container Storage clusters are supported on IBM Power Systems. See Planning your deployment for more information about deployment requirements.

To deploy OpenShift Container Storage, follow the deployment process described in Chapter 2, Deploying using local storage devices.

Chapter 1. Planning your deployment

1.1. Introduction to Red Hat OpenShift Container Storage

Red Hat OpenShift Container Storage is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management.

Red Hat OpenShift Container Storage services are primarily made available to applications by way of storage classes that represent the following components:

  • Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL.
  • Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, Wordpress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ.
  • Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores.
  • On-premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch.
Note

OpenShift Container Storage 4.6 on IBM Power Systems supports only block and file storage; object storage is not supported.

Red Hat OpenShift Container Storage version 4.x integrates a collection of software projects, including:

  • Ceph, providing block storage, a shared and distributed file system, and on-premises object storage
  • Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims
  • NooBaa, providing a Multicloud Object Gateway
  • OpenShift Container Storage, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Container Storage services.

1.2. Architecture of OpenShift Container Storage

Red Hat OpenShift Container Storage provides services for, and can run internally from, Red Hat OpenShift Container Platform.

Red Hat OpenShift Container Storage architecture

Red Hat OpenShift Container Storage supports deployment into Red Hat OpenShift Container Platform clusters deployed on Installer Provisioned Infrastructure or User Provisioned Infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process.

For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture.

1.2.1. About operators

Red Hat OpenShift Container Storage comprises three main operators, which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated. Administrators define the desired end state of the cluster, and the OpenShift Container Storage operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention.

OpenShift Container Storage operator

A meta-operator that codifies and enforces the recommendations and requirements of a supported Red Hat OpenShift Container Storage deployment by drawing on other operators in specific, tested ways. This operator provides the storage cluster resource that wraps resources provided by the Rook-Ceph and NooBaa operators.

Rook-Ceph operator

This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services object bucket claims made against it in on-premises environments.

Additionally, for internal mode clusters, it provides the Ceph cluster resource, which manages the deployments and services representing the following:

  • Object storage daemons (OSDs)
  • Monitors (MONs)
  • Manager (MGR)
  • Metadata servers (MDS)
  • Object gateways (RGW) on-premises only

NooBaa operator

This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway object service. It creates an object storage class and services object bucket claims made against it.

Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint.

1.2.2. Storage cluster deployment approach

Flexibility is a core tenet of Red Hat OpenShift Container Storage, as evidenced by its growing list of operating modalities. This section provides you with information that will help you understand deployment of OpenShift Container Storage within OpenShift Container Platform.

Deployment of Red Hat OpenShift Container Storage entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management. There are two different deployment modalities available when Red Hat OpenShift Container Storage is running entirely within Red Hat OpenShift Container Platform:

  • Simple
  • Optimized

Simple deployment

Red Hat OpenShift Container Storage services run co-resident with applications, managed by operators in Red Hat OpenShift Container Platform.

A simple deployment is best for situations where:

  • Storage requirements are not clear
  • OpenShift Container Storage services will run co-resident with applications
  • Creating a node instance of a specific size is difficult (bare metal)

In order for Red Hat OpenShift Container Storage to run co-resident with applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, for example, SAN volumes dynamically provisioned by PowerVC.

Optimized deployment

OpenShift Container Storage services run on dedicated infrastructure nodes managed by Red Hat OpenShift Container Platform.

An optimized approach is best for situations when:

  • Storage requirements are clear
  • OpenShift Container Storage services run on dedicated infrastructure nodes
  • Creating a node instance of a specific size is easy (Cloud, Virtualized environment, etc.)

1.2.3. Node types

Nodes run the container runtime, as well as services, to ensure that containers are running, and maintain network communication and separation between pods. In OpenShift Container Storage, there are three types of nodes.

Table 1.1. Types of nodes

Node Type | Description

Master

These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers.

Infrastructure (Infra)

Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. It is recommended to use infra nodes for OpenShift Container Storage in virtualized and cloud environments.

To create Infra nodes, you can provision new nodes labeled as infra. See How to use dedicated worker nodes for Red Hat OpenShift Container Storage?

Worker

Worker nodes are also known as application nodes since they run applications.

When OpenShift Container Storage is deployed in internal mode, a minimal cluster of 3 worker nodes is required. The nodes should be spread across three different racks, or availability zones, to ensure availability. In order for OpenShift Container Storage to run on worker nodes, they must either have local storage devices, or portable storage devices attached to them dynamically.

When OpenShift Container Storage is deployed in external mode, it runs on multiple nodes to allow rescheduling by Kubernetes on available nodes in case of a failure.

Examples of portable storage devices are EBS volumes on EC2, or vSphere Virtual Volumes on VMware.

Note

Nodes that run only storage workloads require a subscription for Red Hat OpenShift Container Storage. Nodes that run other workloads in addition to storage workloads require both Red Hat OpenShift Container Storage and Red Hat OpenShift Container Platform subscriptions. See Section 1.4, “Subscriptions” for more information.

1.3. Security considerations

1.3.1. Data encryption options

Encryption lets you encode your data so that it cannot be read without the encryption keys, even if it is stolen. Red Hat OpenShift Container Storage 4.6 provides support for at-rest encryption of all disks in the storage cluster, meaning that your data is encrypted when it is written to disk, and decrypted when it is read from the disk.

OpenShift Container Storage 4.6 uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher. Each device has a different encryption key, which is stored as a Kubernetes secret.

You can enable or disable encryption for your whole cluster during cluster deployment. It is disabled by default. Working with encrypted data incurs only a very small penalty to performance.

Data encryption is only supported for new clusters deployed using OpenShift Container Storage 4.6. It is not supported on existing clusters that are upgraded to version 4.6.
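
If you want to confirm whether cluster-wide encryption was requested on an existing cluster, one option is to inspect the storage cluster resource from the command line. The following is a minimal, hedged sketch; it assumes that the encryption setting is surfaced under spec.encryption of the StorageCluster resource named ocs-storagecluster:

$ oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.spec.encryption}{"\n"}'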

1.4. Subscriptions

1.4.1. Subscription offerings

Red Hat OpenShift Container Storage subscription is based on “core-pairs,” similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Container Storage 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs.

As with OpenShift Container Platform:

  • OpenShift Container Storage subscriptions are stackable to cover larger hosts.
  • Cores can be distributed across as many virtual machines (VMs) as needed. For example, a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs.
  • OpenShift Container Storage subscriptions are available with Premium or Standard support.

1.4.2. Disaster recovery subscriptions

Red Hat OpenShift Container Storage does not offer disaster recovery (DR), cold backup, or other subscription types. Any system with OpenShift Container Storage installed, powered-on or powered-off, running workload or not, requires an active subscription.

1.4.3. Cores versus vCPUs and simultaneous multithreading (SMT)

Determining whether a particular system consumes one or more cores currently depends on the configured simultaneous multithreading (SMT) level. IBM Power Systems provide simultaneous multithreading levels of 1, 2, 4, or 8.

For systems where SMT is configured, the calculation of cores depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs at SMT level 1, 4 vCPUs at SMT level 2, 8 vCPUs at SMT level 4, and 16 vCPUs at SMT level 8. A large virtual machine (VM) might have 16 vCPUs, which at SMT level 8 is equivalent to 2 subscription cores. As subscriptions come in 2-core units, you need one 2-core subscription to cover these 2 cores or 16 vCPUs.
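
As a quick sanity check, you can script this arithmetic. The snippet below is purely illustrative; the vCPU count and SMT level are example values, not output taken from any OpenShift or IBM tool:

$ SMT=8; VCPUS=16
$ CORES=$(( VCPUS / SMT ))        # physical cores backing the VM
$ SUBS=$(( (CORES + 1) / 2 ))     # 2-core subscriptions needed, rounded up
$ echo "cores=${CORES} subscriptions=${SUBS}"
cores=2 subscriptions=1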

1.4.4. Shared Processor Pools

IBM Power Systems have a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for OpenShift Container Storage should be a multiple of core-pairs.

1.4.5. Subscription requirements

OpenShift Container Storage components can run on either OpenShift Container Platform worker or infrastructure nodes, for which either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7 can be used as the host operating system. When worker nodes are used for OpenShift Container Storage components, those nodes are required to have subscriptions for both OpenShift Container Platform and OpenShift Container Storage. When infrastructure nodes are used, those nodes are only required to have OpenShift Container Storage subscriptions. Labels are used to indicate whether a node should be considered a worker or infrastructure node, see How to use dedicated worker nodes for Red Hat OpenShift Container Storage in the Managing and Allocating Storage Resources guide.

1.5. Infrastructure requirements

1.5.1. Platform requirements

Red Hat OpenShift Container Storage can be combined with an OpenShift Container Platform release that is one minor release behind or ahead of the OpenShift Container Storage version.

OpenShift Container Storage 4.6 can run on:

  • OpenShift Container Platform 4.5 (one version behind) for internal mode only
  • OpenShift Container Platform 4.6 (same version)

For a complete list of supported platform versions, see the Red Hat OpenShift Container Storage and Red Hat OpenShift Container Platform interoperability matrix.

Note

When upgrading Red Hat OpenShift Container Platform, you must upgrade the Local Storage Operator version to match the Red Hat OpenShift Container Platform version so that the Local Storage Operator remains fully supported with Red Hat OpenShift Container Storage.
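
For example, you can check the currently installed Local Storage Operator version by listing the ClusterServiceVersions in its namespace. This is a hedged example; it assumes the operator is installed in the openshift-local-storage namespace used later in this guide:

$ oc get csv -n openshift-local-storage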

1.5.1.1. IBM Power Systems [Technology Preview]

Supports internal Red Hat OpenShift Container Storage clusters only.

An internal cluster must both meet the storage device requirements and have a storage class that provides local SSDs through the Local Storage Operator.

1.5.2. Resource requirements

OpenShift Container Storage services consist of an initial set of base services, followed by additional device sets. All of these OpenShift Container Storage services pods are scheduled by Kubernetes on OpenShift Container Platform nodes according to pod placement rules.

Table 1.2. Aggregate minimum resource requirements

Deployment Mode | Base services

Internal

  • 48 CPU (logical)
  • 192 GB memory
  • 3 storage devices, each with additional 500GB of disk

External

  • Not applicable

Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required.

1.5.3. Pod placement rules

Kubernetes is responsible for pod placement based on declarative placement rules. The OpenShift Container Storage base service placement rules for Internal cluster can be summarized as follows:

  • Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key
  • Nodes are sorted into pseudo failure domains if none exist
  • Components requiring high availability are spread across failure domains
  • A storage device must be accessible in each failure domain

This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels.

For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments.
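
For example, you can review the storage label and any pre-existing rack topology labels on your nodes with a single command. This is only an illustrative check; the topology.rook.io/rack label shown here is the rack label that this guide also removes during uninstall:

$ oc get nodes -L cluster.ocs.openshift.io/openshift-storage -L topology.rook.io/rack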

1.5.4. Storage device requirements

Use this section to understand the different storage capacity requirements that you can consider when planning deployments and upgrades on IBM Power Systems.

Local storage devices

For local storage deployment, any disk size of 4 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules.

Note

Disk partitioning is not supported.

Capacity planning

Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content.

Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support.

The following tables show example node configurations for Red Hat OpenShift Container Storage with dynamic storage devices.

Table 1.3. Example initial configurations with 3 nodes

Storage Device size | Storage Devices per node | Total capacity | Usable storage capacity
0.5 TiB | 1 | 1.5 TiB | 0.5 TiB
2 TiB | 1 | 6 TiB | 2 TiB
4 TiB | 1 | 12 TiB | 4 TiB

Table 1.4. Example of expanded configurations with 30 nodes (N)

Storage Device size (D) | Storage Devices per node (M) | Total capacity (D * M * N) | Usable storage capacity (D*M*N/3)
0.5 TiB | 3 | 45 TiB | 15 TiB
2 TiB | 6 | 360 TiB | 120 TiB
4 TiB | 9 | 1080 TiB | 360 TiB

To start deploying your OpenShift Container Storage on IBM Power Systems, you can use the deployment guide.

Chapter 2. Deploying using local storage devices

Deploying OpenShift Container Storage on OpenShift Container Platform using local storage devices provided by IBM Power Systems enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications.

Note

Only internal OpenShift Container Storage clusters are supported on IBM Power Systems. See Planning your deployment for more information about deployment requirements.

2.1. Requirements for installing OpenShift Container Storage using local storage devices

  • You must upgrade to OpenShift Container Platform 4.6 before deploying OpenShift Container Storage 4.6. For information, see Updating OpenShift Container Platform clusters guide.
  • The Local Storage Operator version must match the Red Hat OpenShift Container Platform version in order to have the Local Storage Operator fully supported with Red Hat OpenShift Container Storage. The Local Storage Operator does not get upgraded when Red Hat OpenShift Container Platform is upgraded.
  • You must have at least three OpenShift Container Platform worker nodes in the cluster with locally attached storage devices on each of them.

    • Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Container Storage.
    • The devices to be used must be empty, that is, there should be no physical volumes (PVs), volume groups (VGs), or logical volumes (LVs) remaining on the disks.
  • For minimum starting node requirements, see Resource requirements section in Planning guide.
  • You must have a minimum of three labeled nodes.

    • Each node that has local storage devices to be used by OpenShift Container Storage must have a specific label to deploy OpenShift Container Storage pods. To label the nodes, use the following command:

      $ oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''
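
      For example, to label three worker nodes (the node names here are placeholders) and confirm the result:

      $ oc label nodes worker-0 worker-1 worker-2 cluster.ocs.openshift.io/openshift-storage=''
      $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o name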

2.2. Installing Red Hat OpenShift Container Storage Operator

You can install Red Hat OpenShift Container Storage Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment.

Prerequisites

  • You must be logged into the OpenShift Container Platform (RHOCP) cluster.
  • You must have at least three worker nodes in the RHOCP cluster.
Note

When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace:

$ oc annotate namespace openshift-storage openshift.io/node-selector=

Procedure

  1. Click Operators → OperatorHub in the left pane of the OpenShift Web Console.

    Figure 2.1. List of operators in the Operator Hub

    Screenshot of list of operators in the Operator Hub of the OpenShift Web Console.
  2. Click on OpenShift Container Storage.

    You can use the Filter by keyword text box or the filter list to search for OpenShift Container Storage from the list of operators.

  3. On the OpenShift Container Storage operator page, click Install.

    Figure 2.2. Install Operator page

    Screenshot of Ocs Install Operator page.

    After you click the Install button, the following page appears.

    Screenshot of Install Operator page.
  4. On the Install Operator page, ensure the following options are selected:

    1. Update Channel as stable-4.6
    2. Installation Mode as A specific namespace on the cluster
    3. Installed Namespace as the Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
    4. Ensure that the Enable operator recommended cluster monitoring on this namespace checkbox is selected. This is required for cluster monitoring.
    5. Approval Strategy as Automatic
  5. Click Install.

    Figure 2.3. Installed Operators dashboard

    Screenshot of the installed operators.

Verification steps

  • Verify that OpenShift Container Storage Operator shows the Status as Succeeded on the Installed Operators dashboard.

2.3. Installing Local Storage Operator

Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Container Storage clusters on local storage devices.

Prerequisites

  • Create a namespace called openshift-local-storage as follows:

    1. Click Administration → Namespaces in the left pane of the OpenShift Web Console.
    2. Click Create Namespace.
    3. In the Create Namespace dialog box, enter openshift-local-storage for Name.
    4. Select No restrictions option for Default Network Policy.
    5. Click Create.

Procedure

  1. Click Operators → OperatorHub in the left pane of the OpenShift Web Console.
  2. Search for Local Storage Operator from the list of operators and click on it.
  3. Click Install.

    Figure 2.4. Install Operator page

    Screenshot of Install Local storage Operator page.

    After you click the Install button, the following page appears.

    Screenshot of Install Operator page.
  4. On the Install Operator page, ensure the following options are selected:

    1. Update Channel as stable-4.6
    2. Installation Mode as A specific namespace on the cluster
    3. Installed Namespace as openshift-local-storage.
    4. Approval Strategy as Automatic
  5. Click Install.

    Figure 2.5. Installed Operators dashboard

    Screenshot of the installed operators.

Verification steps

  • Verify that the Local Storage Operator shows the Status as Succeeded.

2.4. Finding available storage devices

Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Container Storage label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power Systems.

Procedure

  1. List and verify the name of the worker nodes with the OpenShift Container Storage label.

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

    Example output:

    NAME        STATUS   ROLES    AGE     VERSION
    worker-0    Ready    worker   39h     v1.18.3+2cf11e2
    worker-1    Ready    worker   39h     v1.18.3+2cf11e2
    worker-2    Ready    worker   39h     v1.18.3+2cf11e2
  2. Log in to each worker node that is used for OpenShift Container Storage resources and check the available raw block devices that have additional storage attached.

    $ oc debug node/<Nodename>

    Example output:

    $ oc debug node/worker-0
    Starting pod/worker-0-debug ...
    To use host binaries, run `chroot /host`
    Pod IP: 192.168.88.11
    If you don't see a command prompt, try pressing enter.
    sh-4.2# chroot /host
    sh-4.4# lsblk
    NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    loop0                          7:0    0  256G  0 loop
    vda                          252:0    0   40G  0 disk
    |-vda1                       252:1    0    4M  0 part
    |-vda2                       252:2    0  384M  0 part /boot
    `-vda4                       252:4    0 39.6G  0 part
      `-coreos-luks-root-nocrypt 253:0    0 39.6G  0 dm   /sysroot
    vdb                          252:16   0  512B  1 disk
    vdc                          252:32   0  256G  0 disk

    In this example, for worker-0, the available local device is vdc.

  3. Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Container Storage. See this Knowledge Base article for more details.

2.5. Creating OpenShift Container Storage cluster on IBM Power Systems

Prerequisites

  • Ensure that all the requirements in the Requirements for installing OpenShift Container Storage using local storage devices section are met.
  • You must have three worker nodes with the same storage type and size attached to each node (for example, 200 GB) to use local storage devices on IBM Power Systems.
  • Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Container Storage:

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

To identify storage devices on each node, refer to Finding available storage devices.

Procedure

  1. Log into the OpenShift Web Console.
  2. In the openshift-local-storage namespace, click Operators → Installed Operators from the left pane of the OpenShift Web Console to view the installed operators.

    Figure 2.6. Local Storage Operator page

    Screenshot of Local Storage operator dashboard.
    1. Click the Local Storage installed operator.
    2. On the Operator Details page, click the Local Volume Set link.

      Figure 2.7. Local Volume Set tab

      Screenshot of Local Volume Set tab on Local Storage Operator dashboard.
  3. Click Create Local Volume Set.

    Screenshot of Create Local Volume Set.
    1. Enter the Volume Set name. By default, the Storage Class name matches the Volume Set name.
    2. To discover available disks, you can choose one of the following:

      • All nodes to discover disks in all the nodes.
      • Select nodes to choose a subset of nodes from the list of nodes.
    3. Select the Disk type.
    4. In the Advanced options, you can choose Block as the Disk mode, set the minimum disk size equal to the size of the additional attached disk, set the maximum disk size, and set the maximum disks limit.
    5. Click Create.

      The Create button is enabled only after you select a minimum of three nodes. Local Volume Set is created with one volume per worker node with the available disks.
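
    Behind the scenes, the web console creates a LocalVolumeSet custom resource in the openshift-local-storage namespace. The following is a rough, hedged sketch of such a CR (the resource name, storage class name, and minimum size are assumed example values, not values generated by the console); if you prefer the CLI, you could save a similar definition to a file and apply it with oc apply -f <file>:

    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeSet
    metadata:
      name: localblock                  # example name; use your Volume Set name
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: In
                values:
                  - ""
      storageClassName: localblock      # storage class that the volume set provides
      volumeMode: Block
      deviceInclusionSpec:
        deviceTypes:
          - disk
        minSize: 256Gi                  # example; match the size of the attached disks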

  4. In the openshift-storage namespace, click Operators → Installed Operators from the left pane of the OpenShift Web Console to view the installed operators.

    Figure 2.8. OpenShift Container Storage Operator page

    Screenshot of OpenShift Container Storage operator dashboard.
    1. Click the OpenShift Container Storage installed operator.
    2. On the Operator Details page, click the Storage Cluster link.

      Figure 2.9. Storage Cluster tab

      Screenshot of Storage Cluster tab on OpenShift Container Storage Operator dashboard.
  5. Click Create Storage Cluster.

    Screenshot of Create Cluster Service page
  6. Select Internal-Attached devices for the Select Mode.
Screenshot of storage cluster creation.
  1. Select the required storage class.
  2. Enable or Disable data encryption for the storage cluster based on the requirement.
  3. The nodes corresponding to the storage class that you selected from the drop-down list are displayed.
  4. Click Create.

    The Create button is enabled only after you select a minimum of three nodes. A new storage cluster of three volumes will be created with one volume per worker node. The default configuration uses a replication factor of 3.
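
    You can also watch the storage cluster come up from the command line. This is a minimal, hedged check; the exact columns shown depend on the operator version:

    $ oc get storagecluster -n openshift-storage
    $ oc get pods -n openshift-storage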

Chapter 3. Verifying OpenShift Container Storage deployment for internal mode

Use this section to verify that OpenShift Container Storage is deployed correctly.

3.1. Verifying the state of the pods

To determine whether OpenShift Container Storage is deployed successfully, you can verify that the pods are in the Running state.

Procedure

  1. Click Workloads → Pods from the left pane of the OpenShift Web Console.
  2. Select openshift-storage from the Project drop down list.

    For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, “Pods corresponding to OpenShift Container Storage cluster”.

    Note

    When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can perform the following steps through the command line interface:

    1. Specify a blank node selector for the openshift-storage namespace.

      $ oc annotate namespace openshift-storage openshift.io/node-selector=
    2. Delete the original pods generated by the DaemonSets.

      $ oc delete pod -l app=csi-cephfsplugin -n openshift-storage
      $ oc delete pod -l app=csi-rbdplugin -n openshift-storage
  3. Verify that the following pods are in the Running and Completed states by clicking the Running and the Completed tabs (a command-line alternative is shown after the table):

    Table 3.1. Pods corresponding to OpenShift Container Storage cluster

    Component | Corresponding pods

    OpenShift Container Storage Operator

    ocs-operator-*

    (1 pod on any worker node)

    Rook-ceph Operator

    rook-ceph-operator-*

    (1 pod on any worker node)

    MON

    rook-ceph-mon-*

    (3 pods distributed across storage nodes)

    MGR

    rook-ceph-mgr-*

    (1 pod on any storage node)

    MDS

    rook-ceph-mds-ocs-storagecluster-cephfilesystem-*

    (2 pods distributed across storage nodes)

    RGW

    rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods distributed across storage nodes)

    CSI

    • cephfs

      • csi-cephfsplugin-* (1 pod on each worker node)
      • csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes)
    • rbd

      • csi-rbdplugin-* (1 pod on each worker node)
      • csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes)

    rook-ceph-drain-canary

    rook-ceph-drain-canary-*

    (1 pod on each storage node)

    rook-ceph-crashcollector

    rook-ceph-crashcollector-*

    (1 pod on each storage node)

    OSD

    • rook-ceph-osd-* (1 pod for each device)
    • rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
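
    As a command-line alternative to the console, you can list the pods in the openshift-storage namespace and compare them against the table above (pod names and counts vary with your cluster):

    $ oc get pods -n openshift-storage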

3.2. Verifying the OpenShift Container Storage cluster is healthy

  • Click Home → Overview from the left pane of the OpenShift Web Console and click the Persistent Storage tab.
  • In the Status card, verify that OCS Cluster and Data Resiliency have a green tick mark as shown in the following image:

    Figure 3.1. Health status card in Persistent Storage Overview Dashboard

    Screenshot of Health card in persistent storage dashboard
  • In the Details card, verify that the cluster information is displayed as follows:

    Service Name
    OpenShift Container Storage
    Cluster Name
    ocs-storagecluster-cephcluster
    Provider
    None
    Mode
    Internal
    Version
    ocs-operator:v4.6.0

For more information on the health of OpenShift Container Storage cluster using the persistent storage dashboard, see Monitoring OpenShift Container Storage.
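
You can also spot-check the underlying Ceph cluster health from the command line. This is a hedged example; the columns that are printed depend on the Rook-Ceph version in use:

$ oc get cephcluster -n openshift-storage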

3.3. Verifying that the OpenShift Container Storage specific storage classes exist

To verify that the storage classes exist in the cluster (a command-line alternative is shown after the list):

  • Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
  • Verify that the following storage classes are created with the OpenShift Container Storage cluster creation:

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
    • openshift-storage.noobaa.io
    • ocs-storagecluster-ceph-rgw
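
Alternatively, you can confirm the storage classes from the command line:

$ oc get storageclass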

Chapter 4. Uninstalling OpenShift Container Storage

4.1. Uninstalling OpenShift Container Storage on Internal mode

Use the steps in this section to uninstall OpenShift Container Storage.

Uninstall Annotations

Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:

  • uninstall.ocs.openshift.io/cleanup-policy: delete
  • uninstall.ocs.openshift.io/mode: graceful

The following table provides information on the different values that can be used with these annotations:

Table 4.1. uninstall.ocs.openshift.io uninstall annotations descriptions

Annotation | Value | Default | Behavior
cleanup-policy | delete | Yes | Rook cleans up the physical drives and the DataDirHostPath
cleanup-policy | retain | No | Rook does not clean up the physical drives and the DataDirHostPath
mode | graceful | Yes | Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator/user
mode | forced | No | Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist

You can change the cleanup policy or the uninstall mode by editing the value of the annotation by using the following commands:

$ oc annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite
storagecluster.ocs.openshift.io/ocs-storagecluster annotated
$ oc annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode="forced" --overwrite
storagecluster.ocs.openshift.io/ocs-storagecluster annotated
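
To confirm the annotation values that are currently set before you proceed, you can inspect the storage cluster metadata. This is an illustrative check; the jsonpath output is printed as unformatted JSON:

$ oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.metadata.annotations}{"\n"}'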

Prerequisites

  • Ensure that the OpenShift Container Storage cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage.
  • Ensure that applications are not consuming persistent volume claims (PVCs) using the storage classes provided by OpenShift Container Storage.
  • If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources which consumed them.

Procedure

  1. Delete the volume snapshots that are using OpenShift Container Storage.

    1. List the volume snapshots from all the namespaces.

      $ oc get volumesnapshot --all-namespaces
    2. From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Container Storage.

      $ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>
  2. Delete PVCs that are using OpenShift Container Storage.

    In the default uninstall mode (graceful), the uninstaller waits until all the PVCs that use OpenShift Container Storage are deleted.

    If you want to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to "forced" and skip this step. Doing so results in orphaned PVCs in the system.

    1. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Container Storage.

      See Section 4.2, “Removing monitoring stack from OpenShift Container Storage”

    2. Delete OpenShift Container Platform Registry PVCs using OpenShift Container Storage.

      See Section 4.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”

    3. Delete OpenShift Container Platform logging PVCs using OpenShift Container Storage.

      See Section 4.4, “Removing the cluster logging operator from OpenShift Container Storage”

    4. Delete other PVCs provisioned using OpenShift Container Storage.

      • Given below is a sample script to identify the PVCs provisioned using OpenShift Container Storage. The script ignores the PVCs that are used internally by OpenShift Container Storage.

        #!/bin/bash
        
        RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
        CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
        NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
        RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"
        
        NOOBAA_DB_PVC="noobaa-db"
        NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"
        
        # Find all the OCS StorageClasses
        OCS_STORAGECLASSES=$(oc get storageclasses | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}')
        
        # List PVCs in each of the StorageClasses
        for SC in $OCS_STORAGECLASSES
        do
                echo "======================================================================"
                echo "$SC StorageClass PVCs"
                echo "======================================================================"
                oc get pvc  --all-namespaces --no-headers 2>/dev/null | grep $SC | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
                echo
        done
        Note

        Omit RGW_PROVISIONER for cloud platforms.

      • Delete the PVCs.

        $ oc delete pvc <pvc name> -n <project-name>
        Note

        Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster.

  3. Delete the Storage Cluster object and wait for the removal of the associated resources.

    $ oc delete -n openshift-storage storagecluster --all --wait=true
  4. Check for cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed.

    $ oc get pods -n openshift-storage | grep -i cleanup
    NAME                                READY   STATUS      RESTARTS   AGE
    cluster-cleanup-job-<xx>        	0/1     Completed   0          8m35s
    cluster-cleanup-job-<yy>     		0/1     Completed   0          8m35s
    cluster-cleanup-job-<zz>     		0/1     Completed   0          8m35s
  5. Confirm that the directory /var/lib/rook is now empty. This directory will be empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default).

    $ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host  ls -l /var/lib/rook; done
  6. If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from OSD devices on all the OpenShift Container Storage nodes.

    1. Create a debug pod and chroot to the host on the storage node.

      $ oc debug node/<node name>
      $ chroot /host
    2. Get the device names and make a note of the OpenShift Container Storage devices.

      $ dmsetup ls
      ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)
    3. Remove the mapped device.

      $ cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt

      If the above command gets stuck due to insufficient privileges, run the following commands:

      • Press CTRL+Z to exit the above command.
      • Find PID of the cryptsetup process which was stuck.

        $ ps

        Example output:

        PID     TTY    TIME     CMD
        778825   ?     00:00:00 cryptsetup

        Take a note of the PID number to kill. In this example, the PID is 778825.

      • Terminate the process using kill command.

        $ kill -9 <PID>
      • Verify that the device name is removed.

        $ dmsetup ls
  7. Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project.

    For example:

    $ oc project default
    $ oc delete project openshift-storage --wait=true --timeout=5m

    The project is deleted if the following command returns a NotFound error.

    $ oc get project openshift-storage
    Note

    While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.

  8. Delete local storage operator configurations if you have deployed OpenShift Container Storage using local storage devices. See Removing local storage operator configurations.
  9. Unlabel the storage nodes.

    $ oc label nodes  --all cluster.ocs.openshift.io/openshift-storage-
    $ oc label nodes  --all topology.rook.io/rack-
  10. Remove the OpenShift Container Storage taint if the nodes were tainted.

    $ oc adm taint nodes --all node.ocs.openshift.io/storage-
  11. Confirm all PVs provisioned using OpenShift Container Storage are deleted. If there is any PV left in the Released state, delete it.

    $ oc get pv
    $ oc delete pv <pv name>
  12. Delete the Multicloud Object Gateway storageclass.

    $ oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m
  13. Remove CustomResourceDefinitions.

    $ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io  storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io --wait=true --timeout=5m
  14. To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console,

    1. Click Home → Overview to access the dashboard.
    2. Verify that the Persistent Storage tab no longer appears next to the Cluster tab.

4.1.1. Removing local storage operator configurations

Use the instructions in this section only if you have deployed OpenShift Container Storage using local storage devices.

Note

For OpenShift Container Storage deployments only using localvolume resources, go directly to step 8.

Procedure

  1. Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Container Storage.
  2. Set the variable SC to the StorageClass providing the LocalVolumeSet.

    $ export SC="<StorageClassName>"
  3. Delete the LocalVolumeSet.

    $ oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage
  4. Delete the local storage PVs for the given StorageClassName.

    $ oc get pv | grep $SC | awk '{print $1}'| xargs oc delete pv
  5. Delete the StorageClassName.

    $ oc delete sc $SC
  6. Delete the symlinks created by the LocalVolumeSet.

    $ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done
  7. Delete LocalVolumeDiscovery.

    $ oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage
  8. Remove LocalVolume resources (if any).

    Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or previous OpenShift Container Storage version. Also, ensure that these resources are not being used by other tenants on the cluster.

    For each of the local volumes, do the following:

    1. Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Container Storage.
    2. Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.

      For example:

      $ LV=local-block
      $ SC=localblock
    3. Delete the local volume resource.

      $ oc delete localvolume -n local-storage --wait=true $LV
    4. Delete the remaining PVs and StorageClasses if they exist.

      $ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
      $ oc delete storageclass $SC --wait --timeout=5m
    5. Clean up the artifacts from the storage nodes for that resource.

      $ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

      Example output:

      Starting pod/node-xxx-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
      Starting pod/node-yyy-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
      Starting pod/node-zzz-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...

4.2. Removing monitoring stack from OpenShift Container Storage

Use this section to clean up the monitoring stack from OpenShift Container Storage.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.

Prerequisites

Procedure

  1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                           READY   STATUS    RESTARTS   AGE
    pod/alertmanager-main-0         3/3     Running   0          8d
    pod/alertmanager-main-1         3/3     Running   0          8d
    pod/alertmanager-main-2         3/3     Running   0          8d
    pod/cluster-monitoring-operator-84457656d-pkrxm   1/1     Running   0          8d
    pod/grafana-79ccf6689f-2ll28    2/2     Running   0          8d
    pod/kube-state-metrics-7d86fb966-rvd9w            3/3     Running   0          8d
    pod/node-exporter-25894         2/2     Running   0          8d
    pod/node-exporter-4dsd7         2/2     Running   0          8d
    pod/node-exporter-6p4zc         2/2     Running   0          8d
    pod/node-exporter-jbjvg         2/2     Running   0          8d
    pod/node-exporter-jj4t5         2/2     Running   0          6d18h
    pod/node-exporter-k856s         2/2     Running   0          6d18h
    pod/node-exporter-rf8gn         2/2     Running   0          8d
    pod/node-exporter-rmb5m         2/2     Running   0          6d18h
    pod/node-exporter-zj7kx         2/2     Running   0          8d
    pod/openshift-state-metrics-59dbd4f654-4clng      3/3     Running   0          8d
    pod/prometheus-adapter-5df5865596-k8dzn           1/1     Running   0          7d23h
    pod/prometheus-adapter-5df5865596-n2gj9           1/1     Running   0          7d23h
    pod/prometheus-k8s-0            6/6     Running   1          8d
    pod/prometheus-k8s-1            6/6     Running   1          8d
    pod/prometheus-operator-55cfb858c9-c4zd9          1/1     Running   0          6d21h
    pod/telemeter-client-78fc8fc97d-2rgfp             3/3     Running   0          8d
    
    NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
  2. Edit the monitoring configmap.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  3. Remove any config sections that reference the OpenShift Container Storage storage classes as shown in the following example and save it.

    Before editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
        alertmanagerMain:
          volumeClaimTemplate:
            metadata:
              name: my-alertmanager-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-storagecluster-ceph-rbd
        prometheusK8s:
          volumeClaimTemplate:
            metadata:
              name: my-prometheus-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-storagecluster-ceph-rbd
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-12-02T07:47:29Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "22110"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
    .
    .
    .

    After editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-11-21T13:07:05Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "404352"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: d12c796a-0c5f-11ea-9832-063cd735b81c
    .
    .
    .

    In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs.

  4. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.

    $ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m

4.3. Removing OpenShift Container Platform registry from OpenShift Container Storage

Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure an alternative storage, see the image registry documentation.

The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace.

Prerequisites

  • The image registry should have been configured to use an OpenShift Container Storage PVC.

Procedure

  1. Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

    $ oc edit configs.imageregistry.operator.openshift.io

    Before editing

    .
    .
    .
    storage:
      pvc:
        claim: registry-cephfs-rwx-pvc
    .
    .
    .

    After editing

    .
    .
    .
    storage:
      emptyDir: {}
    .
    .
    .

    In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

  2. Delete the PVC.

    $ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m

4.4. Removing the cluster logging operator from OpenShift Container Storage

Use this section to clean up the cluster logging operator from OpenShift Container Storage.

The PVCs that are created as a part of configuring cluster logging operator are in the openshift-logging namespace.

Prerequisites

  • The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.

Procedure

  1. Remove the ClusterLogging instance in the namespace.

    $ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

    The PVCs in the openshift-logging namespace are now safe to delete.

  2. Delete PVCs.

    $ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m

Chapter 5. Scaling storage nodes

To scale the storage capacity of OpenShift Container Storage in internal mode, you can do either of the following:

  • Scale up storage nodes - Add storage capacity to the existing Red Hat OpenShift Container Storage worker nodes
  • Scale out storage nodes - Add new worker nodes containing storage capacity

5.1. Requirements for scaling storage nodes

Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Container Storage instance:

Important

Always ensure that you have plenty of storage capacity.

If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.

Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.

If you do run out of storage space completely, contact Red Hat Customer Support.

5.2. Scaling up storage by adding capacity to your OpenShift Container Storage nodes using local storage devices

Use this procedure to add storage capacity (additional storage devices) to your configured local storage based OpenShift Container Storage worker nodes on IBM Power Systems infrastructures.

Prerequisites

  • You must be logged into the OpenShift Container Platform (RHOCP) cluster.
  • You must have installed the Local Storage Operator. For the procedure, see Installing Local Storage Operator.

  • You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 0.5TB SSD) as the original OpenShift Container Storage StorageCluster was created with.

Procedure

  1. To add storage capacity to OpenShift Container Platform nodes with OpenShift Container Storage installed, you need to:

    1. Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide.

      Note

      Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage.

    2. Check whether the new disk is added to the node by running lsblk inside the node.

      $ oc debug node/worker-0
      $ lsblk

      Example output:

      Creating debug namespace/openshift-debug-node-ggrqr ...
      Starting pod/worker-2-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: 192.168.88.23
      If you don't see a command prompt, try pressing enter.
      sh-4.4# chroot /host
      sh-4.4# lsblk
      NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      loop0                          7:0    0  256G  0 loop
      vda                          252:0    0   40G  0 disk
      |-vda1                       252:1    0    4M  0 part
      |-vda2                       252:2    0  384M  0 part /boot
      `-vda4                       252:4    0 39.6G  0 part
        `-coreos-luks-root-nocrypt 253:0    0 39.6G  0 dm   /sysroot
      vdb                          252:16   0  512B  1 disk
      vdc                          252:32   0  256G  0 disk
      vdd                          252:48   0  256G  0 disk
      sh-4.4#
      sh-4.4#
      Removing debug pod ...
      Removing debug namespace/openshift-debug-node-ggrqr ...
    3. The newly added disks are automatically discovered by the LocalVolumeSet.
  2. Display the newly created PVs with storageclass name used in localVolumeSet CR.

    $ oc get pv | grep localblock | grep Available

    Example output:

    local-pv-290020c2   256Gi   RWO     Delete  Available   localblock      2m35s
    local-pv-7702952c   256Gi   RWO     Delete  Available   localblock      2m27s
    local-pv-a7a567d    256Gi   RWO     Delete  Available   localblock      2m22s
    ...

    There are three additional PVs of the same size in the Available state, which will be used for the new OSDs.

  3. Navigate to the OpenShift Web Console.
  4. Click on Operators on the left navigation bar.
  5. Select Installed Operators.
  6. In the window, click OpenShift Container Storage Operator.

  7. In the top navigation bar, scroll right and click Storage Cluster tab.

  8. The visible list should have only one item. Click the Action menu (⋮) on the far right to open the options menu.
  9. Select Add Capacity from the options menu.


    In this dialog box, set the Storage Class name to the storage class used in the LocalVolumeSet CR. The Available Capacity displayed is based on the local disks available in that storage class.
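
    If you are unsure which storage class name the LocalVolumeSet CR uses, you can read it from the CR directly. For example (a sketch that assumes the default openshift-local-storage project and a LocalVolumeSet named localblock):

    $ oc get localvolumeset localblock -n openshift-local-storage -o jsonpath='{.spec.storageClassName}{"\n"}'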

  10. When you are done with your settings, click Add. You might need to wait a couple of minutes for the storage cluster to reach the Ready state.
  11. Verify that the new OSDs and their corresponding new PVCs are created.

    $ oc get -n openshift-storage pods -l app=rook-ceph-osd

    Example output:

    NAME                               READY   STATUS    RESTARTS   AGE
    rook-ceph-osd-0-6f8655ff7b-gj226   1/1     Running   0          1h
    rook-ceph-osd-1-6c66d77f65-cfgfq   1/1     Running   0          1h
    rook-ceph-osd-2-69f6b4c597-mtsdv   1/1     Running   0          1h
    rook-ceph-osd-3-c784bdbd4-w4cmj    1/1     Running   0          5m
    rook-ceph-osd-4-6d99845f5b-k7f8n   1/1     Running   0          5m
    rook-ceph-osd-5-fdd9897c9-r9mgb    1/1     Running   0          5m

    In the above example, osd-3, osd-4, and osd-5 are the pods newly added to the OpenShift Container Storage cluster.

    $ oc get pvc -n openshift-storage |grep localblock

    Example output:

    ocs-deviceset-localblock-0-data-0-sfsgf   Bound    local-pv-8137c873      256Gi     RWO    localblock  1h
    ocs-deviceset-localblock-0-data-1-qhs9m   Bound    local-pv-290020c2      256Gi     RWO    localblock  10m
    ocs-deviceset-localblock-1-data-0-499r2   Bound    local-pv-ec7f2b80      256Gi     RWO    localblock  1h
    ocs-deviceset-localblock-1-data-1-p9rth   Bound    local-pv-a7a567d       256Gi     RWO    localblock  10m
    ocs-deviceset-localblock-2-data-0-8pzjr   Bound    local-pv-1e31f771      256Gi     RWO    localblock  1h
    ocs-deviceset-localblock-2-data-1-7zwwn   Bound    local-pv-7702952c      256Gi     RWO    localblock  10m

    In the above example, three new PVCs are created.

Verification steps

  1. Navigate to the Overview → Persistent Storage tab, then check the Capacity breakdown card.


    Note that the capacity increases based on your selections.

    Important

    OpenShift Container Storage does not support cluster reduction either by reducing OSDs or reducing nodes.

5.3. Scaling out storage capacity

To scale out storage capacity, you need to perform the following steps:

  • Add a new node
  • Verify that the new node is added successfully
  • Scale up the storage capacity

5.3.1. Adding a node

You can add nodes to increase the storage capacity when the existing worker nodes are already running at their maximum supported OSDs. Capacity is added in increments of 3 OSDs of the size selected during initial configuration.
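
To check how many OSDs are currently running on each worker node before you decide to scale out, you can list the OSD pods together with their node placement. For example:

  $ oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide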

To add a storage node for IBM Power Systems, see Section 5.3.1.1, “Adding a node using a local storage device”

5.3.1.1. Adding a node using a local storage device

Prerequisites

  • You must be logged into OpenShift Container Platform (RHOCP) cluster.
  • You must have three OpenShift Container Platform worker nodes, each with storage of the same type and size (for example, 2 TB SSD) attached, matching the storage with which the original OpenShift Container Storage StorageCluster was created.

Procedure

  1. Perform the following steps:

    1. Get a new IBM Power machine with the required infrastructure. See Platform requirements.
    2. Create a new OpenShift Container Platform node using the new IBM Power machine.
  2. Check for certificate signing requests (CSRs) related to OpenShift Container Storage that are in Pending state:

    $ oc get csr
  3. Approve all required OpenShift Container Storage CSRs for the new node:

    $ oc adm certificate approve <Certificate_Name>
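
    If there are many pending CSRs, you can approve them in one pass. The following one-liner is only a sketch; it approves every CSR currently in Pending state, so use it only when you expect all pending requests to belong to the node you just added:

    $ oc get csr | grep -w Pending | awk '{print $1}' | xargs oc adm certificate approve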
  4. Click Compute → Nodes, and confirm that the new node is in Ready state.
  5. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
  6. Click Operators → Installed Operators from the OpenShift Web Console.

    From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed.

  7. Click on Local Storage.
  8. Click the Local Volume Sets tab.
  9. Beside the LocalVolumeSet, click Action menu (⋮) → Edit Local Volume Set.
  10. In the YAML, add the hostname of the new node to the values field under the node selector, as shown in the example below.

    Figure 5.1. YAML showing the addition of new hostnames

    Screenshot of YAML showing the addition of new hostnames.
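
    For example, after adding a new node, the node selector section of the LocalVolumeSet might look like this (a sketch that assumes existing workers worker-0 to worker-2 and a new node named worker-3):

      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker-0
                  - worker-1
                  - worker-2
                  - worker-3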
  11. Click Save.
Note

It is recommended to add three nodes, each in a different zone. You must add three nodes and perform this procedure for all of them.

To verify that the new node is added successfully, see Section 5.3.2, “Verifying the addition of a new node”.

5.3.2. Verifying the addition of a new node

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
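
    Alternatively, you can list the OpenShift Container Storage pods scheduled on the new node from the command line. For example (a sketch; replace new-node-name with the name of the node you added):

    $ oc get pods -n openshift-storage -o wide | grep new-node-name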

5.4. Scaling up storage capacity

To scale up storage capacity, see Scaling up storage by adding capacity.

Chapter 6. Replacing nodes

For OpenShift Container Storage, node replacement can be performed proactively for an operational node or reactively for a failed node in IBM Power Systems deployments.

6.1. Replacing an operational or failed storage node on IBM Power Systems

Prerequisites

  • Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
  • You must be logged into OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Check the labels on the failed node and make note of the rack label.

    $ oc get nodes --show-labels | grep failed-node-name
  2. Identify the mon (if any) and object storage device (OSD) pods that are running in the failed node.

    $ oc get pods -n openshift-storage -o wide | grep -i failed-node-name
  3. Scale down the deployments of the pods identified in the previous step.

    For example:

    $ oc scale deployment rook-ceph-mon-a --replicas=0 -n openshift-storage
    $ oc scale deployment rook-ceph-osd-1 --replicas=0 -n openshift-storage
    $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name  --replicas=0 -n openshift-storage
  4. Mark the failed node so that it cannot be scheduled for work.

    $ oc adm cordon failed-node-name
  5. Drain the failed node of existing work.

    $ oc adm drain failed-node-name --force --delete-local-data --ignore-daemonsets
    Note

    If the failed node is not connected to the network, remove the pods running on it by using the command:

    $ oc get pods -A -o wide | grep -i failed-node-name |  awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2  " --grace-period=0 " " --force ")}'
    $ oc adm drain failed-node-name --force --delete-local-data --ignore-daemonsets
  6. Delete the failed node.

    $ oc delete node failed-node-name
  7. Get a new IBM Power machine with the required infrastructure. See Installing a cluster on IBM Power Systems.
  8. Create a new OpenShift Container Platform node using the new IBM Power Systems machine.
  9. Check for certificate signing requests (CSRs) related to OpenShift Container Storage that are in Pending state:

    $ oc get csr
  10. Approve all required OpenShift Container Storage CSRs for the new node:

    $ oc adm certificate approve certificate-name
  11. Click Compute → Nodes in the OpenShift Web Console, and confirm that the new node is in Ready state.
  12. Apply the OpenShift Container Storage label to the new node using your preferred interface:

    • From OpenShift web console

      1. For the new node, click Action Menu (⋮) → Edit Labels.
      2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    • From the command line interface

      1. Execute the following command to apply the OpenShift Container Storage label to the new node:

        $ oc label node new-node-name cluster.ocs.openshift.io/openshift-storage=""
  13. Add the local storage devices available in these worker nodes to the OpenShift Container Storage StorageCluster.

    1. Determine which localVolumeSet to edit.

      Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.

      # oc get -n local-storage-project localvolumeset
      NAME           AGE
      localblock    25h
    2. Update the localVolumeSet definition to include the new node and remove the failed node.

      # oc edit -n local-storage-project localvolumeset localblock
      [...]
          nodeSelector:
            nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  #- worker-0
                  - worker-1
                  - worker-2
                  - worker-3
      [...]

      Remember to save before exiting the editor.

  14. Verify that the new localblock PV is available.

    $ oc get pv | grep localblock
    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                     STORAGECLASS   AGE
    local-pv-3e8964d3   500Gi      RWO            Delete           Bound       ocs-deviceset-localblock-2-data-0-mdbg9   localblock     25h
    local-pv-414755e0   500Gi      RWO            Delete           Bound       ocs-deviceset-localblock-1-data-0-4cslf   localblock     25h
    local-pv-b481410    500Gi      RWO            Delete           Available                                             localblock     3m24s
    local-pv-5c9b8982   500Gi      RWO            Delete           Bound       ocs-deviceset-localblock-0-data-0-g2mmc   localblock     25h
  15. Change to the openshift-storage project.

    $ oc project openshift-storage
  16. Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required.

    1. Identify the PVC, because you will later need to delete the PV associated with that specific PVC.

      # osd_id_to_remove=1
      # oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc

      where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1.

      Example output:

      ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
          ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc

      In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc.

    2. Remove the failed OSD from the cluster.

      # oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove},${osd_id_to_remove2} | oc -n openshift-storage create -f -
  17. Verify that the OSD is removed successfully by checking the status of the ocs-osd-removal pod.

    A status of Completed confirms that the OSD removal job succeeded.

    # oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage
    Note

    If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:

    # oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1
  18. Delete the PV associated with the failed node.

    1. Identify the PV associated with the PVC.

      # oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>

      where, x, y, and pvc-suffix are the values in the DeviceSet identified in the previous step.

      For example:

      # oc get -n openshift-storage pvc ocs-deviceset-localblock-0-data-0-g2mmc
      NAME                      STATUS        VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      ocs-deviceset-localblock-0-data-0-g2mmc   Bound   local-pv-5c9b8982   500Gi      RWO            localblock     24h

      In this example, the associated PV is local-pv-5c9b8982.

    2. Delete the PV.

      # oc delete pv <persistent-volume>

      For example:

      # oc delete pv local-pv-5c9b8982
      persistentvolume "local-pv-5c9b8982" deleted
  19. Delete the crashcollector pod deployment.

    $ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage
  20. Deploy the new OSD by restarting the rook-ceph-operator to force operator reconciliation.

    # oc get -n openshift-storage pod -l app=rook-ceph-operator

    Example output:

    NAME                                  READY   STATUS    RESTARTS   AGE
    rook-ceph-operator-77758ddc74-dlwn2   1/1     Running   0          1d20h
    1. Delete the rook-ceph-operator.

      # oc delete -n openshift-storage pod rook-ceph-operator-77758ddc74-dlwn2

      Example output:

      pod "rook-ceph-operator-77758ddc74-dlwn2" deleted
  21. Verify that the rook-ceph-operator pod is restarted.

    # oc get -n openshift-storage pod -l app=rook-ceph-operator

    Example output:

    NAME                                  READY   STATUS    RESTARTS   AGE
    rook-ceph-operator-77758ddc74-wqf25   1/1     Running   0          66s

    Creation of the new OSD and mon might take several minutes after the operator restarts.

  22. Delete the ocs-osd-removal job.

    # oc delete job ocs-osd-removal-${osd_id_to_remove}

    For example:

    # oc delete job ocs-osd-removal-1
    job.batch "ocs-osd-removal-1" deleted

Verification steps

  • Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
  • Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  • Verify that all other required OpenShift Container Storage pods are in Running state.

    • Make sure that the new incremental mon is created and is in the Running state.

      $ oc get pod -n openshift-storage | grep mon

      Example output:

      rook-ceph-mon-b-74f6dc9dd6-4llzq   1/1     Running     0          6h14m
      rook-ceph-mon-c-74948755c-h7wtx    1/1     Running     0          4h24m
      rook-ceph-mon-d-598f69869b-4bv49   1/1     Running     0          162m

      OSD and Mon might take several minutes to get to the Running state.

  • If verification steps fail, contact Red Hat Support.

Chapter 7. Replacing Storage Devices

7.1. Replacing operational or failed storage devices on IBM Power Systems

You can replace an object storage device (OSD) in OpenShift Container Storage deployed using local storage devices on IBM Power Systems. Use this procedure when an underlying storage device needs to be replaced.

Procedure

  1. Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it.

    # oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide

    Example output:

    rook-ceph-osd-0-86bf8cdc8-4nb5t    0/1     CrashLoopBackOff   0          24h   10.129.2.26   worker-0   <none>   <none>
    rook-ceph-osd-1-7c99657cfb-jdzvz   1/1     Running            0          24h   10.128.2.46   worker-1   <none>   <none>
    rook-ceph-osd-2-5f9f6dfb5b-2mnw9   1/1     Running            0          24h   10.131.0.33   worker-2   <none>   <none>

    In this example, rook-ceph-osd-0-86bf8cdc8-4nb5t needs to be replaced and worker-0 is the RHOCP node on which the OSD is scheduled.

    Note

    If the OSD to be replaced is healthy, the status of the pod will be Running.

  2. Scale down the OSD deployment for the OSD to be replaced.

    # osd_id_to_remove=0
    # oc scale -n openshift-storage deployment rook-ceph-osd-${osd_id_to_remove} --replicas=0

    where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0.

    Example output:

    deployment.apps/rook-ceph-osd-0 scaled
  3. Verify that the rook-ceph-osd pod is terminated.

    # oc get -n openshift-storage pods -l ceph-osd-id=${osd_id_to_remove}

    Example output:

    No resources found in openshift-storage namespace.
    Note

    If the rook-ceph-osd pod is in Terminating state, use the force option to delete the pod.

    # oc delete pod rook-ceph-osd-0-86bf8cdc8-4nb5t --grace-period=0 --force

    Example output:

    warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
      pod "rook-ceph-osd-0-86bf8cdc8-4nb5t" force deleted
  4. Remove the old OSD from the cluster so that a new OSD can be added.

    1. Identify the DeviceSet associated with the OSD to be replaced.

      # oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc

      Example output:

      ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl
          ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl

      In this example, the PVC name is ocs-deviceset-localblock-0-data-0-64xjl.

    2. Remove the old OSD from the cluster

      # oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc -n openshift-storage create -f -

      Example Output:

      job.batch/ocs-osd-removal-0 created
      Warning

      This step results in OSD being completely removed from the cluster. Make sure that the correct value of osd_id_to_remove is provided.

  5. Verify that the OSD is removed successfully by checking the status of the ocs-osd-removal pod. A status of Completed confirms that the OSD removal job completed successfully.

    # oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage
    Note

    If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:

    # oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1
  6. Delete the persistent volume claim (PVC) resources associated with the OSD to be replaced.

    1. Identify the PV associated with the PVC.

      # oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>

      where, x, y, and pvc-suffix are the values in the DeviceSet identified in step 4.

      Example output:

      NAME                      STATUS        VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      ocs-deviceset-localblock-0-data-0-64xjl   Bound    local-pv-8137c873    256Gi      RWO     localblock     24h

      In this example, the associated PV is local-pv-8137c873.

    2. Identify the name of the device to be replaced.

      # oc get pv local-pv-<pv-suffix> -o yaml | grep path

      where, pv-suffix is the value in the PV name identified in an earlier step.

      Example output:

      path: /mnt/local-storage/localblock/vdc

      In this example, the device name is vdc.

    3. Identify the prepare-pod associated with the OSD to be replaced.

      # oc describe -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix> | grep Mounted

      where, x, y, and pvc-suffix are the values in the DeviceSet identified in an earlier step.

      Example output:

      Mounted By:    rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc

      In this example the prepare-pod name is rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc.

    4. Delete the osd-prepare pod before removing the associated PVC.

      # oc delete -n openshift-storage pod rook-ceph-osd-prepare-ocs-deviceset-<x>-<y>-<pvc-suffix>-<pod-suffix>

      where, x, y, pvc-suffix, and pod-suffix are the values in the osd-prepare pod name identified in an earlier step.

      Example output:

      pod "rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc" deleted
    5. Delete the PVC associated with the OSD to be replaced.

      # oc delete -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>

      where, x, y, and pvc-suffix are the values in the DeviceSet identified in an earlier step.

      Example output:

      persistentvolumeclaim "ocs-deviceset-localblock-0-data-0-64xjl" deleted
  7. Replace the old device and use the new device to create a new OpenShift Container Platform PV.

    1. Log in to the OpenShift Container Platform node that has the device to be replaced. In this example, the OpenShift Container Platform node is worker-0.

      # oc debug node/worker-0

      Example output:

      Starting pod/worker-0-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: 192.168.88.21
      If you don't see a command prompt, try pressing enter.
      # chroot /host
    2. Record the /dev/disk that is to be replaced using the device name, vdc, identified earlier.

      # ls -alh /mnt/local-storage/localblock

      Example output:

      total 0
      drwxr-xr-x. 2 root root 17 Nov  18 15:23 .
      drwxr-xr-x. 3 root root 24 Nov  18 15:23 ..
      lrwxrwxrwx. 1 root root  8 Nov  18 15:23 vdc -> /dev/vdc
    3. Find the name of the LocalVolumeSet CR, and remove or comment out the device /dev/disk that is to be replaced.

      # oc get -n openshift-local-storage localvolumeset
      NAME          AGE
      localblock   25h
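
      For example, to open the CR for editing (assuming the default openshift-local-storage project and the localblock LocalVolumeSet shown above):

      # oc edit -n openshift-local-storage localvolumeset localblock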
  8. Log in to the OpenShift Container Platform node that has the device to be replaced, and remove the old symlink.

    # oc debug node/worker-0

    Example output:

    Starting pod/worker-0-debug ...
    To use host binaries, run `chroot /host`
    Pod IP: 192.168.88.21
    If you don't see a command prompt, try pressing enter.
    # chroot /host
    1. Identify the old symlink for the device name to be replaced. In this example, the device name is vdc.

      # ls -alh /mnt/local-storage/localblock

      Example output:

      total 0
      drwxr-xr-x. 2 root root 17 Nov  18 15:23 .
      drwxr-xr-x. 3 root root 24 Nov  18 15:23 ..
      lrwxrwxrwx. 1 root root  8 Nov  18 15:23 vdc -> /dev/vdc
    2. Remove the symlink.

      # rm /mnt/local-storage/localblock/vdc
    3. Verify that the symlink is removed.

      # ls -alh /mnt/local-storage/localblock

      Example output:

      total 0
      drwxr-xr-x. 2 root root 6 Nov 18 17:11 .
      drwxr-xr-x. 3 root root 24 Nov 18 15:23 ..
      Important

      For new deployments of OpenShift Container Storage 4.5 or later, LVM is not in use; ceph-volume raw mode is used instead. Therefore, additional validation is not needed and you can proceed to the next step.

  9. Delete the PV associated with the device to be replaced, which was identified in earlier steps. In this example, the PV name is local-pv-8137c873.

    # oc delete pv local-pv-8137c873

    Example output:

    persistentvolume "local-pv-8137c873" deleted
  10. Replace the device with the new device.
  11. Log back in to the correct OpenShift Container Platform node and identify the device name for the new drive. The device name changes unless you are reseating the same device.

    # lsblk

    Example output:

    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    vda                          252:0    0   40G  0 disk
    |-vda1                       252:1    0    4M  0 part
    |-vda2                       252:2    0  384M  0 part /boot
    `-vda4                       252:4    0 39.6G  0 part
      `-coreos-luks-root-nocrypt 253:0    0 39.6G  0 dm   /sysroot
    vdb                          252:16   0  512B  1 disk
    vdd                          252:32   0  256G  0 disk

    In this example, the new device name is vdd.

  12. After the new /dev/disk is available, it is automatically detected by the LocalVolumeSet.
  13. Verify that there is a new PV in Available state and of the correct size.

    # oc get pv | grep 256Gi

    Example output:

    local-pv-1e31f771   256Gi   RWO    Delete  Bound  openshift-storage/ocs-deviceset-localblock-2-data-0-6xhkf   localblock    24h
    local-pv-ec7f2b80   256Gi   RWO    Delete  Bound  openshift-storage/ocs-deviceset-localblock-1-data-0-hr2fx   localblock    24h
    local-pv-8137c873   256Gi   RWO    Delete  Available                                                          localblock    32m
  14. Create a new OSD for the new device.

    1. Deploy the new OSD by restarting the rook-ceph-operator to force operator reconciliation.

      1. Identify the name of the rook-ceph-operator.

        # oc get -n openshift-storage pod -l app=rook-ceph-operator

        Example output:

        NAME                                  READY   STATUS    RESTARTS   AGE
        rook-ceph-operator-85f6494db4-sg62v   1/1     Running   0          1d20h
      2. Delete the rook-ceph-operator.

        # oc delete -n openshift-storage pod rook-ceph-operator-85f6494db4-sg62v

        Example output:

        pod "rook-ceph-operator-85f6494db4-sg62v" deleted

        In this example, the rook-ceph-operator pod name is rook-ceph-operator-85f6494db4-sg62v.

      3. Verify that the rook-ceph-operator pod is restarted.

        # oc get -n openshift-storage pod -l app=rook-ceph-operator

        Example output:

        NAME                                  READY   STATUS    RESTARTS   AGE
        rook-ceph-operator-85f6494db4-wx9xx   1/1     Running   0          50s

        Creation of the new OSD may take several minutes after the operator restarts.

Verification steps

  • Verify that there is a new OSD running and a new PVC created.

    # oc get -n openshift-storage pods -l app=rook-ceph-osd

    Example output:

    rook-ceph-osd-0-76d8fb97f9-mn8qz   1/1     Running   0          23m
    rook-ceph-osd-1-7c99657cfb-jdzvz   1/1     Running   1          25h
    rook-ceph-osd-2-5f9f6dfb5b-2mnw9   1/1     Running   0          25h
    # oc get -n openshift-storage pvc | grep localblock

    Example output:

    ocs-deviceset-localblock-0-data-0-q4q6b   Bound    local-pv-8137c873       256Gi     RWO         localblock         10m
    ocs-deviceset-localblock-1-data-0-hr2fx   Bound    local-pv-ec7f2b80       256Gi     RWO         localblock         1d20h
    ocs-deviceset-localblock-2-data-0-6xhkf   Bound    local-pv-1e31f771       256Gi     RWO         localblock         1d20h
  • Log in to OpenShift Web Console and view the storage dashboard.

    Figure 7.1. OSD status in OpenShift Container Platform storage dashboard after device replacement

    RHOCP storage dashboard showing the healthy OSD.

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.