Chapter 1. OpenShift Container Storage deployed using dynamic devices

1.1. OpenShift Container Storage deployed on AWS

1.1.1. Replacing an operational AWS node on user-provisioned infrastructure

Perform this procedure to replace an operational node on AWS user-provisioned infrastructure.

Prerequisites

  • Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
  • You must be logged into the OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Identify the node that needs to be replaced.
  2. Mark the node as unschedulable using the following command:

    $ oc adm cordon <node_name>
  3. Drain the node using the following command:

    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
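
    Optionally, you can confirm that the drain completed by listing the pods still scheduled on the node; after a successful drain only DaemonSet pods should remain. This is a minimal check, assuming the standard spec.nodeName field selector:

    $ oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name>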

  4. Delete the node using the following command:

    $ oc delete nodes <node_name>
  5. Create a new AWS machine instance with the required infrastructure. See Platform requirements.
  6. Create a new OpenShift Container Platform node using the new AWS machine instance.
  7. Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:

    $ oc get csr
  8. Approve all required OpenShift Container Platform CSRs for the new node:

    $ oc adm certificate approve <Certificate_Name>
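
    If several CSRs are pending, you can review and then approve them in a single pass. A minimal sketch, assuming grep, awk, and xargs are available on the workstation:

    $ oc get csr | grep Pending | awk '{print $1}' | xargs oc adm certificate approve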
  9. Click Compute → Nodes, confirm that the new node is in Ready state.
  10. Apply the OpenShift Container Storage label to the new node.

    From the web user interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From the command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
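
        If the output is long, you can filter for the encrypted devices directly; a minimal sketch using standard lsblk columns:

        $ lsblk -o NAME,TYPE,MOUNTPOINT | grep crypt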
  6. If verification steps fail, contact Red Hat Support.

1.1.2. Replacing an operational AWS node on installer-provisioned infrastructure

Use this procedure to replace an operational node on AWS installer-provisioned infrastructure (IPI).

Procedure

  1. Log in to OpenShift Web Console and click ComputeNodes.
  2. Identify the node that needs to be replaced. Take a note of its Machine Name.
  3. Mark the node as unschedulable using the following command:

    $ oc adm cordon <node_name>
  4. Drain the node using the following command:

    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  5. Click Compute → Machines. Search for the required machine.
  6. Beside the required machine, click the Action menu (⋮) → Delete Machine.
  7. Click Delete to confirm the machine deletion. A new machine is automatically created.
  8. Wait for the new machine to start and transition into the Running state.

    Important

    This activity may take 5-10 minutes or more.
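
    From the command line, you can watch the replacement machine's phase instead of the console; this sketch assumes that the machine objects live in the openshift-machine-api namespace, as on a default installer-provisioned cluster:

    $ oc get machines -n openshift-machine-api -w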

  9. Click Compute → Nodes, confirm that the new node is in Ready state.
  10. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.1.3. Replacing a failed AWS node on user-provisioned infrastructure

Perform this procedure to replace a failed node which is not operational on AWS user-provisioned infrastructure (UPI) for OpenShift Container Storage.

Prerequisites

  • Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
  • You must be logged into the OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Identify the AWS machine instance of the node that needs to be replaced.
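
    One way to map the node to its EC2 instance is to read the node's provider ID, which ends with the instance ID. A minimal sketch, assuming the standard providerID field on the Node object:

    $ oc get node <node_name> -o jsonpath='{.spec.providerID}{"\n"}'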
  2. Log in to AWS and terminate the identified AWS machine instance.
  3. Create a new AWS machine instance with the required infrastructure. See Platform requirements.
  4. Create a new OpenShift Container Platform node using the new AWS machine instance.
  5. Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:

    $ oc get csr
  6. Approve all required OpenShift Container Platform CSRs for the new node:

    $ oc adm certificate approve <Certificate_Name>
  7. Click Compute → Nodes, confirm that the new node is in Ready state.
  8. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.1.4. Replacing a failed AWS node on installer-provisioned infrastructure

Perform this procedure to replace a failed node which is not operational on AWS installer-provisioned infrastructure (IPI) for OpenShift Container Storage.

Procedure

  1. Log in to OpenShift Web Console and click ComputeNodes.
  2. Identify the faulty node and click on its Machine Name.
  3. Click Actions → Edit Annotations, and click Add More.
  4. Add machine.openshift.io/exclude-node-draining and click Save.
  5. Click Actions → Delete Machine, and click Delete.
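
    If you prefer the command line for the previous three steps, a rough equivalent is shown below; it assumes that the machine objects are in the openshift-machine-api namespace and that an empty annotation value is sufficient:

    $ oc annotate machine <machine_name> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""
    $ oc delete machine <machine_name> -n openshift-machine-api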
  6. A new machine is automatically created. Wait for the new machine to start.

    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  7. Click Compute → Nodes, confirm that the new node is in Ready state.
  8. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
  9. (Optional) If the failed AWS instance is not removed automatically, terminate the instance from the AWS console.

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.2. OpenShift Container Storage deployed on VMware

1.2.1. Replacing an operational VMware node on user-provisioned infrastructure

Perform this procedure to replace an operational node on VMware user-provisioned infrastructure (UPI).

Prerequisites

  • Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
  • You must be logged into the OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Identify the node and its VM that need to be replaced.
  2. Mark the node as unschedulable using the following command:

    $ oc adm cordon <node_name>
  3. Drain the node using the following command:

    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  4. Delete the node using the following command:

    $ oc delete nodes <node_name>
  5. Log in to vSphere and terminate the identified VM.

    Important

    The VM should be deleted only from the inventory and not from the disk.

  6. Create a new VM on vSphere with the required infrastructure. See Platform requirements.
  7. Create a new OpenShift Container Platform worker node using the new VM.
  8. Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:

    $ oc get csr
  9. Approve all required OpenShift Container Platform CSRs for the new node:

    $ oc adm certificate approve <Certificate_Name>
  10. Click Compute → Nodes, confirm that the new node is in Ready state.
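
    Alternatively, you can wait for the node to report Ready from the command line; a minimal sketch using oc wait:

    $ oc wait --for=condition=Ready node/<new_node_name> --timeout=15m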
  11. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.2.2. Replacing an operational VMware node on installer-provisioned infrastructure

Use this procedure to replace an operational node on VMware installer-provisioned infrastructure (IPI).

Procedure

  1. Log in to OpenShift Web Console and click ComputeNodes.
  2. Identify the node that needs to be replaced. Take a note of its Machine Name.
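
    You can also map the node to its machine from the command line; this sketch assumes that the machine objects are in the openshift-machine-api namespace and that the wide output includes a NODE column, as on recent clusters:

    $ oc get machines -n openshift-machine-api -o wide | grep <node_name>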
  3. Mark the node as unschedulable using the following command:

    $ oc adm cordon <node_name>
  4. Drain the node using the following command:

    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  5. Click Compute → Machines. Search for the required machine.
  6. Beside the required machine, click the Action menu (⋮) → Delete Machine.
  7. Click Delete to confirm the machine deletion. A new machine is automatically created.
  8. Wait for the new machine to start and transition into the Running state.

    Important

    This activity may take 5-10 minutes or more.

  9. Click Compute → Nodes, confirm that the new node is in Ready state.
  10. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.2.3. Replacing a failed VMware node on user-provisioned infrastructure

Perform this procedure to replace a failed node on VMware user-provisioned infrastructure (UPI).

Prerequisites

  • Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
  • You must be logged into the OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Identify the node and its VM that need to be replaced.
  2. Delete the node using the following command:

    $ oc delete nodes <node_name>
  3. Log in to vSphere and terminate the identified VM.

    Important

    The VM should be deleted only from the inventory and not from the disk.

  4. Create a new VM on vSphere with the required infrastructure. See Platform requirements.
  5. Create a new OpenShift Container Platform worker node using the new VM.
  6. Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:

    $ oc get csr
  7. Approve all required OpenShift Container Platform CSRs for the new node:

    $ oc adm certificate approve <Certificate_Name>
  8. Click Compute → Nodes, confirm that the new node is in Ready state.
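
    From the command line, you can list the worker nodes and check the STATUS column; the label shown is the standard worker role label:

    $ oc get nodes -l node-role.kubernetes.io/worker -o wide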
  9. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.2.4. Replacing a failed VMware node on installer-provisioned infrastructure

Perform this procedure to replace a failed node which is not operational on VMware installer-provisioned infrastructure (IPI) for OpenShift Container Storage.

Procedure

  1. Log in to OpenShift Web Console and click ComputeNodes.
  2. Identify the faulty node and click on its Machine Name.
  3. Click Actions → Edit Annotations, and click Add More.
  4. Add machine.openshift.io/exclude-node-draining and click Save.
  5. Click Actions → Delete Machine, and click Delete.
  6. A new machine is automatically created. Wait for the new machine to start.

    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  7. Click Compute → Nodes, confirm that the new node is in Ready state.
  8. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
  9. (Optional) If the failed VM is not removed automatically, terminate the VM from vSphere.

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
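
    A quick way to spot anything unhealthy is to filter out the pods that are Running or Completed; a minimal sketch (the header line also remains in the output):

    $ oc get pods -n openshift-storage | egrep -v 'Running|Completed'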
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.3. OpenShift Container Storage deployed on Red Hat Virtualization

1.3.1. Replacing an operational Red Hat Virtualization node on installer-provisioned infrastructure

Use this procedure to replace an operational node on Red Hat Virtualization installer-provisioned infrastructure (IPI).

Procedure

  1. Log in to OpenShift Web Console and click Compute → Nodes.
  2. Identify the node that needs to be replaced. Take a note of its Machine Name.
  3. Mark the node as unschedulable using the following command:

    $ oc adm cordon <node_name>
  4. Drain the node using the following command:

    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  5. Click Compute → Machines. Search for the required machine.
  6. Beside the required machine, click the Action menu (⋮) → Delete Machine.
  7. Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into the Running state.

    Important

    This activity may take 5-10 minutes or more.

  8. Click Compute → Nodes, confirm that the new node is in Ready state.
  9. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
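
    An alternative that avoids the grep and cut pipeline is to select nodes by label directly; this sketch uses the existence form of the label selector:

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o name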
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.3.2. Replacing a failed Red Hat Virtualization node on installer-provisioned infrastructure

Perform this procedure to replace a failed node which is not operational on Red Hat Virtualization installer-provisioned infrastructure (IPI) for OpenShift Container Storage.

Procedure

  1. Log in to OpenShift Web Console and click Compute → Nodes.
  2. Identify the faulty node. Take a note of its Machine Name.
  3. Log in to Red Hat Virtualization Administration Portal and remove the virtual disks associated with mon and OSDs from the failed Virtual Machine.

    This step is required so that the disks are not deleted when the VM instance is deleted as part of the Delete machine step.

    Important

    Do not select the Remove Permanently option when removing the disk(s).
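
    To see which mon and OSD pods were running on the failed node, and therefore which virtual disks must be preserved, you can filter the pods by node name; the rook-ceph-mon and rook-ceph-osd prefixes used here are the usual pod names:

    $ oc get pods -n openshift-storage -o wide | egrep 'rook-ceph-mon|rook-ceph-osd' | grep <node_name>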

  4. In the OpenShift Web Console, click Compute → Machines. Search for the required machine.
  5. Click Actions → Edit Annotations, and click Add More.
  6. Add machine.openshift.io/exclude-node-draining and click Save.
  7. Click Actions → Delete Machine, and click Delete.

    A new machine is automatically created. Wait for the new machine to start.

    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  8. Click Compute → Nodes, confirm that the new node is in Ready state.
  9. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
  10. (Optional) If the failed VM is not removed automatically, remove the VM from the Red Hat Virtualization Administration Portal.

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.4. OpenShift Container Storage deployed on Microsoft Azure

1.4.1. Replacing operational nodes on Azure installer-provisioned infrastructure

Use this procedure to replace an operational node on Azure installer-provisioned infrastructure (IPI).

Procedure

  1. Log in to OpenShift Web Console and click ComputeNodes.
  2. Identify the node that needs to be replaced. Take a note of its Machine Name.
  3. Mark the node as unschedulable using the following command:

    $ oc adm cordon <node_name>
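
    After cordoning, the node STATUS should include SchedulingDisabled, which you can confirm with:

    $ oc get node <node_name>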
  4. Drain the node using the following command:

    $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  5. Click Compute → Machines. Search for the required machine.
  6. Beside the required machine, click the Action menu (⋮) → Delete Machine.
  7. Click Delete to confirm the machine deletion. A new machine is automatically created.
  8. Wait for the new machine to start and transition into the Running state.

    Important

    This activity may take 5-10 minutes or more.

  9. Click Compute → Nodes, confirm that the new node is in Ready state.
  10. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.

1.4.2. Replacing failed nodes on Azure installer-provisioned infrastructure

Perform this procedure to replace a failed node which is not operational on Azure installer-provisioned infrastructure (IPI) for OpenShift Container Storage.

Procedure

  1. Log in to OpenShift Web Console and click ComputeNodes.
  2. Identify the faulty node and click on its Machine Name.
  3. Click Actions → Edit Annotations, and click Add More.
  4. Add machine.openshift.io/exclude-node-draining and click Save.
  5. Click Actions → Delete Machine, and click Delete.
  6. A new machine is automatically created. Wait for the new machine to start.

    Important

    This activity may take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  7. Click Compute → Nodes, confirm that the new node is in Ready state.
  8. Apply the OpenShift Container Storage label to the new node using any one of the following:

    From User interface
    1. For the new node, click Action Menu (⋮) → Edit Labels.
    2. Add cluster.ocs.openshift.io/openshift-storage and click Save.
    From Command line interface
    • Execute the following command to apply the OpenShift Container Storage label to the new node:

      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
  9. (Optional) If the failed Azure instance is not removed automatically, terminate the instance from the Azure console.
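
    If you prefer the Azure CLI for this cleanup, a rough equivalent is shown below; the resource group and VM name are placeholders that you must look up in Azure:

    $ az vm delete --resource-group <resource_group> --name <vm_name> --yes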

Verification steps

  1. Execute the following command and verify that the new node is present in the output:

    $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
  2. Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:

    • csi-cephfsplugin-*
    • csi-rbdplugin-*
  3. Verify that all other required OpenShift Container Storage pods are in Running state.
  4. Verify that new OSD pods are running on the replacement node.

    $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
  5. (Optional) If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.

    1. For each of the new nodes identified in the previous step, do the following:

      1. Create a debug pod and open a chroot environment for the selected host(s).

        $ oc debug node/<node_name>
        $ chroot /host
      2. Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s):

        $ lsblk
  6. If verification steps fail, contact Red Hat Support.