Chapter 2. Deploying OpenShift Container Storage using local storage devices

Use this section to deploy OpenShift Container Storage on bare metal, Amazon EC2 storage optimized I3, and VMware infrastructures where OpenShift Container Platform is already installed.

Important

Installing OpenShift Container Storage on Amazon EC2 storage optimized I3 instances using the local storage operator is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

Red Hat OpenShift Container Storage deployment assumes a new cluster, without any application or other workload running on the three worker nodes. Applications should run on additional worker nodes.

To deploy OpenShift Container Storage using local storage devices, complete the procedures in the following sections.

2.1. Requirements for installing OpenShift Container Storage using local storage devices

  • You must have at least three OpenShift Container Platform worker nodes in the cluster with locally attached storage devices on each of them.

    • Each of the three worker nodes must have at least one raw block device available to be used by OpenShift Container Storage.
    • For minimum starting node requirements, see Node Requirements section in Planning guide.
    • The devices to be used must be empty, that is, there must be no PVs, VGs, or LVs remaining on the disks (a quick check is sketched after this list).
  • You must have a minimum of three labeled nodes.

    • Each worker node that has local storage devices to be used by OpenShift Container Storage must have a specific label to deploy OpenShift Container Storage pods. To label the nodes, use the following command:

      $ oc label nodes <NodeName> cluster.ocs.openshift.io/openshift-storage=''
  • There should not be any storage providers managing locally mounted storage on the storage nodes that would conflict with the use of Local Storage Operator for Red Hat OpenShift Container Storage.
  • The Local Storage Operator version must match the Red Hat OpenShift Container Platform version in order for the Local Storage Operator to be fully supported with Red Hat OpenShift Container Storage. The Local Storage Operator is not upgraded when Red Hat OpenShift Container Platform is upgraded.
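
A quick way to confirm that a disk is empty is to inspect it from a debug pod on the node. The following is a minimal sketch; <NodeName> and <device> are placeholders, and the pvs command is only available if the LVM tools are present on the host:

$ oc debug node/<NodeName>
sh-4.2# chroot /host
sh-4.4# lsblk -f /dev/<device>    # the FSTYPE and MOUNTPOINT columns should be empty
sh-4.4# pvs /dev/<device>         # should report that no physical volume is found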

2.2. Installing Red Hat OpenShift Container Storage Operator using the Operator Hub

You can install Red Hat OpenShift Container Storage using the Red Hat OpenShift Container Platform Operator Hub on Amazon Web Services (AWS) and VMware vSphere platforms. For information about the hardware and software requirements, see Planning your deployment guide.

Prerequisites

  • Log in to the OpenShift Container Platform cluster.
  • You must have at least three worker nodes in the OpenShift Container Platform cluster.
  • You must create a namespace called openshift-storage as follows (a command-line equivalent is sketched after these steps):

    1. Click Administration → Namespaces in the left pane of the OpenShift Web Console.
    2. Click Create Namespace.
    3. In the Create Namespace dialog box, enter openshift-storage for Name and openshift.io/cluster-monitoring=true for Labels. This label is required for the dashboards to be available.
    4. Select the No restrictions option for Default Network Policy.
    5. Click Create.
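
    If you prefer the command-line interface, the same namespace and label can be created with the following commands, which are a minimal equivalent of the steps above:

    $ oc create namespace openshift-storage
    $ oc label namespace openshift-storage "openshift.io/cluster-monitoring=true"
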
Note

When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace:

$ oc annotate namespace openshift-storage openshift.io/node-selector=
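
One way to confirm that the annotation is set is to inspect the namespace and check that metadata.annotations contains an empty openshift.io/node-selector entry, for example:

$ oc get namespace openshift-storage -o yaml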

Procedure

  1. Click Operators → OperatorHub in the left pane of the OpenShift Web Console.

    Figure 2.1. List of operators in the Operator Hub

  2. Search for OpenShift Container Storage from the list of operators and click on it.
  3. On the OpenShift Container Storage operator page, click Install.
  4. On the Create Operator Subscription page:

    Figure 2.2. Create Operator Subscription page

    1. Select A specific namespace on the cluster for the Installation Mode option.

      • Select openshift-storage namespace from the drop down menu.
    2. Select stable-4.4 as the update channel.
    3. Select an Approval Strategy:

      • Automatic specifies that you want OpenShift Container Platform to upgrade OpenShift Container Storage automatically.
      • Manual specifies that you want to have control to upgrade OpenShift Container Storage manually.
  5. Click Subscribe.

    Figure 2.3. Installed operators


    The Installed Operators page is displayed with the status of the operator.

Verification steps

  • Verify that the OpenShift Container Storage Operator shows the Status as Succeeded.
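  • You can also check from the command line that the operator installed successfully. A minimal check, assuming the default openshift-storage namespace:

    $ oc get csv -n openshift-storage

    The PHASE column of the OpenShift Container Storage ClusterServiceVersion should show Succeeded.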

2.3. Installing Local Storage Operator

Use this procedure to install Local Storage Operator from the Operator Hub before creating OpenShift Container Storage clusters on local storage devices in Amazon EC2 I3, VMware, and bare metal infrastructures.

Prerequisites

  • Create a namespace called local-storage as follows:

    1. Click Administration → Namespaces in the left pane of the OpenShift Web Console.
    2. Click Create Namespace.
    3. In the Create Namespace dialog box, enter local-storage for Name.
    4. Select No restrictions option for Default Network Policy.
    5. Click Create.

Procedure

  1. Click Operators → OperatorHub in the left pane of the OpenShift Web Console.
  2. Search for Local Storage Operator from the list of operators and click on it.
  3. Click Install.

    Figure 2.4. Create Operator Subscription page

  4. Select A specific namespace on the cluster for the Installation Mode option.

    • Select local-storage namespace from the drop down menu.
  5. Select a desired value for the Update Channel option.
  6. Select the desired Approval Strategy.
  7. Click Subscribe.
  8. Verify that the Local Storage Operator shows the Status as Succeeded.

2.4. Finding available storage devices

Use this procedure to identify the device names on each of the three or more worker nodes that you have labeled with the OpenShift Container Storage label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for bare metal, Amazon EC2 I3, or VMware storage devices.

Procedure

  1. List and verify the name of the worker nodes with the OpenShift Container Storage label.

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

    Example output:

    NAME                                        STATUS   ROLES    AGE     VERSION
    ip-10-0-135-71.us-east-2.compute.internal    Ready    worker   6h45m   v1.16.2
    ip-10-0-145-125.us-east-2.compute.internal   Ready    worker   6h45m   v1.16.2
    ip-10-0-160-91.us-east-2.compute.internal    Ready    worker   6h45m   v1.16.2
  2. Log in to each worker node that is used for OpenShift Container Storage resources and find the unique by-id device name for each available raw block device.

    $ oc debug node/<Nodename>

    Example output:

    $ oc debug node/ip-10-0-135-71.us-east-2.compute.internal
    Starting pod/ip-10-0-135-71us-east-2computeinternal-debug ...
    To use host binaries, run `chroot /host`
    Pod IP: 10.0.135.71
    If you don't see a command prompt, try pressing enter.
    sh-4.2# chroot /host
    sh-4.4# lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    nvme1n1                      259:0    0   2.3T  0 disk
    nvme2n1                      259:1    0   2.3T  0 disk
    nvme0n1                      259:2    0   120G  0 disk
    |-nvme0n1p1                  259:3    0   384M  0 part /boot
    |-nvme0n1p2                  259:4    0   127M  0 part /boot/efi
    |-nvme0n1p3                  259:5    0     1M  0 part
    `-nvme0n1p4                  259:6    0 119.5G  0 part
      `-coreos-luks-root-nocrypt 253:0    0 119.5G  0 dm   /sysroot

    In this example, the local devices available are nvme1n1 and nvme2n1.

  3. Find the unique by-id device name, which is based on the hardware serial number, for each device.

    sh-4.4#  ls -l /dev/disk/by-id/ | grep Storage
    lrwxrwxrwx. 1 root root 13 Jun 26 07:29 nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1924C57D4F1FC5236 -> ../../nvme2n1
    lrwxrwxrwx. 1 root root 13 Jun 26 07:29 nvme-Amazon_EC2_NVMe_Instance_Storage_AWS22ABDB45A3BC9028F -> ../../nvme1n1

    In this example, the local devices available are nvme1n1 and nvme2n1, each with a size of 2.3 TiB.

    For each worker node that has the OpenShift Container Storage label (a minimum of three), you need to find the unique by-id device names. In this example, the by-id device names are:

    • nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1924C57D4F1FC5236
    • nvme-Amazon_EC2_NVMe_Instance_Storage_AWS22ABDB45A3BC9028F
Note

You must repeat this process to find the unique by-id device names on all the other nodes that have storage devices to be used by OpenShift Container Storage. See https://access.redhat.com/solutions/4928841 for more details. A scripted way to gather these names is sketched below.
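
The following loop is a minimal sketch, assuming oc debug can run the listing non-interactively on each labeled node and that the Amazon device naming shown above applies (adjust the grep pattern for other hardware):

$ for node in $(oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do
    echo "=== ${node} ==="
    oc debug node/${node} -- chroot /host ls -l /dev/disk/by-id/ | grep Storage
  done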

2.5. Creating OpenShift Container Storage cluster on Amazon EC2 storage optimized - i3en.2xlarge instance type

Use this procedure to create an OpenShift Container Storage cluster on Amazon EC2 (storage optimized - i3en.2xlarge instance type) infrastructure, which involves:

  1. Creating PVs by using the LocalVolume CR
  2. Creating a new StorageClass
Important

Installing OpenShift Container Storage on Amazon EC2 storage optimized - i3en.2xlarge instances using the local storage operator is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

Red Hat OpenShift Container Storage deployment assumes a new cluster, without any application or other workload running on the three worker nodes. Applications should run on additional worker nodes.

The Amazon EC2 storage optimized - i3en.2xlarge instance type includes two non-volatile memory express (NVMe) disks. The example in this procedure illustrates the use of both the disks that the instance type comes with.

Warning

It is not recommended to use the ephemeral storage of Amazon EC2 I3 for OpenShift Container Storage persistent data, because stopping all three nodes can cause data loss. It is recommended to use ephemeral storage only in scenarios such as the following:

  • Cloud burst scenarios, where data is copied from another location for a specific, time-limited data-crunching task
  • Development or testing environment

When you are using the ephemeral storage of Amazon EC2 I3, it is recommended to:

  • Use three availability zones to decrease the risk of losing all the data
  • Limit the number of users with the ec2:StopInstances permission to avoid accidental instance shutdown
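
One way to confirm that the labeled nodes span three availability zones is to print the zone label for each node. A minimal sketch, assuming the standard AWS zone label is set on your nodes:

$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -L failure-domain.beta.kubernetes.io/zone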

Prerequisites

  • Ensure that all the requirements in the Requirements for installing OpenShift Container Storage using local storage devices section are met.
  • Verify that your OpenShift Container Platform worker nodes are labeled for OpenShift Container Storage; this label is used as the nodeSelector.

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

    Example output:

    ip-10-0-135-71.us-east-2.compute.internal
    ip-10-0-145-125.us-east-2.compute.internal
    ip-10-0-160-91.us-east-2.compute.internal

Procedure

  1. Create local persistent volumes (PVs) on the storage nodes using LocalVolume custom resource (CR).

    Example of the LocalVolume CR local-storage-block.yaml using the OpenShift Container Storage label as the node selector and by-id device identifiers:

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-block
      namespace: local-storage
      labels:
        app: ocs-storagecluster
    spec:
      tolerations:
      - key: "node.ocs.openshift.io/storage"
        value: "true"
        effect: NoSchedule
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: In
                values:
                  - ''
      storageClassDevices:
        - storageClassName: localblock
          volumeMode: Block
          devicePaths:
            - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC   # <-- modify this line
            - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1F45C01D7E84FE3E9   # <-- modify this line
            - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS136BC945B4ECB9AE4   # <-- modify this line
            - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441464EP   # <-- modify this line
            - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1F45C01D7E84F43E7   # <-- modify this line
            - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS136BC945B4ECB9AE8   # <-- modify this line

    Each Amazon EC2 i3en.2xlarge instance has two disks, and this example uses both disks on each of the three labeled nodes, which is why six devicePaths entries are listed.

  2. Create the LocalVolume CR.

    $ oc create -f local-storage-block.yaml

    Example output:

    localvolume.local.storage.openshift.io/local-block created
  3. Check if the pods are created.

    $ oc -n local-storage get pods

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    local-block-local-diskmaker-59rmn         1/1     Running   0          15m
    local-block-local-diskmaker-6n7ct         1/1     Running   0          15m
    local-block-local-diskmaker-jwtsn         1/1     Running   0          15m
    local-block-local-provisioner-6ssxc       1/1     Running   0          15m
    local-block-local-provisioner-swwvx       1/1     Running   0          15m
    local-block-local-provisioner-zmv5j       1/1     Running   0          15m
    local-storage-operator-7848bbd595-686dg   1/1     Running   0          15m
  4. Check if the PVs are created.

    You must see a new PV for each of the local storage devices on the three worker nodes. Refer to the example in the Finding available storage devices section, which shows two available storage devices per worker node, each with a size of 2328Gi.

    $ oc get pv

    Example output:

    NAME               CAPACITY ACCESS MODES  RECLAIM POLICY STATUS       CLAIM     STORAGECLASS  REASON   AGE
    local-pv-1a46bc79  2328Gi   RWO           Delete         Available              localblock             14m
    local-pv-429d90ee  2328Gi   RWO           Delete         Available              localblock             14m
    local-pv-4d0a62e3  2328Gi   RWO           Delete         Available              localblock             14m
    local-pv-55c05d76  2328Gi   RWO           Delete         Available              localblock             14m
    local-pv-5c7b0990  2328Gi   RWO           Delete         Available              localblock             14m
    local-pv-a6b283b   2328Gi   RWO           Delete         Available              localblock             14m
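
    With two devices on each of the three labeled nodes, you should see six Available PVs in the localblock StorageClass, as in the output above. A quick way to count them, as a sketch:

    $ oc get pv | grep localblock | grep -c Available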
  5. Check whether a new StorageClass is created as a result of creating the LocalVolume CR. This StorageClass is used when creating the StorageCluster to provision PVCs.

    $ oc get sc | grep localblock

    Example output:

    NAME         PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    localblock   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  3h47m
  6. Create StorageCluster CR that uses the localblock StorageClass and the PVs that are created.

    Example of StorageCluster CR ocs-cluster-service.yaml using monDataDirHostPath and localblock StorageClass.

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      manageNodes: false
      resources:
        mds:
          limits:
            cpu: 3
            memory: 8Gi
          requests:
            cpu: 1
            memory: 8Gi
        noobaa-core:
          limits:
            cpu: 2
            memory: 8Gi
          requests:
            cpu: 1
            memory: 8Gi
        noobaa-db:
          limits:
            cpu: 2
            memory: 8Gi
          requests:
            cpu: 1
            memory: 8Gi
      monDataDirHostPath: /var/lib/rook
      storageDeviceSets:
        - count: 2
          dataPVCTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 2328Gi
              storageClassName: localblock
              volumeMode: Block
          name: ocs-deviceset
          placement: {}
          portable: false
          replica: 3
          resources: {}
    Important

    To ensure that the OSDs have a guaranteed size across the nodes, the storage size for storageDeviceSets must be specified as less than or equal to the size of the desired PVs created on the nodes.
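
    One way to confirm the capacity of the localblock PVs before setting the storage request is sketched below:

    $ oc get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,STORAGECLASS:.spec.storageClassName | grep localblock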

  7. Create the StorageCluster CR.

    $ oc create -f ocs-cluster-service.yaml

    Example output:

    storagecluster.ocs.openshift.io/ocs-storagecluster created
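
    After the CR is accepted, you can watch the cluster come up by checking the StorageCluster resource and the pods in the openshift-storage namespace. A minimal sketch:

    $ oc get storagecluster -n openshift-storage
    $ oc get pods -n openshift-storage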

2.6. Creating OpenShift Container Storage cluster on VMware

Use this procedure to create a storage cluster on VMware infrastructure.

VMware supports the following three types of local storage:

  • Virtual machine disk (VMDK)
  • Raw device mapping (RDM)
  • VMDirectPath I/O

Prerequisites

To identify storage devices on each node, refer to Finding available storage devices.

Procedure

  1. Create the LocalVolume CR for block PVs.

    Example of LocalVolume CR local-storage-block.yaml using OpenShift Container Storage label as node selector:

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-block
      namespace: local-storage
      labels:
        app: ocs-storagecluster
    spec:
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
            - key: cluster.ocs.openshift.io/openshift-storage
              operator: In
              values:
              - ""
      storageClassDevices:
        - storageClassName: localblock
          volumeMode: Block
          devicePaths:
            - /dev/disk/by-id/scsi-36000c2991c27c2e5ba7c47d1e4352de2   # <-- modify this line
            - /dev/disk/by-id/scsi-36000c29682ca9e347926406711f3dc4e   # <-- modify this line
            - /dev/disk/by-id/scsi-36000c296aaf03a9b1e4b01d086bc6348   # <-- modify this line
  2. Create LocalVolume CR for block PVs.

    $ oc create -f local-storage-block.yaml

    Example output:

    localvolume.local.storage.openshift.io/local-block created
  3. Check if the pods are created.

    $ oc -n local-storage get pods

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    local-block-local-diskmaker-5brzv         1/1     Running   0          31s
    local-block-local-diskmaker-8sxcs         1/1     Running   0          31s
    local-block-local-diskmaker-s7s9p         1/1     Running   0          31s
    local-block-local-provisioner-9cbw8       1/1     Running   0          31s
    local-block-local-provisioner-cpddv       1/1     Running   0          31s
    local-block-local-provisioner-f6h7h       1/1     Running   0          31s
    local-storage-operator-75b9776b75-vwdzh   1/1     Running   0          2m47s
  4. Check the new localblock StorageClass.

    $ oc get sc | grep localblock

    Example output:

    NAME         PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    localblock   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  3h47m
  5. Check that the PVs are created with the Available status.

    $ oc get pv

    Example output:

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-150fdc87   2Ti        RWO            Delete           Available           localblock              2m11s
    local-pv-183bfc0a   2Ti        RWO            Delete           Available           localblock              2m11s
    local-pv-b2f5cb25   2Ti        RWO            Delete           Available           localblock              2m21s

    In this example, three PVs are used for OSD storage.

  6. Create StorageCluster CR ocs-cluster-service-VMware.yaml that uses the monDataDirHostPath and localblock StorageClass.

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      manageNodes: false
      monDataDirHostPath: /var/lib/rook
      storageDeviceSets:
      - count: 1
        dataPVCTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Ti
            storageClassName: localblock
            volumeMode: Block
        name: ocs-deviceset
        placement: {}
        portable: false
        replica: 3
        resources: {}
    Important

    To ensure that the OSDs have a guaranteed size across the nodes, the storage size for storageDeviceSets must be specified as less than or equal to the size of the desired PVs created on the nodes.

  7. Create the StorageCluster CR.

    $ oc create -f ocs-cluster-service-VMware.yaml

    Example output:

    storagecluster.ocs.openshift.io/ocs-storagecluster created

2.7. Creating OpenShift Container Storage cluster on bare metal

Prerequisites

  • Ensure that all the requirements in the Requirements for installing OpenShift Container Storage using local storage devices section are met.
  • You must have three worker nodes with the same type and size of storage attached to each node (for example, a 2TB NVMe drive) to use local storage devices on bare metal.
  • Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Container Storage:

    $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

To identify storage devices on each node, refer to Finding available storage devices.

Procedure

  1. Create LocalVolume CR for block PVs.

    Example of the LocalVolume CR local-storage-block.yaml using the OpenShift Container Storage label as the node selector:

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-block
      namespace: local-storage
      labels:
        app: ocs-storagecluster
    spec:
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
            - key: cluster.ocs.openshift.io/openshift-storage
              operator: In
              values:
              - ""
      storageClassDevices:
        - storageClassName: localblock
          volumeMode: Block
          devicePaths:
            - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPY81260978128A   # <-- modify this line
            - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPY80440W5U128A   # <-- modify this line
            - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPYB85AABDE128A   # <-- modify this line
            - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPY0A60CB81128A   # <-- modify this line
            - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPY0093D45E128A   # <-- modify this line
            - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPYE46F6060128A   # <-- modify this line
  2. Create the LocalVolume CR for block PVs.

    $ oc create -f local-storage-block.yaml
  3. Check if the pods are created.

    $ oc -n local-storage get pods
  4. Check if the PVs are created.

    $ oc get pv

    Example output:

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-150fdc87   2Ti      RWO            Delete           Available           localblock              2m11s
    local-pv-183bfc0a   2Ti      RWO            Delete           Available           localblock              2m11s
    local-pv-b2f5cb25   2Ti      RWO            Delete           Available           localblock              2m21s
  5. Check for the new localblock StorageClass.

    $ oc get sc|egrep -e "localblock|NAME"

    Example output:

    NAME         PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    localblock   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  4d23h
  6. Create a StorageCluster CR that uses the localblock StorageClass and the PVs that are created.

    Example of StorageCluster CR cluster-service-metal.yaml using monDataDirHostPath and localblock StorageClass:

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      manageNodes: false
      monDataDirHostPath: /var/lib/rook
      storageDeviceSets:
      - count: 2
        dataPVCTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Ti
            storageClassName: localblock
            volumeMode: Block
        name: ocs-deviceset
        placement: {}
        portable: false
        replica: 3
        resources: {}
    Important

    To ensure that the OSDs have a guaranteed size across the nodes, the storage size for storageDeviceSets must be specified as less than or equal to the size of the desired PVs created on the nodes.

  7. Create the StorageCluster CR.

    $ oc create -f cluster-service-metal.yaml

    Example output:

    storagecluster.ocs.openshift.io/ocs-storagecluster created