Chapter 5. Managing namespace buckets

Namespace buckets let you connect data repositories on different providers, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider.

Note

A namespace bucket can only be used if its write target is available and functional.

5.1. Amazon S3 API endpoints for objects in namespace buckets

You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API.

Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations:

See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them.
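For example, once a namespace bucket exists and you have its S3 credentials, ordinary S3 tooling can operate on it. The following is a sketch using the AWS CLI; the endpoint URL, bucket name, and credentials are illustrative placeholders that you must replace with your own values.

```shell
# Sketch of basic S3 operations against a namespace bucket through the MCG S3
# endpoint. All values below are placeholders, not real endpoints or keys.
S3_ENDPOINT=https://s3-openshift-storage.apps.example.com   # hypothetical MCG S3 route
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>

# List the objects aggregated from the read targets of the namespace bucket.
aws --endpoint-url "$S3_ENDPOINT" s3api list-objects-v2 --bucket <namespace-bucket-name>

# Upload an object; it is written to the namespace bucket's write target.
aws --endpoint-url "$S3_ENDPOINT" s3 cp ./report.csv "s3://<namespace-bucket-name>/report.csv"
```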

5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML

For more information about namespace buckets, see Managing namespace buckets.

Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket:

5.2.1. Adding an AWS S3 namespace bucket using YAML

Prerequisites

Procedure

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <namespacestore-secret-name>
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
      AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
    1. You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.
    2. Replace <namespacestore-secret-name> with a unique name.
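The Base64 values for the data fields can be generated with the standard base64 utility. The access key shown here is an illustrative placeholder, not a real credential:

```shell
# Encode a credential for the Secret's data fields.
# echo -n avoids including a trailing newline in the encoded value.
echo -n 'AKIAIOSFODNN7EXAMPLE' | base64

# Decoding the result recovers the original value, a quick sanity check:
echo -n 'AKIAIOSFODNN7EXAMPLE' | base64 | base64 -d
```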
  2. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:

    apiVersion: noobaa.io/v1alpha1
    kind: NamespaceStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: <resource-name>
      namespace: openshift-storage
    spec:
      awsS3:
        secret:
          name: <namespacestore-secret-name>
          namespace: <namespace-secret>
        targetBucket: <target-bucket>
      type: aws-s3
    1. Replace <resource-name> with the name you want to give to the resource.
    2. Replace <namespacestore-secret-name> with the secret created in step 1.
    3. Replace <namespace-secret> with the namespace where the secret can be found.
    4. Replace <target-bucket> with the target bucket you created for the NamespaceStore.
  3. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • A namespace policy of type single requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Single
          single:
            resource: <resource>
      • Replace <my-bucket-class> with a unique namespace bucket class name.
      • Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket.
    • A namespace policy of type multi requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Multi
          multi:
            writeResource: <write-resource>
            readResources:
            - <read-resources>
            - <read-resources>
      • Replace <my-bucket-class> with a unique bucket class name.
      • Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket.
      • Replace <read-resources> with a list of the names of the namespace-stores that defines the read targets of the namespace bucket.
  4. Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 3.

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <resource-name>
      namespace: openshift-storage
    spec:
      generateBucketName: <my-bucket>
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: <my-bucket-class>
    1. Replace <resource-name> with the name you want to give to the resource.
    2. Replace <my-bucket> with the name you want to give to the bucket.
    3. Replace <my-bucket-class> with the bucket class created in the previous step.

Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.
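As a hedged sketch, the generated bucket coordinates and credentials can then be read back with oc. Here <resource-name> stands for the OBC name chosen above, and BUCKET_NAME, BUCKET_HOST, and the AWS_* keys are the fields the OBC provisioner populates:

```shell
# Read the bucket name and S3 host from the generated ConfigMap, and decode the
# credentials from the generated Secret. <resource-name> is the OBC name (placeholder).
oc get configmap <resource-name> -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}{"\n"}'
oc get configmap <resource-name> -n openshift-storage -o jsonpath='{.data.BUCKET_HOST}{"\n"}'
oc get secret <resource-name> -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret <resource-name> -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
```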

5.2.2. Adding an IBM COS namespace bucket using YAML

Prerequisites

Procedure

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <namespacestore-secret-name>
    type: Opaque
    data:
      IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
      IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
    1. You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.
    2. Replace <namespacestore-secret-name> with a unique name.
  2. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:

    apiVersion: noobaa.io/v1alpha1
    kind: NamespaceStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: bs
      namespace: openshift-storage
    spec:
      s3Compatible:
        endpoint: <IBM COS ENDPOINT>
        secret:
          name: <namespacestore-secret-name>
          namespace: <namespace-secret>
        signatureVersion: v2
        targetBucket: <target-bucket>
      type: ibm-cos
    1. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint.
    2. Replace <namespacestore-secret-name> with the secret created in step 1.
    3. Replace <namespace-secret> with the namespace where the secret can be found.
    4. Replace <target-bucket> with the target bucket you created for the NamespaceStore.
  3. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • A namespace policy of type single requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Single
          single:
            resource: <resource>
      • Replace <my-bucket-class> with a unique namespace bucket class name.
      • Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket.
    • A namespace policy of type multi requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Multi
          multi:
            writeResource: <write-resource>
            readResources:
            - <read-resources>
            - <read-resources>
      • Replace <my-bucket-class> with a unique bucket class name.
      • Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket.
      • Replace <read-resources> with a list of the names of namespace-stores that defines the read targets of the namespace bucket.
  4. Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 3.

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <resource-name>
      namespace: openshift-storage
    spec:
      generateBucketName: <my-bucket>
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: <my-bucket-class>
    1. Replace <resource-name> with the name you want to give to the resource.
    2. Replace <my-bucket> with the name you want to give to the bucket.
    3. Replace <my-bucket-class> with the bucket class created in the previous step.

Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.

5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI

Prerequisites

  • A running OpenShift Data Foundation Platform.
  • Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
  • Download the MCG command-line interface:

    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
    # yum install mcg
    Note

    Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, for IBM Z infrastructure use the following command:

    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

    Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.

    Note

    Choose the correct Product Variant according to your architecture.

Procedure

  1. Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command:

    noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
    1. Replace <namespacestore> with the name of the NamespaceStore.
    2. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose.
    3. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
  2. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • Run the following command to create a namespace bucket class with a namespace policy of type single:

      noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage
      • Replace <my-bucket-class> with a unique bucket class name.
      • Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket.
    • Run the following command to create a namespace bucket class with a namespace policy of type multi:

      noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage
      • Replace <my-bucket-class> with a unique bucket class name.
      • Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket.
      • Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket.
  3. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2.

    noobaa obc create <bucket-name> -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>
    1. Replace <bucket-name> with a bucket name of your choice.
    2. Replace <custom-bucket-class> with the name of the bucket class created in step 2.

Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.
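One way to confirm the resources created in this procedure, as a sketch: each of them is backed by a custom resource, so oc get reports its phase (substitute the names you chose; obc is the short name for objectbucketclaim):

```shell
# Check that the namespace store, bucket class, and claim reached a healthy phase.
# The bracketed names are the placeholders used in the steps above.
oc get namespacestore <namespacestore> -n openshift-storage
oc get bucketclass <my-bucket-class> -n openshift-storage
oc get obc -n openshift-storage
```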

5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI

Prerequisites

  • A running OpenShift Data Foundation Platform.
  • Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
  • Download the MCG command-line interface:

    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
    # yum install mcg
    Note

    Specify the appropriate architecture for enabling the repositories using subscription manager.

    • For IBM Power, use the following command:
    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms
    • For IBM Z infrastructure, use the following command:
    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

    Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.

    Note

    Choose the correct Product Variant according to your architecture.

Procedure

  1. Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command:

    noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
    1. Replace <namespacestore> with the name of the NamespaceStore.
    2. Replace <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
    3. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
  2. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • Run the following command to create a namespace bucket class with a namespace policy of type single:

      noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage
      • Replace <my-bucket-class> with a unique bucket class name.
      • Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket.
    • Run the following command to create a namespace bucket class with a namespace policy of type multi:

      noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage
      • Replace <my-bucket-class> with a unique bucket class name.
      • Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket.
      • Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket.
  3. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2.

    noobaa obc create <bucket-name> -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>
    1. Replace <bucket-name> with a bucket name of your choice.
    2. Replace <custom-bucket-class> with the name of the bucket class created in step 2.

Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.

5.3. Adding a namespace bucket using the OpenShift Container Platform user interface

Starting with OpenShift Data Foundation 4.8, namespace buckets can be added using the OpenShift Container Platform user interface. For more information about namespace buckets, see Managing namespace buckets.

Prerequisites

  • Openshift Container Platform with OpenShift Data Foundation operator installed.
  • Access to the Multicloud Object Gateway (MCG).

Procedure

  1. Log into the OpenShift Web Console.
  2. Click Storage → Data Foundation.
  3. Click the Namespace Store tab to create namespacestore resources to be used in the namespace bucket.

    1. Click Create namespace store.
    2. Enter a namespacestore name.
    3. Choose a provider.
    4. Choose a region.
    5. Either select an existing secret, or click Switch to credentials to create a secret by entering an access key and secret access key.
    6. Choose a target bucket.
    7. Click Create.
    8. Verify the namespacestore is in the Ready state.
    9. Repeat these steps until you have created the desired number of resources.
  4. Click the Bucket Class tab → Create a new Bucket Class.

    1. Select the Namespace radio button.
    2. Enter a Bucket Class name.
    3. Add a description (optional).
    4. Click Next.
  5. Choose a namespace policy type for your namespace bucket, and then click Next.
  6. Select the target resource(s).

    • If your namespace policy type is Single, you need to choose a read resource.
    • If your namespace policy type is Multi, you need to choose read resources and a write resource.
    • If your namespace policy type is Cache, you need to choose a Hub namespace store that defines the read and write target of the namespace bucket.
  7. Click Next.
  8. Review your new bucket class, and then click Create Bucketclass.
  9. On the BucketClass page, verify that your newly created resource is in the Created phase.
  10. In the OpenShift Web Console, click Storage → Data Foundation.
  11. In the Status card, click Storage System and click the storage system link from the pop up that appears.
  12. In the Object tab, click Multicloud Object Gateway → Buckets → Namespace Buckets tab.
  13. Click Create Namespace Bucket.

    1. On the Choose Name tab, specify a Name for the namespace bucket and click Next.
    2. On the Set Placement tab:

      1. Under Read Policy, select the checkbox for each namespace resource created in step 3 that the namespace bucket should read data from.
      2. If the namespace policy type you are using is Multi, then under Write Policy, specify which namespace resource the namespace bucket should write data to.
      3. Click Next.
    3. Click Create.

Verification

  • Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name.

5.4. Sharing legacy application data with cloud native application using S3 protocol

Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to:

  • Export the pre-existing file system datasets, that is, an RWX volume such as a Ceph File System (CephFS) volume, or create new file system datasets using the S3 protocol.
  • Access file system datasets from both file system and S3 protocol.
  • Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs).

5.4.1. Creating a NamespaceStore to use a file system

Prerequisites

  • Openshift Container Platform with OpenShift Data Foundation operator installed.
  • Access to the Multicloud Object Gateway (MCG).

Procedure

  1. Log into the OpenShift Web Console.
  2. Click Storage → Data Foundation.
  3. Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket.
  4. Click Create namespacestore.
  5. Enter a name for the NamespaceStore.
  6. Choose Filesystem as the provider.
  7. Choose the Persistent volume claim.
  8. Enter a folder name.

    If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created.

  9. Click Create.
  10. Verify the NamespaceStore is in the Ready state.

5.4.2. Creating accounts with NamespaceStore filesystem configuration

You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML.

Note

You cannot remove a NamespaceStore filesystem configuration from an account.

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface:

    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
    # yum install mcg

Procedure

  • Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface.

    $ noobaa account create <noobaa-account-name> [flags]

    For example:

    $ noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 --default_resource fs_namespacestore

    allow_bucket_create

    Indicates whether the account is allowed to create new buckets. Supported values are true or false. Default value is true.

    allowed_buckets

    A comma-separated list of bucket names to which the user has access and management rights.

    default_resource

    The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC).

    full_permission

    Indicates whether the account should be allowed full permission or not. Supported values are true or false. Default value is false.

    new_buckets_path

    The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes.

    nsfs_account_config

    A mandatory field that indicates if the account is used for NamespaceStore filesystem.

    nsfs_only

    Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false. Default value is false. If set to true, it prevents you from accessing other types of buckets.

    uid

    The user ID of the filesystem to which the MCG account is mapped. It is used to access and manage data on the filesystem.

    gid

    The group ID of the filesystem to which the MCG account is mapped. It is used to access and manage data on the filesystem.

    The MCG system sends a response with the account configuration and its S3 credentials:

    # NooBaaAccount spec:
    allow_bucket_creation: true
    Allowed_buckets:
      full_permission: true
      permission_list: []
    default_resource: noobaa-default-namespace-store
    Nsfs_account_config:
      gid: 10001
      new_buckets_path: /
      nsfs_only: true
      uid: 10001
    INFO[0006] ✅ Exists: Secret "noobaa-account-testaccount"
    Connection info:
      AWS_ACCESS_KEY_ID      : <aws-access-key-id>
      AWS_SECRET_ACCESS_KEY  : <aws-secret-access-key>

    You can list all the custom resource definition (CRD) based accounts by using the following command:

    $ noobaa account list
    NAME          ALLOWED_BUCKETS   DEFAULT_RESOURCE               PHASE   AGE
    testaccount   [*]               noobaa-default-backing-store   Ready   1m17s

    If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name:

    $ oc get noobaaaccount/testaccount -o yaml
    spec:
      allow_bucket_creation: true
      allowed_buckets:
        full_permission: true
        permission_list: []
      default_resource: noobaa-default-namespace-store
      nsfs_account_config:
        gid: 10001
        new_buckets_path: /
        nsfs_only: true
        uid: 10001
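The conversion of an existing account mentioned at the beginning of this section can also be done non-interactively. The following is a hedged sketch that merges the same nsfs_account_config stanza shown above into an account's custom resource; the account name and UID/GID values are illustrative examples:

```shell
# Sketch: add an NSFS configuration to an existing NooBaa account CRD by
# merge-patching the spec instead of editing the YAML by hand (values are examples).
oc patch noobaaaccount testaccount -n openshift-storage --type merge \
  -p '{"spec":{"nsfs_account_config":{"uid":10001,"gid":10001,"new_buckets_path":"/","nsfs_only":true}}}'
```

Note that, as stated above, the NSFS configuration cannot be removed again once added.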

5.4.3. Accessing legacy application data from the openshift-storage namespace

When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses.

In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses.

Procedure

  1. Display the application namespace with scc:

    $ oc get ns <application_namespace> -o yaml | grep scc
    <application_namespace>

    Specify the name of the application namespace.

    Example 5.1. Example

    $ oc get ns testnamespace -o yaml | grep scc

    Example 5.2. Example output

    openshift.io/sa.scc.mcs: s0:c26,c5
    openshift.io/sa.scc.supplemental-groups: 1000660000/10000
    openshift.io/sa.scc.uid-range: 1000660000/10000
  2. Navigate into the application namespace:

    $ oc project <application_namespace>

    Example 5.3. Example

    $ oc project testnamespace
  3. Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature:

    $ oc get pvc

    Example 5.4. Example output

    NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
    cephfs-write-workload-generator-no-cache-pv-claim  Bound    pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a   10Gi       RWX            ocs-storagecluster-cephfs   12s
    $ oc get pod

    Example 5.5. Example output

    NAME                                                READY   STATUS              RESTARTS   AGE
    cephfs-write-workload-generator-no-cache-1-cv892    1/1     Running             0          11s
  4. Check the mount point of the Persistent Volume (PV) inside your pod.

    1. Get the volume name of the PV from the pod:

      $ oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'
      <pod_name>

      Specify the name of the pod.

      Example 5.6. Example

      $ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}'

      Example 5.7. Example output

      {"name":"app-persistent-storage","persistentVolumeClaim":{"claimName":"cephfs-write-workload-generator-no-cache-pv-claim"}}

      In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim.

    2. List all the mounts in the pod, and check for the mount point of the volume that you identified in the previous step:

      $ oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'

      Example 5.8. Example

      $ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}'

      Example 5.9. Example output

      [{"mountPath":"/mnt/pv","name":"app-persistent-storage"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-8tnc5","readOnly":true}]
  5. Confirm the mount point of the RWX PV in your pod:

    $ oc exec -it <pod_name> -- df <mount_path>
    <mount_path>

    Specify the path to the mount point that you identified in the previous step.

    Example 5.10. Example

    $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv

    Example 5.11. Example output

    Filesystem                                                                                  1K-blocks  Used  Available  Use%  Mounted on
    172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
                                                                                                10485760   0     10485760   0%    /mnt/pv
  6. Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses:

    $ oc exec -it <pod_name> -- ls -latrZ <mount_path>

    Example 5.12. Example

    $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/

    Example 5.13. Example output

    total 567
    drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5      2 May 25 06:35 .
    -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
    drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5     30 May 25 06:35 ..
  7. Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace:

    $ oc get pv | grep <pv_name>
    <pv_name>

    Specify the name of the PV.

    Example 5.14. Example

    $ oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a

    Example 5.15. Example output

    pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a   10Gi       RWX            Delete           Bound    testnamespace/cephfs-write-workload-generator-no-cache-pv-claim   ocs-storagecluster-cephfs              47s
  8. Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC.

    1. Find the values of the subvolumePath and volumeHandle from the volumeAttributes. You can get these values from the YAML description of the legacy application PV:

      $ oc get pv <pv_name> -o yaml

      Example 5.16. Example

      $ oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml

      Example 5.17. Example output

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
        creationTimestamp: "2022-05-25T06:27:49Z"
        finalizers:
        - kubernetes.io/pv-protection
        name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
        resourceVersion: "177458"
        uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500
      spec:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: cephfs-write-workload-generator-no-cache-pv-claim
          namespace: testnamespace
          resourceVersion: "177453"
          uid: aa58fb91-c3d2-475b-bbee-68452a613e1a
        csi:
          controllerExpandSecretRef:
            name: rook-csi-cephfs-provisioner
            namespace: openshift-storage
          driver: openshift-storage.cephfs.csi.ceph.com
          nodeStageSecretRef:
            name: rook-csi-cephfs-node
            namespace: openshift-storage
          volumeAttributes:
            clusterID: openshift-storage
            fsName: ocs-storagecluster-cephfilesystem
            storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com
            subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213
            subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
          volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213
        persistentVolumeReclaimPolicy: Delete
        storageClassName: ocs-storagecluster-cephfs
        volumeMode: Filesystem
      status:
        phase: Bound
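      If you prefer not to read through the full YAML, the two values can also be pulled out of a saved copy with a few lines of scripting. The following is a minimal sketch, assuming the PV YAML has been saved locally with `oc get pv <pv_name> -o yaml > pv.yaml`; the extract_field helper is illustrative and not part of any product tooling:

```python
# Illustrative helper (not part of oc or NooBaa): return the value of the
# first "key: value" line found in a flat scan of the YAML text. This is
# enough for unique keys such as subvolumePath and volumeHandle.
def extract_field(yaml_text, key):
    for line in yaml_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip()
    return None
```

      For example, extract_field(open("pv.yaml").read(), "subvolumePath") returns the /volumes/csi/... path shown in the example output above.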
    2. Use the subvolumePath and volumeHandle values that you identified in the previous step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV:

      Example 5.18. Example YAML file

      $ cat << EOF >> pv-openshift-storage.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: cephfs-pv-legacy-openshift-storage
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi     1
        csi:
          driver: openshift-storage.cephfs.csi.ceph.com
          nodeStageSecretRef:
            name: rook-csi-cephfs-node
            namespace: openshift-storage
          volumeAttributes:
          # Volume Attributes can be copied from the Source testnamespace PV
            "clusterID": "openshift-storage"
            "fsName": "ocs-storagecluster-cephfilesystem"
            "staticVolume": "true"
          # rootPath is the subvolumePath you copied from the source testnamespace PV
            "rootPath": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
          volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone   2
        persistentVolumeReclaimPolicy: Retain
        volumeMode: Filesystem
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: cephfs-pvc-legacy
        namespace: openshift-storage
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi     3
        volumeMode: Filesystem
        # volumeName should be same as PV name
        volumeName: cephfs-pv-legacy-openshift-storage
      EOF
      1
      The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV.
      2
      The volume handle of the target PV that you create in the openshift-storage namespace must be different from that of the original application PV. For example, add -clone at the end of the volume handle.
      3
      The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC.
    3. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the previous step:

      $ oc create -f <YAML_file>
      <YAML_file>

      Specify the name of the YAML file.

      Example 5.19. Example

      $ oc create -f pv-openshift-storage.yaml

      Example 5.20. Example output

      persistentvolume/cephfs-pv-legacy-openshift-storage created
      persistentvolumeclaim/cephfs-pvc-legacy created
    4. Ensure that the PVC is available in the openshift-storage namespace:

      $ oc get pvc -n openshift-storage

      Example 5.21. Example output

      NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
      cephfs-pvc-legacy                     Bound    cephfs-pv-legacy-openshift-storage         10Gi       RWX                                          14s
    5. Navigate into the openshift-storage project:

      $ oc project openshift-storage

      Example 5.22. Example output

      Now using project "openshift-storage" on server "https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443".
    6. Create the NSFS namespacestore:

      $ noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name='<cephfs_pvc_name>' --fs-backend='CEPH_FS'
      <nsfs_namespacestore>
      Specify the name of the NSFS namespacestore.
      <cephfs_pvc_name>

      Specify the name of the CephFS PVC in the openshift-storage namespace.

      Example 5.23. Example

      $ noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'
    7. Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore mount point, for example, /nsfs/legacy-namespace:

      $ oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/<nsfs_namespacestore>
      <noobaa_endpoint_pod_name>

      Specify the name of the noobaa-endpoint pod.

      Example 5.24. Example

      $ oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace

      Example 5.25. Example output

      Filesystem                                                                                                                                                Size  Used Avail Use% Mounted on
      172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c   10G     0   10G   0% /nsfs/legacy-namespace
    8. Create an MCG user account:

      $ noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='<nsfs_namespacestore>'
      <user_account>
      Specify the name of the MCG user account.
      <gid_number>
      Specify the GID number.
      <uid_number>

      Specify the UID number.

      Example 5.26. Example

      Important

      Use the same UID and GID as those of the legacy application. You can find them in the previous output.

      $ noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'
    9. Create an MCG bucket.

      1. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod:

        $ oc exec -it <pod_name> -- mkdir <mount_path>/nsfs

        Example 5.27. Example

        $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs
      2. Create the MCG bucket using the nsfs/ path:

        $ noobaa api bucket_api create_bucket '{
          "name": "<bucket_name>",
          "namespace":{
            "write_resource": { "resource": "<nsfs_namespacestore>", "path": "nsfs/" },
            "read_resources": [ { "resource": "<nsfs_namespacestore>", "path": "nsfs/" }]
          }
        }'

        Example 5.28. Example

        $ noobaa api bucket_api create_bucket '{
          "name": "legacy-bucket",
          "namespace":{
            "write_resource": { "resource": "legacy-namespace", "path": "nsfs/" },
            "read_resources": [ { "resource": "legacy-namespace", "path": "nsfs/" }]
          }
        }'
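        The request body passed to noobaa api bucket_api create_bucket is plain JSON, so it can also be generated programmatically when you script bucket creation. A small sketch follows; the bucket_spec helper is our own illustrative name, not part of the NooBaa CLI or API:

```python
import json

# Illustrative helper: build the JSON body for
# `noobaa api bucket_api create_bucket '<json>'`.
def bucket_spec(bucket_name, namespacestore, path="nsfs/"):
    return json.dumps({
        "name": bucket_name,
        "namespace": {
            "write_resource": {"resource": namespacestore, "path": path},
            "read_resources": [{"resource": namespacestore, "path": path}],
        },
    })
```

        For example, bucket_spec("legacy-bucket", "legacy-namespace") produces the same request body as the example above.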
    10. Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces:

      $ oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/<nsfs_namespacestore>

      Example 5.29. Example

      $ oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace

      Example 5.30. Example output

      total 567
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c0,c26      2 May 25 06:35 .
      -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c0,c26     30 May 25 06:35 ..
      $ oc exec -it <pod_name> -- ls -latrZ <mount_path>

      Example 5.31. Example

      $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/

      Example 5.32. Example output

      total 567
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5      2 May 25 06:35 .
      -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5     30 May 25 06:35 ..

      In these examples, you can see that the SELinux labels are not the same, which results in permission denied or access issues.
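      The comparison above concerns the Multi Category Security (MCS) level, that is, the part after the SELinux type (container_file_t). The category set following the sensitivity (s0) is unordered, so s0:c0,c26 and s0:c26,c0 denote the same level, while s0:c26,c5 is a different one. A small sketch of that comparison (the mcs_level helper is illustrative, not a real SELinux API):

```python
# Illustrative helper: parse an SELinux MCS level such as "s0:c26,c0" into
# (sensitivity, set of categories). Categories are unordered, so comparing
# the parsed tuples ignores the order in which categories are written.
def mcs_level(label):
    sensitivity, _, cats = label.partition(":")
    return sensitivity, frozenset(cats.split(",")) if cats else frozenset()

# s0:c0,c26 and s0:c26,c0 are the same level; s0:c26,c5 is not.
assert mcs_level("s0:c0,c26") == mcs_level("s0:c26,c0")
assert mcs_level("s0:c26,c5") != mcs_level("s0:c0,c26")
```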

  9. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files.

    You can do this in one of two ways: change the default SELinux label on the legacy application project to match the one in the openshift-storage project (Section 5.4.3.1), or modify the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC (Section 5.4.3.2).

  10. Delete the NSFS namespacestore:

    1. Delete the MCG bucket:

      $ noobaa bucket delete <bucket_name>

      Example 5.33. Example

      $ noobaa bucket delete legacy-bucket
    2. Delete the MCG user account:

      $ noobaa account delete <user_account>

      Example 5.34. Example

      $ noobaa account delete leguser
    3. Delete the NSFS namespacestore:

      $ noobaa namespacestore delete <nsfs_namespacestore>

      Example 5.35. Example

      $ noobaa namespacestore delete legacy-namespace
  11. Delete the PV and PVC:

    Important

    Before you delete the PV and PVC, ensure that the PV reclaim policy is set to Retain.

    $ oc delete pv <cephfs_pv_name>
    $ oc delete pvc <cephfs_pvc_name>
    <cephfs_pv_name>
    Specify the CephFS PV name of the legacy application.
    <cephfs_pvc_name>

    Specify the CephFS PVC name of the legacy application.

    Example 5.36. Example

    $ oc delete pv cephfs-pv-legacy-openshift-storage
    $ oc delete pvc cephfs-pvc-legacy

5.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project

  1. Display the sa.scc.mcs value of the openshift-storage namespace:

    $ oc get ns openshift-storage -o yaml | grep sa.scc.mcs

    Example 5.37. Example output

    openshift.io/sa.scc.mcs: s0:c26,c0
  2. Edit the legacy application namespace, and set its sa.scc.mcs value to the sa.scc.mcs value of the openshift-storage namespace:

    $ oc edit ns <application_namespace>

    Example 5.38. Example

    $ oc edit ns testnamespace
    $ oc get ns <application_namespace> -o yaml | grep sa.scc.mcs

    Example 5.39. Example

    $ oc get ns testnamespace -o yaml | grep sa.scc.mcs

    Example 5.40. Example output

    openshift.io/sa.scc.mcs: s0:c26,c0
  3. Restart the legacy application pod. A relabel of all the files takes place, and the SELinux labels now match those of the openshift-storage deployment.

5.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC

  1. Create a new scc with the MustRunAs and seLinuxOptions options, using the Multi Category Security (MCS) level that the openshift-storage project uses:

    Example 5.41. Example YAML file

    $ cat << EOF >> scc.yaml
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: true
    allowPrivilegedContainer: false
    allowedCapabilities: null
    apiVersion: security.openshift.io/v1
    defaultAddCapabilities: null
    fsGroup:
      type: MustRunAs
    groups:
    - system:authenticated
    kind: SecurityContextConstraints
    metadata:
      annotations:
      name: restricted-pvselinux
    priority: null
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - KILL
    - MKNOD
    - SETUID
    - SETGID
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      seLinuxOptions:
        level: s0:c26,c0
      type: MustRunAs
    supplementalGroups:
      type: RunAsAny
    users: []
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - projected
    - secret
    EOF
    $ oc create -f scc.yaml
  2. Create a service account for the deployment and add it to the newly created scc.

    1. Create a service account:

      $ oc create serviceaccount <service_account_name>
      <service_account_name>

      Specify the name of the service account.

      Example 5.42. Example

      $ oc create serviceaccount testnamespacesa
    2. Add the service account to the newly created scc:

      $ oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>

      Example 5.43. Example

      $ oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa
  3. Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment:

    $ oc patch dc/<pod_name> --patch '{"spec":{"template":{"spec":{"serviceAccountName": "<service_account_name>"}}}}'

    Example 5.44. Example

    $ oc patch dc/cephfs-write-workload-generator-no-cache --patch '{"spec":{"template":{"spec":{"serviceAccountName": "testnamespacesa"}}}}'
  4. Edit the deployment to specify the security context with the SELinux label to use:

    $ oc edit dc <pod_name> -n <application_namespace>

    Add the following lines:

    spec:
     template:
        metadata:
          securityContext:
            seLinuxOptions:
              level: <security_context_value>
    <security_context_value>

    You can find this value in the sa.scc.mcs annotation of the openshift-storage namespace, for example, in the output of oc get ns openshift-storage -o yaml | grep sa.scc.mcs.

    Example 5.45. Example

    $ oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace
    spec:
     template:
        metadata:
          securityContext:
            seLinuxOptions:
              level: s0:c26,c0
  5. Verify that the security context with the SELinux label is specified correctly in the deployment configuration:

    $ oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext

    Example 5.46. Example

    $ oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext

    Example 5.47. Example output

          securityContext:
            seLinuxOptions:
              level: s0:c26,c0

    The legacy application restarts and begins using the same SELinux labels as the openshift-storage namespace.