Chapter 3. Deploy standalone Multicloud Object Gateway

Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps:

  • Installing Red Hat OpenShift Data Foundation Operator
  • Creating standalone Multicloud Object Gateway

3.1. Installing Red Hat OpenShift Data Foundation Operator

You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
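
If you prefer the command-line interface, you can express the same installation as Operator Lifecycle Manager (OLM) resources instead of using the web console. The following is a minimal sketch rather than an official manifest; it assumes the odf-operator package in the stable-4.10 channel from the redhat-operators catalog source, and odf-install.yaml is a hypothetical file name:

    # odf-install.yaml (hypothetical file name)
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-storage          # namespace recommended by the operator
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage              # install the operator for this namespace only
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: odf-operator
      namespace: openshift-storage
    spec:
      channel: stable-4.10             # update channel (assumed)
      name: odf-operator               # package name (assumed)
      source: redhat-operators         # catalog source (assumed)
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic   # or Manual; see the approval strategy below

Apply the manifest and OLM installs the operator:

    $ oc apply -f odf-install.yaml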

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
  • You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
  • For additional resource requirements, see the Planning your deployment guide.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):

    $ oc annotate namespace openshift-storage openshift.io/node-selector=
  • Taint a node as infra to ensure that only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide.
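
    For example, the labeling and tainting can be done from the command line. This is a general sketch; <node_name> is a placeholder, and the exact label and taint keys to use are described in the referenced chapter:

    $ oc label node <node_name> cluster.ocs.openshift.io/openshift-storage=""
    $ oc adm taint node <node_name> node.ocs.openshift.io/storage="true":NoSchedule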

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.10.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
    4. Select Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. A command-line example for approving update requests follows this procedure.

    5. Ensure that the Enable option is selected for the Console plugin.
    6. Click Install.
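
If you selected the Manual approval strategy, you can also review and approve pending update requests from the command line by using standard OLM resources. This is a general sketch; <installplan_name> is a placeholder for the InstallPlan name reported by the first command:

    $ oc get installplan -n openshift-storage
    $ oc patch installplan <installplan_name> -n openshift-storage --type merge -p '{"spec":{"approved":true}}'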

Verification steps

  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up to apply the console changes.
  • In the Web Console:

    • Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
    • Navigate to Storage and verify that the Data Foundation dashboard is available.
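
  • Optionally, confirm the installation from the command line; for example, the operator ClusterServiceVersions (CSVs) in the openshift-storage namespace should report the Succeeded phase:

    $ oc get csv -n openshift-storage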

3.2. Creating a standalone Multicloud Object Gateway

You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation.

Prerequisites

  • Ensure that the OpenShift Data Foundation Operator is installed.

Procedure

  1. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.

    Ensure that the Project selected is openshift-storage.

  2. Click OpenShift Data Foundation operator and then click Create StorageSystem.
  3. In the Backing storage page, select the following:

    1. Select Multicloud Object Gateway for Deployment type.
    2. Select the Use an existing StorageClass option.
    3. Click Next.
  4. Optional: In the Security page, select Connect to an external key management service.

    1. Key Management Service Provider is set to Vault by default.
    2. Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number, and Token.
    3. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

      1. Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
      2. Optional: Enter TLS Server Name and Vault Enterprise Namespace.
      3. Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
      4. Click Save.
    4. Click Next.
  5. In the Review and create page, review the configuration details:

    To modify any configuration settings, click Back.

  6. Click Create StorageSystem.
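
    Optionally, you can confirm from the command line that the StorageSystem resource was created in the openshift-storage namespace. This is a general check; the resource name in your cluster can differ:

    $ oc get storagesystem -n openshift-storage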

Verification steps

Verifying that the OpenShift Data Foundation cluster is healthy
  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.

    1. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
    2. In the Details card, verify that the MCG information is displayed.
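
    You can also check the Multicloud Object Gateway health from the command line. This is a general sketch; the second command assumes that the MCG command-line tool (noobaa) is installed on your workstation:

    $ oc get noobaa -n openshift-storage
    $ noobaa status -n openshift-storage
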
Verifying the state of the pods
  1. Click Workloads → Pods from the OpenShift Web Console.
  2. Select openshift-storage from the Project drop-down list and verify that the following pods are in the Running state.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

    Component: OpenShift Data Foundation Operator
    Corresponding pods:

    • ocs-operator-* (1 pod on any worker node)
    • ocs-metrics-exporter-* (1 pod on any worker node)
    • odf-operator-controller-manager-* (1 pod on any worker node)
    • odf-console-* (1 pod on any worker node)
    • csi-addons-controller-manager-* (1 pod on any worker node)

    Component: Rook-ceph Operator
    Corresponding pods:

    • rook-ceph-operator-* (1 pod on any worker node)

    Component: Multicloud Object Gateway
    Corresponding pods:

    • noobaa-operator-* (1 pod on any worker node)
    • noobaa-core-* (1 pod on any worker node)
    • noobaa-db-pg-* (1 pod on any worker node)
    • noobaa-endpoint-* (1 pod on any worker node)
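
    The same check can be performed from the command line; all of the pods listed above should be in the Running state:

    $ oc get pods -n openshift-storage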