Chapter 5. Provisioning an RHMAP 4.x MBaaS on OpenShift Container Platform

5.1. Overview

An OpenShift Container Platform cluster can serve as an MBaaS target and host your Cloud Apps and Cloud Services. This guide provides detailed steps to deploy the RHMAP 4.x MBaaS on an OpenShift Container Platform cluster.

5.2. Prerequisites

This guide assumes several prerequisites are met before the installation:

  • Ansible version 2.4 is installed on a management node that has SSH access to the OpenShift cluster. See Section 2.2, "Configure Ansible for installing RHMAP components" for more information.
  • All nodes in the cluster must be registered with the Red Hat Subscription Manager. See Chapter 2, Preparing Infrastructure for Installation for detailed steps.
  • The MBaaS requires outbound internet access to perform npm installations. Make sure that all relevant nodes have outbound internet access before installation.
  • An existing OpenShift Container Platform installation, version 3.7, 3.9, or 3.10.
  • The OpenShift Container Platform master and router must be accessible from the RHMAP Core.
  • A wildcard DNS entry must be configured for the OpenShift Container Platform router IP address.
  • A trusted wildcard certificate must be configured for the OpenShift Container Platform router. See Using Wildcard Certificates in OpenShift Container Platform documentation.
  • Image streams and images in the openshift namespace must be updated to the latest version. Refer to sections Updating the Default Image Streams and Templates and Importing the Latest Images in the OpenShift Container Platform Installation and Configuration guide.
  • You must have administrative access to the OpenShift cluster using the oc CLI tool (see the verification example at the end of this section), enabling you to:

    • Create a project, and any resource typically found in a project (for example, deployment configuration, service, route).
    • Edit a namespace definition.
    • Create a security context constraint.
    • Manage nodes, specifically labels.
  • The rhmap-installer runs a number of prerequisite checks which must pass before proceeding with the installation. See RHMAP Installer Pre-Requisite Checks for details.

For information on installation and management of an OpenShift Container Platform cluster and its users, see the official OpenShift documentation.
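As a quick sanity check of the administrative-access prerequisite, you can log in with the oc CLI and confirm the required permissions before starting. The following is a sketch; the master URL and user name are placeholders for your own values:

oc login https://master.openshift.example.com:8443 -u admin
oc policy can-i create securitycontextconstraints
oc policy can-i create projectrequests
oc policy can-i update nodes

Each can-i command should answer yes for the installing user.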

5.3. Installation

Installing a three-node MBaaS in OpenShift Container Platform results in a resilient cluster:

  • MBaaS components are spread across all three nodes.
  • The MongoDB replica set is spread over three nodes.
  • MongoDB data is backed by persistent volumes.
  • A Nagios service with health checks and alerts is set up for all MBaaS components.

The installation consists of several phases. Before the installation, you must prepare your OpenShift Container Platform cluster:

  • Set up persistent storage - you need to create Persistent Volumes with specific parameters in OpenShift Container Platform.
  • Label the nodes - nodes need to be labeled in a specific way, to match the node selectors expected by the OpenShift Container Platform template of the MBaaS.
  • Network Configuration - configuring the SDN network plugin used in OpenShift Container Platform so that Cloud Apps can communicate with MongoDB in the MBaaS.

After the OpenShift Container Platform cluster is properly configured, install the MBaaS by running the Ansible playbook, as described in Section 5.3.2, "Installing the MBaaS".

5.3.1. Before The Installation

The installation procedure imposes certain requirements on your OpenShift Container Platform cluster in order to guarantee fault tolerance and stability.

5.3.1.1. Network Configuration

Cloud Apps in an MBaaS communicate directly with a MongoDB replica set. In order for this to work, the OpenShift Container Platform SDN must be configured to use the ovs-subnet SDN plugin. For more detailed information on configuring this, see Migrating Between SDN Plug-ins in the OpenShift documentation.
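To check which SDN plugin is currently in use, you can inspect the master configuration on a master node. This is a sketch; the path assumes a default OpenShift Container Platform 3.x installation:

grep networkPluginName /etc/origin/master/master-config.yaml

For the ovs-subnet plugin, the expected value is redhat/openshift-ovs-subnet.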

5.3.1.1.1. Making Project Networks Global

If you cannot use the ovs-subnet SDN plugin, you must make the network of the MBaaS project global after installation. For example, if you use the ovs-multitenant SDN plugin, projects must be configured as global. The following command is an example of how to make a project global:

oadm pod-network make-projects-global live-mbaas

To determine if projects are global, use the following command:

oc get netnamespaces

In the output, any project that is configured as global has a NETID value of 0.
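For example, a listing might resemble the following, where the default and live-mbaas projects are global (illustrative output; project names and NETID values will differ):

NAME          NETID
default       0
live-mbaas    0
my-project    5891234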

Note

If a project network is configured as global, you cannot reconfigure it to reduce network accessibility.

For further information on how to make projects global, see Making Project Networks Global in the OpenShift Container Platform documentation.

5.3.1.2. Persistent Storage Setup

Note

Ensure that the persistent volumes are configured according to the OpenShift documentation for configuring PersistentVolumes. If you are using NFS, see the Troubleshooting NFS Issues section for more information.

Some components of the MBaaS require persistent storage: MongoDB for storing databases, and Nagios for storing historical monitoring data.

As a minimum, make sure your OpenShift Container Platform cluster has the following persistent volumes in an Available state, with at least the amount of free space listed below:

  • Three 50 GB persistent volumes, one for each MongoDB replica
  • One 1 GB persistent volume for Nagios

For detailed information on PersistentVolumes and how to create them, see Persistent Storage in the OpenShift Container Platform documentation.
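You can confirm that suitable persistent volumes exist and are in the Available state by running oc get pv. The following output is purely illustrative; volume names, capacities, and reclaim policies will vary with your storage setup:

oc get pv
NAME        CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
mongodb-1   50Gi       RWO           Retain          Available                       1h
mongodb-2   50Gi       RWO           Retain          Available                       1h
mongodb-3   50Gi       RWO           Retain          Available                       1h
nagios      1Gi        RWO           Retain          Available                       1h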

5.3.1.3. Apply Node Labels for MBaaS

By applying labels to OpenShift Container Platform nodes, you can control which nodes the MBaaS components, MongoDB replicas, and Cloud Apps are deployed to.

This section describes the labeling considerations for MBaaS components and for MongoDB replicas.

Cloud Apps get deployed to nodes labeled with the default nodeSelector, which is usually set to type=compute (defined in the OpenShift Container Platform master configuration).

5.3.1.3.1. Labelling for MBaaS components

Red Hat recommends that MBaaS components are deployed to dedicated nodes and that these nodes are separated from other applications, for example, RHMAP Cloud Apps.

Refer to Infrastructure Sizing Considerations for Installation of RHMAP MBaaS for the recommended number of MBaaS nodes and Cloud App nodes for your configuration.

For example, if you have 12 nodes, the recommendation is:

  • Dedicate three nodes to MBaaS and MongoDB.
  • Dedicate three nodes to Cloud Apps.

The remaining nodes typically host the OpenShift Container Platform master and infrastructure components, as in the example node listing below.

To achieve this, apply a label, such as type=mbaas, to the three dedicated MBaaS nodes:

oc label node mbaas-1 type=mbaas
oc label node mbaas-2 type=mbaas
oc label node mbaas-3 type=mbaas

Then, when creating the MBaaS project, as described later in Section 5.3.2, “Installing the MBaaS”, set this label as the nodeSelector.
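For example, if the MBaaS project is named live-mbaas, the node selector can be set on the project namespace with an annotation. This is a sketch; the project name is a placeholder and the installer may set the selector for you:

oc annotate --overwrite namespace live-mbaas openshift.io/node-selector=type=mbaas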

You can check what type labels are applied to all nodes with the following command:

oc get nodes -L type
NAME          STATUS                     AGE       TYPE
ose-master    Ready,SchedulingDisabled   27d       master
infra-1       Ready                      27d       infra
infra-2       Ready                      27d       infra
app-1         Ready                      27d       compute
app-2         Ready                      27d       compute
app-3         Ready                      27d       compute
mbaas-1       Ready                      27d       mbaas
mbaas-2       Ready                      27d       mbaas
mbaas-3       Ready                      27d       mbaas

In this example, the deployment would be as follows:

  • Cloud apps get deployed to the three dedicated Cloud App nodes app-1, app-2, and app-3.
  • The MBaaS components get deployed to the three dedicated MBaaS nodes mbaas-1, mbaas-2, and mbaas-3 (if the nodeSelector is also set on the MBaaS Project).

5.3.1.3.2. Labelling for MongoDB replicas

In the production MBaaS template, the MongoDB replicas are spread over three MBaaS nodes. If you have more than three MBaaS nodes, any three of them can host the MongoDB replicas.

To apply the required labels (assuming the three nodes are named mbaas-1, mbaas-2, and mbaas-3):

oc label node mbaas-1 mbaas_id=mbaas1
oc label node mbaas-2 mbaas_id=mbaas2
oc label node mbaas-3 mbaas_id=mbaas3

You can verify the labels were applied correctly by running this command:

oc get nodes -L mbaas_id
NAME          STATUS                     AGE       MBAAS_ID
10.10.0.102   Ready                      27d       <none>
10.10.0.117   Ready                      27d       <none>
10.10.0.141   Ready                      27d       <none>
10.10.0.157   Ready                      27d       mbaas3
10.10.0.19    Ready,SchedulingDisabled   27d       <none>
10.10.0.28    Ready                      27d       mbaas1
10.10.0.33    Ready                      27d       <none>
10.10.0.4     Ready                      27d       <none>
10.10.0.99    Ready                      27d       mbaas2

See Updating Labels on Nodes in the OpenShift Container Platform documentation for more information on how to apply labels to nodes.

5.3.1.3.2.1. Why are MongoDB replicas spread over multiple nodes?

Each MongoDB replica is scheduled to a different node to support failover.

For example, if all three MongoDB replicas were scheduled to the same node and that node failed, the data would be completely inaccessible. Setting a different nodeSelector for each MongoDB DeploymentConfig, and having a corresponding OpenShift Container Platform node in the cluster matching that label, ensures that the MongoDB pods get scheduled to different nodes.

In the production MBaaS template, there is a different nodeSelector for each MongoDB DeploymentConfig:

  • mbaas_id=mbaas1 for mongodb-1
  • mbaas_id=mbaas2 for mongodb-2
  • mbaas_id=mbaas3 for mongodb-3
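After installation, you can confirm these node selectors on the MongoDB DeploymentConfigs with a command such as the following (a sketch; live-mbaas is a placeholder project name):

# Each line should print the corresponding mbaas_id label
for i in 1 2 3; do
  oc get dc mongodb-$i -n live-mbaas -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
done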

5.3.2. Installing the MBaaS

5.3.2.1. Setting Variables

The variables required for installation of RHMAP MBaaS are set in the following file:

/opt/rhmap/4.7/rhmap-installer/roles/deploy-mbaas/defaults/main.yml

Set up the monitoring parameters with SMTP server details, which are required to enable email alerting from Nagios. If you do not require email alerting or want to set it up at a later time, the sample values can be used.

monitoring:
  smtp_server: "localhost"
  smtp_username: "username"
  smtp_password: "password"
  smtp_from_address: "nagios@example.com"
  rhmap_admin_email: "root@localhost"

5.3.2.2. Run the Playbook

To provision a 1-node MBaaS, enter:

ansible-playbook -i my-inventory-file playbooks/1-node-mbaas.yml

To provision a 3-node MBaaS, enter:

ansible-playbook -i my-inventory-file playbooks/3-node-mbaas.yml
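The playbook paths are relative to the installer directory, so assuming the default installation location shown in Section 5.3.2.1, a complete invocation might look as follows, where my-inventory-file is a placeholder for your Ansible inventory file:

cd /opt/rhmap/4.7/rhmap-installer
ansible-playbook -i my-inventory-file playbooks/3-node-mbaas.yml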

5.3.3. Verifying The Installation

  1. Ping the health endpoint.

    If all services are created, all Pods are running, and the route is exposed, the MBaaS health endpoint can be queried as follows:

    curl `oc get route mbaas --template "{{.spec.host}}"`/sys/info/health

    The endpoint responds with health information about the various MBaaS components and their dependencies. If there are no errors reported, the MBaaS is ready to be configured for use in the Studio. Successful output will resemble the following:

    {
      "status": "ok",
      "summary": "No issues to report. All tests passed without error",
      "details": [
        {
          "description": "Check Mongodb connection",
          "test_status": "ok",
          "result": {
            "id": "mongodb",
            "status": "OK",
            "error": null
          },
          "runtime": 33
        },
        {
          "description": "Check fh-messaging running",
          "test_status": "ok",
          "result": {
            "id": "fh-messaging",
            "status": "OK",
            "error": null
          },
          "runtime": 64
        },
        {
          "description": "Check fh-metrics running",
          "test_status": "ok",
          "result": {
            "id": "fh-metrics",
            "status": "OK",
            "error": null
          },
          "runtime": 201
        },
        {
          "description": "Check fh-statsd running",
          "test_status": "ok",
          "result": {
            "id": "fh-statsd",
            "status": "OK",
            "error": null
          },
          "runtime": 7020
        }
      ]
    }
  2. Verify that all Nagios checks are passing.

    Log in to the Nagios dashboard of the MBaaS by following the steps in the Accessing the Nagios Dashboard section in the Operations Guide.

    After logging in to the Nagios Dashboard, all checks under the left-hand-side Services menu should be indicated as OK.
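    If you need the dashboard URL, you can read it from the exposed route in the MBaaS project, assuming the route is named nagios as in the default MBaaS template:

    echo "https://"$(oc get route/nagios -o template --template {{.spec.host}})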

See the Troubleshooting guide if any of the checks are not in an OK state.

After verifying that the MBaaS is installed correctly, you must create an MBaaS target for the new MBaaS in the Studio.

5.4. Creating an MBaaS Target

  1. In the Studio, navigate to the Admin > MBaaS Targets section. Click Create MBaaS Target.
  2. Enter the following information:

    • MBaaS Id - a unique ID for the MBaaS, for example: live-mbaas.
    • OpenShift Master URL - the URL of the OpenShift Container Platform master, for example, https://master.openshift.example.com:8443.
    • OpenShift Router DNS - a wildcard DNS entry of the OpenShift Container Platform router, for example, *.cloudapps.example.com.
    • MBaaS Service Key

      Equivalent to the value of the FHMBAAS_KEY environment variable, which is automatically generated during installation. To retrieve this value, run the following command:

      oc env dc/fh-mbaas --list | grep FHMBAAS_KEY

      Alternatively, you can find the value in the OpenShift Container Platform Console, in the deployment configuration of fh-mbaas, under the Environment section.

    • MBaaS URL

      The URL of the route exposed for the fh-mbaas-service, including the https protocol prefix. This can be retrieved from the OpenShift Container Platform web console, or by running the following command:

      echo "https://"$(oc get route/mbaas -o template --template {{.spec.host}})
    • MBaaS Project URL - (Optional) URL where the OpenShift Container Platform MBaaS project is available, for example, https://mbaas-mymbaas.openshift.example.com:8443/console/project/my-mbaas/overview.
    • Nagios URL - (Optional) Exposed route where Nagios is running in OpenShift Container Platform, for example, https://nagios-my-mbaas.openshift.example.com.
  3. Click Save MBaaS and you will be directed to the MBaaS Status screen. The status should be reported back in less than a minute.

Once the process of creating the MBaaS target has successfully completed, you can see the new MBaaS in the list of MBaaS targets.

[Figure: OpenShift Container Platform MBaaS target]

In your OpenShift Container Platform account, you can see the MBaaS represented by a project.

[Figure: OpenShift Container Platform MBaaS target]
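From the command line, the same project can be listed as follows, live-mbaas being the example MBaaS ID used earlier:

oc get project live-mbaas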

5.5. After Installation