MBaaS Administration and Installation Guide

Red Hat Mobile Application Platform 4.3

For Red Hat Mobile Application Platform 4.3

Red Hat Customer Content Services

Abstract

This document provides guides related to installation and administration of the RHMAP 4.x MBaaS on OpenShift 3.

Chapter 1. RHMAP 4.x MBaaS

1.1. Overview

Red Hat Mobile Application Platform (RHMAP) 4.x has a hybrid deployment model — the Core, the MBaaS, and the Build Farm are deployed in different locations.

  • Development and management of apps occurs in the multi-tenant cloud instance of the RHMAP Core hosted by Red Hat.
  • Application data, runtime, and integrations are deployed to the RHMAP MBaaS installed in a private or public instance of OpenShift Enterprise 3.
  • The Build Farm is deployed separately from the Core and the MBaaS and is shared between all instances of RHMAP. Third-party Linux, Windows, and Apple server hosting providers are used to support building Client App binaries for all platforms.

The Mobile Backend-as-a-Service (MBaaS) is a core component of RHMAP: the back-end platform hosting containerized Cloud Apps in conjunction with database storage (MongoDB). The Cloud Apps deployed in an MBaaS can make use of RHMAP APIs, such as data synchronization, caching, or push notifications, and integrate with enterprise systems or other Cloud Services.

1.2. Architecture of the MBaaS

The RHMAP MBaaS 4.x is built on top of several technologies, including OpenShift Enterprise 3, Kubernetes, Docker, and Red Hat Software Collections. The MBaaS consists of several components, each running in its own Docker container. Similarly, every Cloud App deployed to the MBaaS runs in a Docker container. Those containers are deployed and orchestrated by Kubernetes.

MBaaS deployment diagram

In the MBaaS, users can configure multiple isolated runtime and storage environments to support software development life-cycle stages, such as development, testing, and production. Each environment can host multiple Cloud Apps.

1.3. Security considerations

Since the MBaaS is not hosted in Red Hat’s public multi-tenant cloud, the data transmitted between the mobile device and the Cloud App does not pass through any servers operated by Red Hat or any other third party. Private data from backend systems is transmitted directly between mobile devices and the MBaaS.

The following data still resides in the RHMAP Core:

  • User names and passwords of RHMAP accounts
  • Master database of the Core, with entries for projects, apps, and their IDs
  • Git repositories hosting the source code of Client and Cloud Apps
  • App store containing the built binaries of Client Apps

Chapter 2. Installing the MBaaS

Installation of an MBaaS consists of the following key steps:

  1. Follow the steps in Preparing Infrastructure for Installation.

    The nodes running RHMAP must have RHEL installed, be a part of an OpenShift cluster, and be registered with Red Hat Subscription Manager.

  2. Follow the Provisioning an RHMAP 4.x MBaaS in OpenShift 3 guide.

    As part of the provisioning process, OpenShift automatically downloads RHMAP container images from the Red Hat Docker registry.

  3. Adjust system resource usage of MBaaS components.

    If the MBaaS components are deployed on dedicated nodes in your cluster (separate from Cloud Apps), we strongly recommend that you adjust the resource limits of MBaaS components to take full advantage of the available system resources.

    Follow the Adjusting System Resource Usage of the MBaaS and Cloud Apps guide for detailed steps.

  4. Enable additional features.

Chapter 3. Provisioning an RHMAP 4.x MBaaS in OpenShift 3

3.1. Overview

An OpenShift 3 cluster can serve as an MBaaS target and host your Cloud Apps and Cloud Services. This guide provides detailed steps to deploy the RHMAP 4.x MBaaS on an OpenShift 3 cluster.

You can choose a simple automated installation to preview and test the MBaaS, or follow the manual installation steps for a fully supported production-ready MBaaS:

  • Automatic Installation

    You can quickly try the RHMAP 4.x MBaaS by choosing the automatic installation.

    The following limitations apply to the automatically installed MBaaS:

    • not suitable for production use
    • single replica for each MBaaS component
    • single MongoDB replica with no persistent storage
  • Manual Installation

    For production use, follow the manual installation procedure, which results in an MBaaS with the following characteristics:

    • suitable for production use
    • three replicas defined for each MBaaS component (with the exception of fh-statsd)
    • three MongoDB replicas with a 50GB persistent storage requirement each

      • nodeSelectors of mbaas_id=mbaas1, mbaas_id=mbaas2, and mbaas_id=mbaas3 for the MongoDB replicas

3.2. Prerequisites

This guide assumes several prerequisites are met before the installation:

  • All nodes in the cluster must be registered with the Red Hat Subscription Manager and have RHMAP entitlement certificates downloaded. See Preparing Infrastructure for Installation for detailed steps.
  • The MBaaS requires outbound internet access to perform npm installations. Make sure that all relevant nodes have outbound internet access before installation. If you use a proxy, see the appropriate step in the MBaaS Installation procedure.
  • An existing OpenShift installation, version 3.2, 3.3 or 3.4.
  • The OpenShift master and router must be accessible from the RHMAP Core.
  • A wildcard DNS entry must be configured for the OpenShift router IP address.
  • A trusted wildcard certificate must be configured for the OpenShift router. See Using Wildcard Certificates in OpenShift documentation.
  • Image streams and images in the openshift namespace must be updated to the latest version. Refer to sections 3.3.8. Updating the Default Image Streams and Templates and 3.3.9. Importing the Latest Images in the OpenShift 3.2 Installation and Configuration guide.
  • You must have administrative access to the OpenShift cluster using the oc CLI tool, enabling you to:

    • Create a project, and any resource typically found in a project (for example, deployment configuration, service, route).
    • Edit a namespace definition.
    • Create a security context constraint.
    • Manage nodes, specifically labels.
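
The following commands are a quick, optional sanity check for some of the prerequisites above. The hostname is an example; substitute values from your environment:

# Wildcard DNS: any host under the router domain should resolve to the OpenShift router IP.
dig +short anything.cloudapps.example.com

# Administrative access with the oc CLI.
oc whoami
oc get nodes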

For information on installation and management of an OpenShift cluster and its users, see the official OpenShift documentation.

3.3. Automatic Installation

The automatic installation of an MBaaS in OpenShift 3 results in the MBaaS components being installed on nodes chosen by the OpenShift scheduler. Only a single instance of each component runs at any time, which makes the MBaaS susceptible to downtime if a single node fails. The data of the MongoDB database is not backed by any permanent storage and is therefore transient.

Warning

The automatic installation procedure must not be used in production environments. You should only use this procedure for evaluation purposes, since the provided template is not optimized for resiliency and stability required in production environments. Follow the manual installation steps for production use.

There are no setup steps required before the automatic installation. Refer to Creating an MBaaS Target to continue the installation.

Note

In order for automatic MBaaS installation to work, the OpenShift SDN must be configured to use the ovs-subnet SDN plugin (this is the default). If it is not set to this, refer to Network Configuration.

3.4. Manual Installation

The manual installation of an MBaaS in OpenShift 3 results in a resilient three-node cluster:

  • MBaaS components are spread across all three nodes.
  • MongoDB replica set is spread over three nodes.
  • MongoDB data is backed by persistent volumes.
  • A Nagios service with health checks and alerts is set up for all MBaaS components.

The installation consists of several phases. Before the installation, you must prepare your OpenShift cluster:

  • Set up persistent storage - you need to create Persistent Volumes with specific parameters in OpenShift.
  • Label the nodes - nodes need to be labeled in a specific way, to match the node selectors expected by the OpenShift template of the MBaaS.
  • Network Configuration - configuring the SDN network plugin used in OpenShift so that Cloud Apps can communicate with MongoDB in the MBaaS.

After the OpenShift cluster is properly configured, proceed to Section 3.4.2, “Installing the MBaaS”.

3.4.1. Before The Installation

The manual installation procedure poses certain requirements on your OpenShift cluster in order to guarantee fault tolerance and stability.

3.4.1.1. Network Configuration

Cloud Apps in an MBaaS communicate directly with a MongoDB replica set. In order for this to work, the OpenShift SDN must be configured to use the ovs-subnet SDN plugin. For more detailed information on configuring this, see Migrating Between SDN Plug-ins in the OpenShift Enterprise documentation.
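
As a quick check, you can inspect which network plugin is configured on the OpenShift master. The file path below assumes a default OpenShift Enterprise 3 installation; adjust it if your master configuration is stored elsewhere:

grep networkPluginName /etc/origin/master/master-config.yaml

For the MBaaS, the expected value is networkPluginName: redhat/openshift-ovs-subnet.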

3.4.1.1.1. Making Project Networks Global

If you cannot use the ovs-subnet SDN plugin, you must make the network of the MBaaS project global after installation. For example, if you use the ovs-multitenant SDN plugin, projects must be configured as global. The following command is an example of how to make a project global:

oadm pod-network make-projects-global live-mbaas

To determine if projects are global, use the following command:

oc get netnamespaces

In the output, projects that are configured as global have a NETID value of 0.
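
For example, the output might resemble the following; the project names and NETID values are illustrative:

NAME              NETID
default           0
live-mbaas        0
my-other-project  9437522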

Note

If a project network is configured as global, you cannot reconfigure it to reduce network accessibility.

For further information on how to make projects global, see Making Project Networks Global in the OpenShift Enterprise documentation.

3.4.1.2. Persistent Storage Setup

Note

Ensure that any NFS server and shares that you use are configured according to the OpenShift documentation for configuring NFS PersistentVolumes. See the Troubleshooting NFS Issues section for more information on configuring NFS.

Some components of the MBaaS require persistent storage. For example, MongoDB for storing databases, and Nagios for storing historical monitoring data.

As a minimum, make sure your OpenShift cluster has the following persistent volumes in an Available state, with at least the amount of free space listed below:

  • Three 50 GB persistent volumes, one for each MongoDB replica
  • One 1 GB persistent volume for Nagios

For detailed information on PersistentVolumes and how to create them, see Persistent Storage in the OpenShift Enterprise documentation.

3.4.1.3. Apply Node Labels

By applying labels to OpenShift nodes, you can control which nodes the MBaaS components, MongoDB replicas, and Cloud Apps are deployed to.

This section describes labeling considerations for the MBaaS components and for the MongoDB replicas.

Cloud Apps get deployed to nodes labeled with the default nodeSelector, which is usually set to type=compute (defined in the OpenShift master configuration).

You can skip this entire labeling section if your OpenShift cluster only has a single schedulable node. In such case, all MBaaS components, MongoDB replicas, and Cloud Apps will necessarily run on that single node.

3.4.1.3.1. Labelling for MBaaS components

It is recommended, but not required, to deploy the MBaaS components to dedicated nodes, separate from other applications (such as RHMAP Cloud Apps).

Refer to Infrastructure Sizing Considerations for Installation of RHMAP MBaaS for the recommended number of MBaaS nodes and Cloud App nodes for your configuration.

For example, if you have 12 nodes, the recommendation is:

  • Dedicate three nodes to MBaaS and MongoDB.
  • Dedicate three nodes to Cloud Apps.

To achieve this, apply a label, such as type=mbaas, to the three dedicated MBaaS nodes:

oc label node mbaas-1 type=mbaas
oc label node mbaas-2 type=mbaas
oc label node mbaas-3 type=mbaas

Then, when creating the MBaaS project, as described later in Section 3.4.2, “Installing the MBaaS”, set this label as the nodeSelector.

You can check what type labels are applied to all nodes with the following command:

oc get nodes -L type
NAME          STATUS                     AGE       TYPE
ose-master    Ready,SchedulingDisabled   27d       master
infra-1       Ready                      27d       infra
infra-2       Ready                      27d       infra
app-1         Ready                      27d       compute
app-2         Ready                      27d       compute
app-3         Ready                      27d       compute
mbaas-1       Ready                      27d       mbaas
mbaas-2       Ready                      27d       mbaas
mbaas-3       Ready                      27d       mbaas

In this example, the deployment would be as follows:

  • Cloud apps get deployed to the three dedicated Cloud App nodes app-1, app-2, and app-3.
  • The MBaaS components get deployed to the three dedicated MBaaS nodes mbaas-1, mbaas-2, and mbaas-3 (if the nodeSelector is also set on the MBaaS Project).

3.4.1.3.2. Labelling for MongoDB replicas

In the production MBaaS template, the MongoDB replicas are spread over three MBaaS nodes. If you have more than three MBaaS nodes, any three of them can host the MongoDB replicas.

To apply the required labels (assuming the three nodes are named mbaas-1, mbaas-2, and mbaas-3):

oc label node mbaas-1 mbaas_id=mbaas1
oc label node mbaas-2 mbaas_id=mbaas2
oc label node mbaas-3 mbaas_id=mbaas3

You can verify the labels were applied correctly by running this command:

oc get nodes -L mbaas_id
NAME          STATUS                     AGE       MBAAS_ID
10.10.0.102   Ready                      27d       <none>
10.10.0.117   Ready                      27d       <none>
10.10.0.141   Ready                      27d       <none>
10.10.0.157   Ready                      27d       mbaas3
10.10.0.19    Ready,SchedulingDisabled   27d       <none>
10.10.0.28    Ready                      27d       mbaas1
10.10.0.33    Ready                      27d       <none>
10.10.0.4     Ready                      27d       <none>
10.10.0.99    Ready                      27d       mbaas2

See Updating Labels on Nodes in the OpenShift documentation for more information on how to apply labels to nodes.

3.4.1.3.2.1. Why are MongoDB replicas spread over multiple nodes?

Each MongoDB replica is scheduled to a different node to support failover.

For example, if all three MongoDB replicas were scheduled on a single OpenShift node and that node failed, the data would be completely inaccessible. Setting a different nodeSelector for each MongoDB DeploymentConfig, and having a corresponding OpenShift node in the cluster matching each label, ensures that the MongoDB pods are scheduled to different nodes.

In the production MBaaS template, there is a different nodeSelector for each MongoDB DeploymentConfig:

  • mbaas_id=mbaas1 for mongodb-1
  • mbaas_id=mbaas2 for mongodb-2
  • mbaas_id=mbaas3 for mongodb-3

Excerpt of DeploymentConfig of mongodb-1

{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": {
    "name": "mongodb-1",
    "labels": {
      "name": "mongodb"
    }
  },
  "spec": {
    ...
    "template": {
      ...
      "spec": {
        "nodeSelector": {
          "mbaas_id": "mbaas1"
        }

3.4.2. Installing the MBaaS

3.4.2.1. MBaaS Templates

The following templates can be used to provision an MBaaS in OpenShift 3 using the oc client:

  • fh-mbaas-template-1node.json (Automatic Installation Template)

    Important

    This template is not recommended for production use.

Use this template to create an MBaaS with the same characteristics as the MBaaS template created by selecting the Automatic Installation from RHMAP Studio:

    • a single MongoDB replica with no persistent storage requirement
    • a single replica for each MBaaS component

  • fh-mbaas-template-1node-persistent.json

    Important

    This template is not recommended for production use.

Use this template to create an MBaaS with characteristics that are similar to the Automatic Installation template, but also include persistent storage and a Nagios component for monitoring:

    • a single MongoDB replica with a 25 GiB PersistentVolume requirement
    • a single replica defined for each MBaaS component
    • a Nagios service with a 1 GiB PersistentVolume requirement

  • fh-mbaas-template-3node.json (Manual Installation Template)

    Use this template to create an MBaaS with the same characteristics as the MBaaS template created by selecting the Manual Installation from RHMAP Studio. This is the template recommended for production use:

    • three MongoDB replicas with a 50GiB PersistentVolume requirement each
    • a Nagios service with a 1GiB PersistentVolume requirement
    • three replicas defined for each MBaaS component (with the exception of fh-statsd)
    • nodeSelectors of mbaas_id=mbaas1, mbaas_id=mbaas2, and mbaas_id=mbaas3 for each of the MongoDB replicas

3.4.2.2. Installation

In this step, you will provision the MBaaS to the OpenShift cluster from the command line, based on the MBaaS OpenShift template.

First, download the latest version of the MBaaS OpenShift template.

  1. In the Studio, navigate to the Admin > MBaaS Targets section. Click New MBaaS Target.
  2. Choose OpenShift 3 as Type.
  3. At the bottom of the page, click Download Template and save the template file fh-mbaas-template-3node.json. You may now close the New MBaaS Target screen.

Using the downloaded template, provision the MBaaS in the OpenShift cluster from the command line. For general information about the OpenShift CLI, see CLI Operations in the OpenShift Enterprise documentation.

  1. Create a new project.

    Log in to OpenShift as an administrator. You will be prompted for credentials.

    oc login <public URL of the OpenShift master>

    Create the project:

    Warning

    The name of the OpenShift project chosen here must have the suffix -mbaas. The part of the name before -mbaas is used later in this guide as the ID of the MBaaS target associated with this OpenShift project. For example, if the ID of the MBaaS target is live, the OpenShift project name set here must be live-mbaas.

    oc new-project live-mbaas
  2. Set the node selector of the project to target MBaaS nodes.

    This ensures that all MBaaS components are deployed to the dedicated MBaaS nodes.

    Note

    If you have chosen not to have dedicated MBaaS nodes in Section 3.4.1.3.1, “Labelling for MBaaS components”, skip this step.

    Set the openshift.io/node-selector annotation to type=mbaas in the project’s namespace:

    Note

    You may need to add this annotation if it is missing.

    oc edit ns live-mbaas
    apiVersion: v1
    kind: Namespace
    metadata:
     annotations:
       openshift.io/node-selector: type=mbaas
    ...
  3. Provide SMTP server details for email alerts.

    The Nagios monitoring software, which is a part of the MBaaS template, sends alerts over email through a user-provided SMTP server.

    Create a ServiceAccount for the monitoring container, and give it the admin role.

    oc create serviceaccount nagios
    oc policy add-role-to-user admin -z nagios

    Set the following environment variables with values appropriate for your environment:

    export SMTP_SERVER="localhost"
    export SMTP_USERNAME="username"
    export SMTP_PASSWORD="password"
    export SMTP_FROM_ADDRESS="nagios@example.com"
    export RHMAP_ADMIN_EMAIL="root@localhost"

    If you do not need email alerts or want to set it up later, use the values provided in the sample above.

  4. Configure the proxy settings if a proxy is required for outbound Internet access.

    RHMAP can be configured to use a proxy for outbound internet access. Skip this step if you do not use a proxy.

    Run the following command, using your proxy IP address and port:

    export PROXY_URL="http://<proxy-host>:<proxy-port>"

    If the proxy requires authentication, run the following command instead, using your proxy username and password:

    export PROXY_URL="http://<username>:<password>@<proxy-host>:<proxy-port>"
  5. Create all the MBaaS resources from the template.

    oc new-app -f fh-mbaas-template-3node.json \
    -p SMTP_SERVER="${SMTP_SERVER}" \
    -p SMTP_USERNAME="${SMTP_USERNAME}" \
    -p SMTP_PASSWORD="${SMTP_PASSWORD}" \
    -p SMTP_FROM_ADDRESS="${SMTP_FROM_ADDRESS}" \
    -p RHMAP_ADMIN_EMAIL="${RHMAP_ADMIN_EMAIL}" \
    -p PROXY_URL="${PROXY_URL}"

    After all the resources are created, you should see output similar to the following:

    --> Deploying template fh-mbaas for "fh-mbaas"
         With parameters:
          MONGODB_FHMBAAS_USER=u-mbaas
          ...
    
    --> Creating resources ...
        configmap "fh-mbaas-info" created
        service "fh-mbaas-service" created
        service "fh-messaging-service" created
        service "fh-metrics-service" created
        service "fh-statsd-service" created
        service "mongodb-1" created
        service "mongodb-2" created
        service "mongodb-3" created
        service "nagios" created
        serviceaccount "nagios" created
        rolebinding "nagiosadmin" created
        deploymentconfig "fh-mbaas" created
        deploymentconfig "fh-messaging" created
        deploymentconfig "fh-metrics" created
        deploymentconfig "fh-statsd" created
        deploymentconfig "nagios" created
        persistentvolumeclaim "mongodb-claim-1" created
        persistentvolumeclaim "mongodb-claim-2" created
        persistentvolumeclaim "mongodb-claim-3" created
        deploymentconfig "mongodb-1" created
        deploymentconfig "mongodb-2" created
        deploymentconfig "mongodb-3" created
        pod "mongodb-initiator" created
        route "mbaas" created
        route "nagios" created
    
    --> Success
        Run 'oc status' to view your app.

    It may take a minute for all the resources to get created and up to 10 minutes for all the components to get to a Running status.
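
    To watch the progress, you can list the pods in the project and wait until all the components reach the Running state (press Ctrl+C to stop watching):

    oc get pods -n live-mbaas -w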

3.4.3. Verifying The Installation

  1. Ping the health endpoint.

    If all services are created, all pods are running, and the route is exposed, the MBaaS health endpoint can be queried as follows:

    curl `oc get route mbaas --template "{{.spec.host}}"`/sys/info/health

    The endpoint responds with health information about the various MBaaS components and their dependencies. If there are no errors reported, the MBaaS is ready to be configured for use in the Studio. Successful output will resemble the following:

    {
      "status": "ok",
      "summary": "No issues to report. All tests passed without error",
      "details": [
        {
          "description": "Check Mongodb connection",
          "test_status": "ok",
          "result": {
            "id": "mongodb",
            "status": "OK",
            "error": null
          },
          "runtime": 33
        },
        {
          "description": "Check fh-messaging running",
          "test_status": "ok",
          "result": {
            "id": "fh-messaging",
            "status": "OK",
            "error": null
          },
          "runtime": 64
        },
        {
          "description": "Check fh-metrics running",
          "test_status": "ok",
          "result": {
            "id": "fh-metrics",
            "status": "OK",
            "error": null
          },
          "runtime": 201
        },
        {
          "description": "Check fh-statsd running",
          "test_status": "ok",
          "result": {
            "id": "fh-statsd",
            "status": "OK",
            "error": null
          },
          "runtime": 7020
        }
      ]
    }
  2. Verify that all Nagios checks are passing.

    Log in to the Nagios dashboard of the MBaaS by following the steps in the Accessing the Nagios Dashboard section in the Operations Guide.

    After logging in to the Nagios Dashboard, all checks under the left-hand-side Services menu should be indicated as OK. If any of the checks are not in an OK state, consult the Troubleshooting and Debugging guide, which explains the likely causes and fixes for common problems.

After verifying that the MBaaS is installed correctly, you must create an MBaaS target for the new MBaaS in the Studio.

3.5. Creating an MBaaS Target

  1. In the Studio, navigate to the Admin > MBaaS Targets section. Click New MBaaS Target.
  2. You are presented with two options for the Deployment Type: Manual (Recommended) or Automatic.

3.5.1. Automatic MBaaS Target Creation

  1. For the Deployment Type, click Automatic.
  2. Enter the following information:

    • MBaaS Id - a unique ID for the MBaaS, for example, live. The ID must be equal to the OpenShift project name chosen in the Installing the MBaaS section, without the -mbaas suffix.
    • OpenShift Master URL - the URL of the OpenShift master, for example, https://master.openshift.example.com:8443.
    • OpenShift Router DNS - a wildcard DNS entry of the OpenShift router, for example, *.cloudapps.example.com.
    • OpenShift API Token - an API Token is a short-lived authentication token that allows RHMAP to interact with the OpenShift installation. A new API Token can be obtained from the OpenShift master: https://master_host/oauth/token/request.
  3. Click Save MBaaS and you will be directed to the MBaaS Status screen. It can take several minutes before the status is reported back.

3.5.2. Manual MBaaS Target Creation

  1. For the Deployment Type, click Manual (Recommended).
  2. Enter the following information:

    • MBaaS Id - a unique ID for the MBaaS, for example, live. The ID must be equal to the OpenShift project name chosen in the Installing the MBaaS section, without the -mbaas suffix.
    • OpenShift Master URL - the URL of the OpenShift master, for example, https://master.openshift.example.com:8443.
    • OpenShift Router DNS - a wildcard DNS entry of the OpenShift router, for example, *.cloudapps.example.com.
    • MBaaS Service Key

      Equivalent to the value of the FHMBAAS_KEY environment variable, which is automatically generated during installation. To find out this value, run the following command:

      oc env dc/fh-mbaas --list | grep FHMBAAS_KEY

      Alternatively, you can find the value in the OpenShift Console, in the Details tab of the fh-mbaas deployment, in the Env Vars section.

    • MBaaS URL

      A URL of the route exposed for the fh-mbaas-service, including the https protocol prefix. This can be retrieved from the OpenShift web console, or by running the following command:

      echo "https://"$(oc get route/mbaas -o template --template {{.spec.host}})
    • MBaaS Project URL - (Optional) The URL where the OpenShift MBaaS project is available, for example, https://mbaas-mymbaas.openshift.example.com:8443/console/project/my-mbaas/overview.
    • Nagios URL - (Optional) The exposed route where Nagios is running in OpenShift 3, for example, https://nagios-my-mbaas.openshift.example.com.
  3. Click Save MBaaS and you will be directed to the MBaaS Status screen. For a manual installation, the status should be reported back in less than a minute.

Once the process of creating the MBaaS has successfully completed, you can see the new MBaaS in the list of MBaaS targets.

OpenShift 3 MBaaS target

In your OpenShift account, you can see the MBaaS represented by a project.

OpenShift 3 MBaaS target

3.6. After Installation

Chapter 4. Adjusting System Resource Usage of the MBaaS and Cloud Apps

4.1. Overview

In the RHMAP 4.x MBaaS based on OpenShift 3, each Cloud App and each MBaaS component runs in its own container. This architecture allows for granular control of CPU and memory consumption. A fine level of control of system resources helps to ensure efficient use of nodes, and to guarantee uninterrupted operation of critical services.

An application can be prepared for various situations, such as high peak load or sustained load, by making decisions about the resource limits of individual components. For example, you could decide that MongoDB must keep working at all times, and assign it a high, guaranteed amount of resources. At the same time, if the availability of a front-end Node.js server is less critical, the server can be assigned fewer initial resources, with the possibility to use more resources when available.

4.2. Prerequisites

The system resources of MBaaS components and Cloud Apps in the MBaaS can be regulated using the mechanisms available in OpenShift – resource requests, limits, and quota. Before proceeding with the instructions in this guide, we advise you to read the Quotas and Limit Ranges section in the OpenShift documentation.

4.3. Adjusting Resource Usage of the MBaaS

The RHMAP MBaaS is composed of several components, each represented by a single container running in its own pod. Each container has default resource requests and limits assigned in the MBaaS OpenShift template. See the section Overview of Resource Usage of MBaaS Components for a complete reference of the default values.

Depending on the deployment model of the MBaaS, you may have to adjust the resource limits and requests to fit your environment. If the MBaaS components are deployed on the same nodes as the Cloud Apps, there is no adjustment required.

However, when the MBaaS components are deployed on nodes dedicated to running the MBaaS only, it is strongly recommended to adjust the resource limits to take full advantage of the available resources on the dedicated nodes.

4.3.1. Calculating the Appropriate Resource Requests and Limits

Note

This section refers to CPU resources in two different terms – the commonly used term vCPU (virtual CPU), and the term millicores used in OpenShift documentation. The unit of 1 vCPU is equal to 1000 m (millicores), which is equivalent to 100% of the time of one CPU core.

The resource limits must be set accordingly for your environment and depend on the characteristics of load on your Cloud Apps. However, the following rules can be used as a starting point for adjustments of resource limits:

  • Allow 2 GiB of RAM and 1 vCPU for the underlying operating system.
  • Split the remainder of resources in equal parts amongst the MBaaS Components.

4.3.1.1. Example

Given a virtual machine with 16 GiB of RAM and 4 vCPUs, we allow 2 GiB of RAM and 1 vCPU for the operating system. This leaves 14 GiB of RAM and 3 vCPUs (equal to 3000 m) to distribute amongst the 5 MBaaS components.

14 GiB / 5 = 2.8 GiB of RAM per component
3000 m / 5 = 600 m per component

In this example, the resource limit for each MBaaS component would be 2.8 GiB of RAM and 600 millicores of CPU. Depending on the desired level of quality of service of each component, set the resource request values as described in the section Quality of service tiers in the OpenShift documentation.

4.3.2. Overview of Resource Usage of MBaaS Components

The following table lists the components of the MBaaS, their idle resource usage, default resource request, and default resource limit.

MBaaS component   Idle RAM usage   RAM request   RAM limit   Idle CPU usage   CPU request   CPU limit
fh-mbaas          160 MiB          200 MiB       800 MiB     <1%              200 m         800 m
fh-messaging      160 MiB          200 MiB       400 MiB     <1%              200 m         400 m
fh-metrics        120 MiB          200 MiB       400 MiB     <1%              200 m         400 m
fh-statsd         75 MiB           200 MiB       400 MiB     <1%              200 m         400 m
mongodb           185 MiB          200 MiB       1000 MiB    <1%              200 m         1000 m

4.4. Adjusting Resource Usage of Cloud Apps

The resource requests and limits of Cloud Apps can be set the same way as for MBaaS components. There is no particular guideline for doing the adjustment in Cloud Apps.

4.4.1. Overview of Resource Usage of Cloud App Components

Cloud App component   Idle RAM usage   RAM request   RAM limit   Idle CPU usage   CPU request   CPU limit
nodejs-frontend       125 MiB          90 MiB        250 MiB     <1%              100 m         500 m
redis                 8 MiB            100 MiB       500 MiB     <1%              100 m         500 m

4.5. Setting Resource Requests and Limits

The procedure for setting the resource requests and limits is the same for both MBaaS components and Cloud App components.

  1. Identify the name of the deployment configuration.

  2. Open the editor for the deployment configuration, for example fh-mbaas:

    oc edit dc fh-mbaas
  3. Edit the requests and limits.

    The DeploymentConfig contains two resources sections with equivalent values: one in the spec.strategy section, and another in the spec.template.spec.containers section. Set the cpu and memory values of requests and limits as necessary, making sure the values stay equivalent between the two sections, and save the file.

    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: fh-mbaas
      ...
    spec:
      ...
      strategy:
        resources:
          limits:
            cpu: 800m
            memory: 800Mi
          requests:
            cpu: 200m
            memory: 200Mi
      ...
      template:
        ...
        spec:
          containers:
            ...
            resources:
              limits:
                cpu: 800m
                memory: 800Mi
              requests:
                cpu: 200m
                memory: 200Mi
    Note

    Changing the deployment configuration triggers a new deployment. Once the deployment is complete, the resource limits are updated.
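
    To follow the progress of the new deployment, you can use one of the following commands; fh-mbaas matches the example above, and the choice depends on your oc client version:

    oc rollout status dc/fh-mbaas    # newer oc clients (OpenShift 3.4 and later)
    oc deploy fh-mbaas               # older oc clients: prints the status of the most recent deployment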

For more information on resources, see Deployment Resources in the OpenShift documentation.

4.6. Using Cluster Metrics to Visualize Resource Consumption

It is possible to view the immediate and historical resource usage of pods and containers in the form of donut charts and line charts using the Cluster Metrics deployment in OpenShift. Refer to Enabling Cluster Metrics in the OpenShift documentation for steps to enable cluster metrics.

Once cluster metrics are enabled, in the OpenShift web console, navigate to Browse > Pods and click on the component of interest. Click on the Metrics tab to see the visualizations.

metrics-donut-charts

Chapter 5. Setting Up SMTP for Cloud App Alerts

5.1. Overview

Each Cloud App can automatically send alerts by e-mail when specified events occur, such as when the Cloud App gets deployed, undeployed, or logs an error. See Alerts & Email Notifications for more information.

For the RHMAP 4.x MBaaS based on OpenShift 3, the e-mail function is not available immediately after installation. You must configure an SMTP server to enable e-mail support.

5.2. Prerequisites

  • An RHMAP 4.x MBaaS running in OpenShift Enterprise 3
  • An account on an SMTP server through which notification alerts can be sent
  • An email address where alerts should be sent
  • A deployed Cloud App

5.3. Configuring SMTP settings in fh-mbaas

Set the FH_EMAIL_SMTP and FH_EMAIL_ALERT_FROM environment variables in the fh-mbaas DeploymentConfig using the following commands:

oc project <mbaas-project-id>
oc env dc/fh-mbaas FH_EMAIL_SMTP="smtps://username:password@localhost" FH_EMAIL_ALERT_FROM="admin@example.com"

After modifying the DeploymentConfig, a redeploy of the fh-mbaas pod should be triggered automatically. Once the pod is running again, you can verify the changes.
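
For example, to confirm that the variables are set on the DeploymentConfig:

oc env dc/fh-mbaas --list | grep FH_EMAIL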

5.4. Verifying SMTP settings

  1. In the Studio, navigate to a deployed Cloud App.
  2. Go to the Notifications > Alerts section.
  3. Click Create An Alert.
  4. In the Emails field, enter your e-mail address.
  5. Click Test Emails.

You should receive an e-mail from the e-mail address set as FH_EMAIL_ALERT_FROM.

5.5. Troubleshooting

If the test email fails to send, verify the SMTP settings in the running fh-mbaas Pod.

oc env pod -l name=fh-mbaas --list | grep EMAIL

It may help to view the fh-mbaas logs while attempting to send an email, looking for any errors related to SMTP or email.

oc logs -f fh-mbaas-<deploy-uuid>

Ensure the Cloud App you are using to send the test email is running. If the test email sends successfully but fails to arrive, check that it has not been placed in your spam or junk folder.

Chapter 6. Backing up an MBaaS

You can back up an MBaaS by following this procedure. After completing the procedure and storing the backup data safely, you can restore an MBaaS to the state at the time of backup.

6.1. Requirements

  • A self-managed MBaaS installation on an OpenShift platform
  • A local installation of the oc binary
  • The oc binary has a logged-in user on the platform you wish to back up
  • The oc binary has a logged-in user with permission to run the oc get pc command

6.2. Expected Storage Requirements

As most of the data being backed up is the data stored in the platform’s Cloud Apps, the amount of backup storage space required is proportional to the amount of data stored by the Cloud Apps.

Other factors that have an impact on how much storage is required for backups include:

  • how often you back up
  • what compression is used
  • the length of time you store the backups
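
To get a rough idea of how much data a full backup will contain, you can check the size of the MongoDB data directory on one of the replicas. The pod name and data path below are examples based on the standard OpenShift MongoDB image:

oc exec mongodb-1-1-4nsrv bash -- -c 'du -sh /var/lib/mongodb/data'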

6.3. What Data is Backed Up

You must back up the following items to back up an MBaaS:

  • MongoDB replica set data
  • Nagios historical data

6.4. Backing up the MongoDB Data

Back up the MongoDB data using the mongodump command in combination with the oc exec command:

oc exec `oc get po --selector='deploymentconfig=mongodb-1' --template="{{(index .items 0).metadata.name}}"` bash -- -c '/opt/rh/rh-mongodb32/root/usr/bin/mongodump -u admin -p ${MONGODB_ADMIN_PASSWORD} --gzip --archive' > ./core7_mongodb.gz

6.5. Backing up the Nagios Data

Back up the Nagios data by copying the files from the Nagios pod using the oc exec command:

oc exec <nagios-pod-name> bash -- -c 'tar -zcf - /var/log/nagios/' > nagios-backup.tar.gz

For example, if the <nagios-pod-name> is nagios-1:

oc exec nagios-1 bash -- -c 'tar -zcf - /var/log/nagios/' > nagios-backup.tar.gz

6.6. Backup Frequency

Red Hat recommends backing up at least once per day, but you might decide to back up critical data more frequently.
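
For example, a daily backup could be scheduled with cron on the backup server by running the example script from the next section. The script path is an assumption; adjust it to wherever you store the script:

# m h dom mon dow  command
0 2 * * * /usr/local/bin/mbaas-backup.sh >> /var/log/mbaas-backup.log 2>&1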

6.7. Example Backup Script

The example script below shows how you can back up an MBaaS from a backup server, assuming that the user on the backup server has permission to execute commands via the oc binary:

ts=$(date +%y-%m-%d-%H-%M-%S)
# Backup the Mongo services
project=mbaas
for pod in $(oc get pods -n $project | grep "mongodb-[0-9]-[0-9]-[0-9a-zA-Z]" | awk '{print $1}'); do
    service=$(echo $pod | cut -f1,2 -d"-");
    oc exec $pod bash -- -c '/opt/rh/rh-mongodb32/root/usr/bin/mongodump -u admin -p ${MONGODB_ADMIN_PASSWORD} --gzip --archive' > $project-$service-$ts.gz
done

# Backup the Nagios service
oc exec -n $project <nagios-pod-name> bash -- -c 'tar -zcf - /var/log/nagios/' > nagios-backup-$ts.tar.gz

Chapter 7. Restoring an MBaaS Backup

7.1. Overview

You can back up an MBaaS by following the Backing up an MBaaS procedure. Once you have created the backup, you can restore an MBaaS to the state at the time of backup.

7.2. Requirements

  • A self-managed MBaaS installation on an OpenShift platform
  • The oc binary has a logged in user with permission to edit deployment configurations and view persistent volumes.
  • A backup of the data for mongodb and Nagios services as described in Backing up an MBaaS .

7.3. What Data is Restored

  • Mongodb replicaset data
  • Nagios historical data

7.4. Restoring Nagios Data

  1. Locate the Nagios persistent volume claim-name from the Nagios pod:

    oc describe pod <nagios-pod-name> | grep "ClaimName:"
    ClaimName: <claim-name>
  2. Determine the volume-name using the claim-name from step 1:

    oc describe pvc <claim-name> | grep "Volume:"
    Volume: <volume-name>
  3. Using the volume-name, run the following command:

    oc describe pv <volume-name>

    The output contains a Source section, similar to:

    Source:
        Type:      	NFS (an NFS mount that lasts the lifetime of a pod)
        Server:    	nfs-server.local
        Path:      	/path/to/nfs/directory
        ReadOnly:  	false

    This information describes where the data for the Nagios persistent volume is located.

  4. Stop the Nagios pod by editing the Nagios deployment config and setting replicas to 0:

    oc edit dc nagios

    Change replicas to 0:

    spec:
      replicas: 0
  5. Restore the Nagios data by deleting the contents of the pod’s current persistent volume and then extracting your backed-up data into that directory. Ensure that after extraction the status.dat and related files are at the root of the persistent volume. A sketch of this step is shown after this procedure.
  6. Once the restore is complete, start the Nagios pod by editing the Nagios deployment config and setting replicas to 1:

    oc edit dc nagios

    Change replicas back to 1:

    spec:
      replicas: 1
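
The following is a sketch of step 5 as it might be performed on the NFS server backing the Nagios persistent volume. The paths are examples: use the Path value found in step 3 and the location of the backup archive created in Backing up an MBaaS:

cd /path/to/nfs/directory                                      # Path from the 'oc describe pv' output
rm -rf ./*                                                     # remove the current contents of the volume
tar -zxf /path/to/nagios-backup.tar.gz --strip-components=3    # places status.dat and related files at the volume root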

7.5. Restoring MongoDB Data

To restore the MongoDB data, the replicaset must be operational. If this is not the case, see the MongoDB doesn’t respond after repeated installation of the MBaaS section of this guide.

Should you encounter issues with MongoDB, see Troubleshooting MongoDB Issues.

  1. Locate the primary MongoDB node. You must restore using the primary member of the MongoDB replica set. You can find it by logging in to any of the MongoDB pods via remote shell:

    oc rsh <mongodb-pod-name>

    Connect to the mongo shell in this pod:

    mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD}

    Check the replicaset status:

    rs.status()

    The node with the status set to PRIMARY is the node to use in the following restoration procedure.

  2. Upload MongoDB data to the primary pod using the oc rsync command:

    oc rsync /path/to/backup/directory/ <primary-mongodb-pod>:/tmp/backup
    Tip

    rsync copies everything in a directory. To save time, put the MongoDB data files into an empty directory before using rsync.

  3. Import data into the MongoDB replica set using the mongorestore tool from inside the MongoDB pod. Log in to the pod:

    oc rsh <mongodb-pod-name>

    Restore the data:

    mongorestore -u admin -p <mongo-admin-password> --gzip --archive=mongodb-backup.gz

    For more information on this tool, see the mongorestore documentation.

7.6. Confirming Restoration

Log in to the Studio and ensure that all the projects, environments and MBaaS targets are correctly restored.

Chapter 8. Post Installation Proxy Set-up

After installing the MBaaS, configure it to use a proxy.

  1. Configure the RHMAP node proxy:

    oc edit configmap node-proxy

    This opens the node-proxy config map in your default editor.

    Edit the following values in the relevant data keys:

    http-proxy: "http://<proxy-host>:<proxy-port>"
    https-proxy: "http://<proxy-host>:<proxy-port>"

    If the proxy requires authentication, include the username and password in the values:

    http-proxy: "http://<username>:<password>@<proxy-host>:<proxy-port>"
    https-proxy: "http://<username>:<password>@<proxy-host>:<proxy-port>"
    Note

    Make sure that you only use the http protocol for the https-proxy setting.

  2. Redeploy all the relevant pods to pick up the new configuration. The services that require a pod redeploy are fh-mbaas, fh-statsd, fh-metrics, and fh-messaging. Use the following command to redeploy them:

    for i in fh-mbaas fh-statsd fh-metrics fh-messaging; do oc deploy ${i} --latest; done

    On OpenShift 3.4, use the following command:

    for i in fh-mbaas fh-statsd fh-metrics fh-messaging; do oc rollout latest ${i}; done

Chapter 9. Troubleshooting the RHMAP MBaaS

9.1. Overview

This document provides information on how to identify, debug, and resolve possible issues that can be encountered during installation and usage of the RHMAP MBaaS on OpenShift 3.

In Common Problems, you can see a list of resolutions for problems that might occur during installation or usage of the MBaaS.

9.2. Check the Health Endpoint of the MBaaS

The first step to check whether an MBaaS is running correctly is to see the output of its health endpoint. The HTTP health endpoint in the MBaaS reports the health status of the MBaaS and of its individual components.

From the command line, run the following command:

curl https://<MBaaS URL>/sys/info/health

To find the MBaaS URL:

  1. Navigate to the MBaaS that you want to check, by selecting Admin, then MBaaS Targets, and then choosing the MBaaS.
  2. Copy the value of the MBaaS URL field from the MBaaS details screen.

If the MBaaS is running correctly without any errors, the output of the command should be similar to the following, showing a "test_status": "ok" for each component:

{
 "status": "ok",
 "summary": "No issues to report. All tests passed without error",
 "details": [
  {
   "description": "Check fh-statsd running",
   "test_status": "ok",
   "result": {
    "id": "fh-statsd",
    "status": "OK",
    "error": null
   },
   "runtime": 6
  },
...
}

If there are any errors, the output will contain error messages in the result.error object of the details array for individual components. Use this information to identify which component is failing and to get information on further steps to resolve the failure.

You can also see an HTTP 503 Service Unavailable error returned from the health endpoint. This can happen in several situations, for example, when one or more MBaaS components are still starting up or are unavailable.

Alternatively, you can see the result of this health check in the Studio. Navigate to the Admin > MBaaS Targets section, select your MBaaS target, and click Check the MBaaS Status.

If there is an error, you are presented with a screen showing the same information as described above. Use the provided links to OpenShift Web Console and the associated MBaaS Project in OpenShift to help with debugging of the issue on the OpenShift side.

OpenShift 3 MBaaS target failure

9.3. Analyze Logs

To see the logging output of individual MBaaS components, you must configure centralized logging in your OpenShift cluster. See Enabling Centralized Logging for a detailed procedure.

The section Identifying Issues in an MBaaS provides guidance on discovering MBaaS failures by searching and filtering its logging output.

9.4. Common Problems

9.4.1. A replica pod of mongodb-service is replaced with a new one

9.4.1.1. Summary

The replica set is susceptible to downtime if the replica set member configuration is not up to date with the actual set of pods. At least two members must be active at any time for an election of a primary member to happen. Without a primary member, the MongoDB service does not perform any read or write operations.

A MongoDB replica may get terminated in several situations:

  • A node hosting a MongoDB replica is terminated or evacuated.
  • A re-deploy is triggered on one of the MongoDB DeploymentConfigs in the project, either manually or by a configuration change.
  • One of the MongoDB deployments is scaled down to zero pods, then scaled back up to one pod.

To learn more about replication in MongoDB, see Replication in the official MongoDB documentation.

9.4.1.2. Fix

The following procedure shows you how to re-configure a MongoDB replica set into a fully operational state. You must synchronize the list of replica set members with the actual set of MongoDB pods in the cluster, and set a primary member of the replica set.

  1. Note the MongoDB endpoints.

    oc get ep | grep mongo

    Make note of the list of endpoints. It is used later to set the replica set members configuration.

    NAME              ENDPOINTS                                      AGE
    mongodb-1         10.1.2.152:27017                               17h
    mongodb-2         10.1.4.136:27017                               17h
    mongodb-3         10.1.5.16:27017                                17h
  2. Log in to the oldest MongoDB replica pod.

    List all the MongoDB replica pods.

    oc get po -l name=mongodb-replica

    In the output, find the pod with the highest value in the AGE field.

    NAME                READY     STATUS    RESTARTS   AGE
    mongodb-1-1-4nsrv   1/1       Running   0          19h
    mongodb-2-1-j4v3x   1/1       Running   0          3h
    mongodb-3-2-7tezv   1/1       Running   0          1h

    In this case, it is mongodb-1-1-4nsrv with an age of 19 hours.

    Log in to the pod using oc rsh.

    oc rsh mongodb-1-1-4nsrv
  3. Open a MongoDB shell on the primary member.

    mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD} --host ${MONGODB_REPLICA_NAME}/localhost
    MongoDB shell version: 2.4.9
    connecting to: rs0/localhost:27017/admin
    [...]
    Welcome to the MongoDB shell.
    For interactive help, type "help".
    For more comprehensive documentation, see
        http://docs.mongodb.org/
    Questions? Try the support group
        http://groups.google.com/group/mongodb-user
    rs0:PRIMARY>
  4. List the configured members.

    Run rs.conf() in the MongoDB shell.

    rs0:PRIMARY> rs.conf()
    {
        "_id" : "rs0",
        "version" : 56239,
        "members" : [
            {
                "_id" : 3,
                "host" : "10.1.0.2:27017"
            },
            {
                "_id" : 4,
                "host" : "10.1.1.2:27017"
            },
            {
                "_id" : 5,
                "host" : "10.1.6.4:27017"
            }
        ]
    }
  5. Ensure all hosts have either PRIMARY or SECONDARY status.

    Run the following command. It may take several seconds to complete.

    rs0:PRIMARY> rs.status().members.forEach(function(member) {print(member.name + ' :: ' + member.stateStr)})
    mongodb-1:27017  :: PRIMARY
    mongodb-2:27017  :: SECONDARY
    mongodb-3:27017  :: SECONDARY
    rs0:PRIMARY>

    There must be exactly one PRIMARY node. All the other nodes must be SECONDARY. If a member is in a STARTUP, STARTUP2, RECOVERING, or UNKNOWN state, try running the above command again in a few minutes. These states may signify that the replica set is still performing a startup, recovery, or other procedure that should eventually result in an operational state.
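
If rs.conf() lists member hosts that no longer match the MongoDB endpoints noted in step 1, the member configuration has to be brought back in line with the running pods. The following is only a sketch, run from the remote shell of the primary pod, with illustrative endpoint addresses; review the MongoDB replica set reconfiguration documentation before applying it to your data:

mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD} <<'EOF'
cfg = rs.conf()
cfg.members[0].host = "10.1.2.152:27017"   // endpoint of mongodb-1 from step 1
cfg.members[1].host = "10.1.4.136:27017"   // endpoint of mongodb-2
cfg.members[2].host = "10.1.5.16:27017"    // endpoint of mongodb-3
rs.reconfig(cfg)                           // add {force: true} as a second argument only if no primary can be elected
EOF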

9.4.1.3. Result

After applying the fix, all three MongoDB pods will be members of the replica set. If one of the three members terminates unexpectedly, the two remaining members are enough to keep the MongoDB service fully operational.

9.4.2. MongoDB doesn’t respond after repeated installation of the MBaaS

9.4.2.1. Summary

Note

The described situation can result from an attempt to create an MBaaS with the same name as a previously deleted one. We suggest you use unique names for every MBaaS installation.

If the mongodb-service is not responding after the installation of the MBaaS, it is possible that some of the MongoDB replica set members failed to start up. This can happen due to a combination of the following factors:

  • The most likely cause of failure in MongoDB startup is the presence of a mongod.lock lock file and journal files in the MongoDB data folder, left over from an improperly terminated MongoDB instance.
  • If a MongoDB pod is terminated, the associated persistent volumes transition to a Released state. When a new MongoDB pod replaces a terminated one, it may get attached to the same persistent volume which was attached to the terminated MongoDB instance, and thus get exposed to the files created by the terminated instance.

9.4.2.2. Fix

Note

SSH access and administrator rights on the OpenShift master and the NFS server are required for the following procedure.

Note

This procedure describes a fix only for persistent volumes backed by NFS. Refer to Configuring Persistent Storage in the official OpenShift documentation for general information on handling other volume types.

The primary indicator of this situation is the mongodb-initiator pod not reaching the Completed status.

Run the following command to see the status of mongodb-initiator:

oc get pod mongodb-initiator
NAME                READY     STATUS      RESTARTS   AGE
mongodb-initiator   1/1       Running     0          5d

If the status is anything other than Completed, the MongoDB replica set has not been created properly. If mongodb-initiator stays in this state too long, it may be a signal that one of the MongoDB pods has failed to start. To confirm whether this is the case, check the logs of mongodb-initiator using the following command:

oc logs mongodb-initiator
=> Waiting for 3 MongoDB endpoints ...mongodb-1
mongodb-2
mongodb-3
=> Waiting for all endpoints to accept connections...

If the above message is the last one in the output, it signifies that some of the MongoDB pods are not responding.

Check the event log by running the following command:

oc get ev

If the output contains a message similar to the following, you should continue with the below procedure to clean up the persistent volumes:

  FailedMount   {kubelet ip-10-0-0-100.example.internal}   Unable to mount volumes for pod "mongodb-1-1-example-mbaas": Mount failed: exit status 32

The following procedure will guide you through the process of deleting contents of existing persistent volumes, creating new persistent volumes, and re-creating persistent volume claims.

  1. Find the NFS paths.

    On the OpenShift master node, execute the following command to find the paths of all persistent volumes associated with an MBaaS. Replace <mbaas-project-name> with the name of the MBaaS project in OpenShift.

    list=$(oc get pv | grep <mbaas-project-name> | awk '{ print $1}');
    for pv in ${list[@]} ; do
      path=$(oc describe pv ${pv} | grep Path: | awk '{print $2}' | tr -d '\r')
      echo ${path}
    done

    Example output:

    /nfs/exp222
    /nfs/exp249
    /nfs/exp255
  2. Delete all contents of the found NFS paths.

    Log in to the NFS server using ssh.

    Execute the following command to list contents of the paths. Replace <NFS paths> with the list of paths from the previous step, separated by spaces.

    for path in <NFS paths> ; do
      echo ${path}
      sudo ls -l ${path}
      echo " "
    done

    Example output:

    /nfs/exp222
    -rw-------. 1001320000 nfsnobody admin.0
    -rw-------. 1001320000 nfsnobody admin.ns
    -rw-------. 1001320000 nfsnobody fh-mbaas.0
    -rw-------. 1001320000 nfsnobody fh-mbaas.ns
    -rw-------. 1001320000 nfsnobody fh-metrics.0
    -rw-------. 1001320000 nfsnobody fh-metrics.ns
    -rw-------. 1001320000 nfsnobody fh-reporting.0
    -rw-------. 1001320000 nfsnobody fh-reporting.ns
    drwxr-xr-x. 1001320000 nfsnobody journal
    -rw-------. 1001320000 nfsnobody local.0
    -rw-------. 1001320000 nfsnobody local.1
    -rw-------. 1001320000 nfsnobody local.ns
    -rwxr-xr-x. 1001320000 nfsnobody mongod.lock
    drwxr-xr-x. 1001320000 nfsnobody _tmp
    
    /nfs/exp249
    drwxr-xr-x. 1001320000 nfsnobody journal
    -rw-------. 1001320000 nfsnobody local.0
    -rw-------. 1001320000 nfsnobody local.ns
    -rwxr-xr-x. 1001320000 nfsnobody mongod.lock
    drwxr-xr-x. 1001320000 nfsnobody _tmp
    
    /nfs/exp255
    drwxr-xr-x. 1001320000 nfsnobody journal
    -rw-------. 1001320000 nfsnobody local.0
    -rw-------. 1001320000 nfsnobody local.ns
    -rwxr-xr-x. 1001320000 nfsnobody mongod.lock
    drwxr-xr-x. 1001320000 nfsnobody _tmp
    Warning

    Make sure to back up all data before proceeding. The following operation may result in irrecoverable loss of data.

    If the listed contents of the paths resemble the output shown above, delete all contents of the found NFS paths. Replace <NFS paths> with the list of paths from step 1, separated by spaces.

    for path in <NFS paths>
    do
      if [ -z ${path+x} ]
      then
        echo "path is unset"
      else
        echo "path is set to '$path'"
        cd ${path} && rm -rf ./*
      fi
    done
  3. Re-create persistent volumes.

    Log in to the OpenShift master node using ssh.

    Navigate to the directory which contains the YAML files that were used to create the persistent volumes in the section Creating NFS-backed PersistentVolumes for the MongoDB replica set members of the guide Provisioning an MBaaS in Red Hat OpenShift Enterprise 3.

    Execute the following command to delete and re-create the persistent volumes. Replace <mbaas-project-name> with the name of the MBaaS project in OpenShift.

    list=$(oc get pv | grep <mbaas-project-name> | awk '{ print $1}');
    for pv in ${list}; do
      oc delete pv ${pv}
      oc create -f ${pv}.yaml
    done

    The persistent volumes are now re-created and in Available state.

    Note

    The re-created persistent volumes will not be used by OpenShift again for the same persistent volume claims. Make sure you have at least three additional persistent volumes in Available state.

  4. Re-create persistent volume claims for MongoDB.

    Create three JSON files, with the following names:

    • mongodb-claim-1.json
    • mongodb-claim-2.json
    • mongodb-claim-3.json

    Copy the following contents into each file. Change the metadata.name value to match the name of the file without the suffix. For example, the contents for the mongodb-claim-1.json file are as follows:

    {
      "kind": "PersistentVolumeClaim",
      "apiVersion": "v1",
      "metadata": {
        "name": "mongodb-claim-1"
      },
      "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {
          "requests": {
            "storage": "50Gi"
          }
        }
      }
    }

    Run the following command to re-create the persistent volume claims.

    for pvc in mongodb-claim-1 mongodb-claim-2 mongodb-claim-3; do
      oc delete pvc ${pvc}
      oc create -f ${pvc}.json
    done
  5. Verify that mongodb-initiator proceeds with initialization.

    Run the following command to see the logs of mongodb-initiator.

    oc logs mongodb-initiator -f

    After mongodb-initiator completes its work, the log output should contain the following message, indicating that the MongoDB replica set was successfully created.

    => Successfully initialized replSet
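
For reference, each <pv-name>.yaml file used in step 3 is a PersistentVolume definition. The following is a sketch only: the volume name, NFS server, path, and reclaim policy shown here are placeholders, and you should reuse the files created during provisioning rather than re-author them.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv-name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    # Placeholder values; use the NFS server and export path from provisioning
    server: <nfs-server-hostname>
    path: <NFS path>
  persistentVolumeReclaimPolicy: Retain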

9.4.2.3. Result

The MongoDB service is fully operational with all three replicas attached to their persistent volumes. The persistent volumes left in Released state from the previous installation are now in the Available state, ready for use by other persistent volume claims.
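
To verify, list the persistent volumes and check the STATUS column: the volumes now claimed by MongoDB report Bound, and the re-created volumes report Available.

oc get pv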

9.4.3. MongoDB replica set stops replicating correctly

9.4.3.1. Summary

If some of the MBaaS components start to crash, this may be because they cannot connect to a primary member of the MongoDB replica set. This usually indicates that the replica set configuration has become inconsistent, which can happen if a majority of the member pods are replaced and receive new IP addresses. In this case, data cannot be written to or read from the MongoDB replica set in the MBaaS project.

To verify the replica set state as seen by each member, run the following command in the shell of a user logged in to OpenShift with access to the MBaaS project:

# Print the replica set status as reported from inside each MongoDB pod
for i in `oc get po -a | grep -e "mongodb-[0-9]\+"  | awk '{print $1}'`; do
  echo "## ${i} ##"
  # Pipe the mongo shell command into a remote bash session in the pod
  echo mongo admin -u admin -p \${MONGODB_ADMIN_PASSWORD} --eval "printjson\(rs.status\(\)\)" |  oc rsh --shell='/bin/bash' $i
done

For a fully consistent replica set, the output for each member contains a members object listing details about each member. If the output instead resembles the following, with the "ok" : 0 value for some members, proceed to the fix to make the replica set consistent.

## mongodb-1-1-8syid ##
MongoDB shell version: 2.4.9
connecting to: admin
{
    "startupStatus" : 1,
    "ok" : 0,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}
## mongodb-2-1-m6ao1 ##
MongoDB shell version: 2.4.9
connecting to: admin
{
    "startupStatus" : 1,
    "ok" : 0,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}
## mongodb-3-2-e0a11 ##
MongoDB shell version: 2.4.9
connecting to: admin
{
    "startupStatus" : 1,
    "ok" : 0,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}
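
To inspect a single member interactively instead of looping over all pods, you can open a remote shell in the pod and run the same mongo command there. This is a sketch; the pod name is an example taken from the output above.

# Open a remote shell in one MongoDB pod (replace the pod name with your own)
oc rsh mongodb-1-1-8syid

# Inside the pod, print the replica set status using the admin credentials
# exposed to the pod as the MONGODB_ADMIN_PASSWORD environment variable
mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD} --eval "printjson(rs.status())"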

9.4.3.2. Fix

You can make the replica set consistent by forcing a redeployment of the affected members.

  1. Note the MongoDB pods that are in an error state.

    oc get po

    Example:

    NAME                READY     STATUS    RESTARTS   AGE
    mongodb-1-1-pu0fz   1/1       Error     0          1h
  2. Force a new deployment of this pod. If more than one pod is affected, see the sketch after this procedure.

    oc deploy mongodb-1 --latest
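
If more than one member pod is in an error state, the same command can be run for each MongoDB deployment configuration in turn. The following sketch assumes the default three deployment configurations shown later in this chapter (mongodb-1, mongodb-2, and mongodb-3):

# Force a new deployment of each MongoDB replica set member
for dc in mongodb-1 mongodb-2 mongodb-3; do
  oc deploy ${dc} --latest
done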

9.4.3.3. Result

The replica set resumes replicating correctly, and the dependent MBaaS components start working again.
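
To confirm the recovery, re-run the status check from the summary above. For a healthy replica set, each member returns output resembling the following sketch, with "ok" : 1 and a members array describing every member (the set name and member details depend on your deployment):

{
    "set" : "<replica-set-name>",
    "members" : [
        ...
    ],
    "ok" : 1
}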

9.4.4. An MBaaS component fails to start because no suitable nodes are found

9.4.4.1. Summary

If some of the MBaaS components do not start up after the installation, the OpenShift scheduler may have failed to find suitable nodes on which to schedule the pods of those components. This means that the OpenShift cluster does not contain all the nodes required by the MBaaS OpenShift template, or that the existing nodes do not satisfy the requirements on system resources, node labels, and other parameters.

Read more about the OpenShift Scheduler in the OpenShift documentation.

To verify that this is the problem, run the following command to list the event log:

oc get ev

If the output contains one of the following messages, you are most likely facing this problem: the nodes in your OpenShift cluster do not fulfill some of the requirements.

  • Failed for reason MatchNodeSelector and possibly others
  • Failed for reason PodExceedsFreeCPU and possibly others
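
To narrow the event log down to these scheduling failures, you can filter the output for the reasons listed above, for example:

# Show only events related to the scheduling failures described above
oc get ev | grep -E 'MatchNodeSelector|PodExceedsFreeCPU'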

9.4.4.2. Fix

To fix this problem, configure nodes in your OpenShift cluster to match the requirements of the MBaaS OpenShift template.

  • Apply correct labels to nodes.

    Refer to Apply Node Labels in the guide Provisioning an MBaaS in Red Hat OpenShift Enterprise 3 for details on which labels must be applied to nodes; a generic example of the labeling command is shown after this list.

  • Make sure the OpenShift cluster has sufficient resources for the MBaaS components, Cloud Apps, and Cloud Services it runs.

    Configure the machines used as OpenShift nodes to have more CPU power and internal memory available, or add more nodes to the cluster. Refer to the guide on Overcommitting and Compute Resources in the OpenShift documentation for more information on how containers use system resources.

  • Clean up the OpenShift instance.

    Delete unused projects from the OpenShift instance.
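
As a generic illustration of node labeling, labels are applied with the oc label command. The node name and the label key and value below are placeholders; use the exact labels required by the Apply Node Labels section of the provisioning guide.

# Apply a label to a node (the key and value shown are placeholders)
oc label node <node-name> <label-key>=<label-value>

# List nodes together with their labels to verify the result
oc get nodes --show-labels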

Alternatively, you can correct the problem from the other side by changing the deployment configurations in the MBaaS OpenShift template to match the setup of your OpenShift cluster.

Warning

Changing the deployment configurations may negatively impact the performance and reliability of the MBaaS. Therefore, this is not a recommended approach.

To list all deployment configurations, run the following command:

oc get dc
NAME           TRIGGERS       LATEST
fh-mbaas       ConfigChange   1
fh-messaging   ConfigChange   1
fh-metrics     ConfigChange   1
fh-statsd      ConfigChange   1
mongodb-1      ConfigChange   1
mongodb-2      ConfigChange   1
mongodb-3      ConfigChange   1

To edit a deployment configuration, use the oc edit dc <deployment> command. For example, to edit the configuration of the fh-mbaas deployment, run the following command:

oc edit dc fh-mbaas

You can modify system resource requirements in the resources sections.

...
resources:
  limits:
    cpu: 800m
    memory: 800Mi
  requests:
    cpu: 200m
    memory: 200Mi
...

Changing a deployment configuration triggers a deployment operation.
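
If you prefer to make the change non-interactively, the same resources values can be set with oc patch. The following is a sketch only; the container name fh-mbaas and the resource values are assumptions, so adjust them to match your deployment configuration.

# Patch resource limits and requests of the fh-mbaas container
# (container name and values are illustrative; verify them with 'oc edit dc fh-mbaas')
oc patch dc fh-mbaas -p '{"spec":{"template":{"spec":{"containers":[{"name":"fh-mbaas","resources":{"limits":{"cpu":"800m","memory":"800Mi"},"requests":{"cpu":"200m","memory":"200Mi"}}}]}}}}'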

9.4.4.3. Result

If you changed the setup of nodes in the OpenShift cluster to match the requirements of the MBaaS OpenShift template, the MBaaS is now fully operational without any limitation to quality of service.

If you changed the deployment configuration of any MBaaS component, the cluster should now be fully operational, with a potential limitation to quality of service.

9.4.4.4. Result

You can now deploy your app to the RHMAP MBaaS.

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.