MBaaS Administration and Installation Guide

Red Hat Mobile Application Platform 4.1

For Red Hat Mobile Application Platform 4.1

Red Hat Customer Content Services

Abstract

This document provides guides related to installation and administration of the RHMAP 4.x MBaaS on OpenShift 3.

Chapter 1. RHMAP 4.x MBaaS

1.1. Overview

Red Hat Mobile Application Platform (RHMAP) 4.0 has a hybrid deployment model — the Core, the MBaaS, and the Build Farm are deployed in different locations.

  • Development and management of apps occurs in the multi-tenant cloud instance of the RHMAP Core hosted by Red Hat.
  • Application data, runtime, and integrations are deployed to the RHMAP MBaaS installed in a private or public instance of OpenShift Enterprise 3.
  • The Build Farm is deployed separately from the Core and the MBaaS and is shared between all instances of RHMAP. Third-party Linux, Windows, and Apple server hosting providers are used to support building client app binaries for all platforms.

The Mobile Backend-as-a-Service (MBaaS) is a core component of RHMAP – the back-end platform hosting containerized cloud applications in conjunction with database storage (MongoDB). The cloud applications deployed in an MBaaS can make use of RHMAP APIs, such as data synchronization, caching, or push notifications, and integrate with enterprise systems or other cloud services.

1.2. Architecture of the MBaaS

The RHMAP MBaaS 4.0 is built on top of several technologies, including OpenShift Enterprise 3, Kubernetes, Docker, and Red Hat Software Collections. The MBaaS consists of several components, each running in its own Docker container. Similarly, every cloud app deployed to the MBaaS runs in a Docker container. Those containers are deployed and orchestrated by Kubernetes.

MBaaS deployment diagram

In the MBaaS, users can configure multiple isolated runtime and storage environments to support software development life-cycle stages, such as development, testing, and production. Each environment can host multiple cloud apps.

1.3. Security considerations

Since the MBaaS is not hosted in Red Hat’s public multi-tenant cloud, the data transmitted between the mobile device and the cloud app does not pass through any servers operated by Red Hat or any other third party. Private data from back-end systems is transmitted directly between mobile devices and the MBaaS.

The following data still resides in the RHMAP Core:

  • User names and passwords of RHMAP accounts
  • Master database of the Core, with entries for projects, apps, and their IDs
  • Git repositories hosting the source code of client and cloud apps
  • App store containing the built binaries of client apps

Chapter 2. Installing the MBaaS

This guide provides a detailed overview of the steps required to get from purchasing a Red Hat Mobile Application Platform (RHMAP) 4.x subscription to a working installation of an MBaaS connected to your RHMAP domain.

2.1. Installation Steps

After Red Hat receives the purchase order for a Red Hat Mobile Application Platform (RHMAP) 4.x subscription, a member of the sales team internally requests a new RHMAP domain for access to an instance of the RHMAP Core hosted by Red Hat.

Once the domain is created, a representative of the Red Hat Customer Enablement team will instruct you to install the MBaaS, which involves installation of Red Hat Enterprise Linux (RHEL), installation of OpenShift Enterprise, provisioning of the MBaaS in the OpenShift cluster, and installation of other optional components.

2.1.1. Getting Access to RHMAP Core

The following steps for getting access to RHMAP Core are performed by a representative of the Red Hat Customer Enablement team:

  1. Create a domain.

    The domain, such as customername.redhatmobile.com, hosts the RHMAP Core for a single customer.

  2. Create an administrator account.

    An RHMAP administrator account is created in the domain, and the customer’s technical contact receives an activation e-mail which allows access to the domain using the new account.

2.1.2. Preparing Nodes for MBaaS Installation

The following steps are covered in the Preparing Nodes for MBaaS Installation guide.

  1. Install RHEL on each cluster node as per the RHEL Installation Guide. You can download RHEL from the Red Hat Customer Portal.
  2. Register each node with Red Hat Subscription Manager (RHSM).

    Follow step 2 in the Preparing Nodes for MBaaS Installation guide.

  3. Install OpenShift as per the OpenShift Installation and Configuration guide.

    Follow the considerations for infrastructure sizing to determine how many nodes to configure in your OpenShift cluster.

  4. Enable Docker on each node to access container images of RHMAP components hosted in the Red Hat Docker registry.

    Follow step 4 in the Preparing Nodes for MBaaS Installation guide.

2.1.3. Installing the MBaaS

After the nodes in the cluster have RHEL installed, OpenShift installed, and are registered with RHSM, you can proceed to the main installation step.

  1. Follow the Provisioning an RHMAP 4.x MBaaS in OpenShift 3 guide.

    As part of the provisioning process, OpenShift automatically downloads RHMAP container images from the Red Hat Docker registry.

  2. Adjust system resource usage of MBaaS components.

    If the MBaaS components are deployed on dedicated nodes in your cluster (separate from cloud apps), we strongly recommend that you adjust the resource limits of MBaaS components to take full advantage of the available system resources.

    Follow the Adjusting System Resource Usage of the MBaaS and Cloud Apps guide for detailed steps.

2.1.4. Installing Additional Features

Chapter 3. Preparing Nodes for MBaaS Installation

3.1. Overview

Before installation of the MBaaS, you must first register each node in the cluster with the Red Hat Subscription Manager (RHSM), and install OpenShift 3.

The registration enables OpenShift to access the Docker container images of RHMAP components hosted in the Red Hat Docker registry.

3.2. Prerequisites

  • Access to an RHMAP domain and an RHMAP administrator account. See Installing the MBaaS for more information.

3.3. Procedure

  1. Install RHEL.

    Install RHEL on each machine that will serve as a node in the OpenShift cluster backing the MBaaS. Follow the RHEL Installation Guide.

  2. Register all cluster nodes using RHSM and attach the nodes to the RHMAP subscription.

    Perform the following procedure for each node in the cluster.

    1. Register the node with RHSM.

      Replace <username> and <password> with the user name and password for your Red Hat account.

      sudo subscription-manager register --username=<username> --password=<password>
      Registering to: subscription.rhn.redhat.com:443/subscription
      The system has been registered with ID: abcdef12-3456-7890-1234-56789012abcd
    2. List the available subscriptions.

      sudo subscription-manager list --available
    3. Find the pool ID for an RHMAP subscription and attach it.

      sudo subscription-manager attach --pool=<pool_id>

      You will see output similar to the following:

      Successfully attached a subscription for: Red Hat Mobile Application Platform
  3. Install OpenShift.

    See the Installation and Configuration guide in the OpenShift documentation for detailed installation procedure.

    Follow the considerations for infrastructure sizing to determine how many nodes to configure in your OpenShift cluster.

    Note

    In the OpenShift Installation and Configuration guide:

    • Skip steps 1, 2 and 3 in section 2.2.4.1. Software Prerequisites – Registering the Hosts, which describe the same registration process that is already covered in this guide in step 1.
    • Choose the default, RPM-based installation method. See section 2.3. RPM vs. Containerized for more details.
  4. Enable Docker to access container images of RHMAP components.

    Perform the following procedure for each node in the cluster.

    1. Enable the rhel-7-server-optional-rpms repository.

      sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
      Repository 'rhel-7-server-optional-rpms' is enabled for this system.
    2. Install the RHSM plugin subscription-manager-plugin-container.

      sudo yum install subscription-manager-plugin-container
    3. Run rhsmcertd-worker to refresh the local certificate store.

      rhsmcertd-worker must be run as the superuser, otherwise it may fail without any warning.

      sudo /usr/libexec/rhsmcertd-worker

      To verify that the certificates were downloaded, check the contents of the /etc/docker/certs.d/ directory.

      ls -l /etc/docker/certs.d/ | grep access.redhat.com

      /etc/docker/certs.d/ now contains directories access.redhat.com and registry.access.redhat.com.

      drwxr-xr-x. 2 root root   67 Jun 01 10:30 access.redhat.com
      drwxr-xr-x. 2 root root   67 Jun 01 10:30 registry.access.redhat.com

After registering each node with RHSM, downloading the entitlement certificates, and installing OpenShift, you can proceed to installation of the MBaaS.
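
Optionally, before proceeding, you can run a quick check on each node to confirm that the subscription is active and that the registry certificates described above are in place. This is a minimal sketch using only commands already covered in this guide:

sudo subscription-manager status
ls /etc/docker/certs.d/registry.access.redhat.com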

3.4. Next Steps

Chapter 4. Provisioning an RHMAP 4.x MBaaS in OpenShift 3

4.1. Overview

An OpenShift 3 cluster can serve as an MBaaS target and host your Cloud Apps and Cloud Services. This guide provides detailed steps to deploy the RHMAP 4.x MBaaS on an OpenShift 3 cluster.

You can choose a simple automated installation to preview and test the MBaaS, or follow the manual installation steps for a fully supported production-ready MBaaS:

  • Automatic Installation

    You can quickly try the RHMAP 4.x MBaaS by choosing the automatic installation.

    The following limitations apply to the automatically installed MBaaS:

    • not suitable for production use
    • single replica for each MBaaS component
    • single MongoDB replica with no persistent storage
  • Manual Installation

    For production use, follow the manual installation procedure, which results in an MBaaS with the following characteristics:

    • suitable for production use
    • three replicas defined for each MBaaS component (with the exception of fh-statsd)
    • three MongoDB replicas with a 50GB persistent storage requirement each

      • nodeSelectors of mbaas_id=mbaas1, mbaas_id=mbaas2, and mbaas_id=mbaas3 for the MongoDB replicas

4.2. Prerequisites

This guide assumes several prerequisites are met before the installation:

  • All nodes in the cluster must be registered with the Red Hat Subscription Manager and have RHMAP entitlement certificates downloaded. See Preparing Nodes for MBaaS Installation for detailed steps.
  • An existing OpenShift Enterprise installation, version 3.2
  • The OpenShift master and router must be accessible from the RHMAP Core.
  • A wildcard DNS entry must be configured for the OpenShift router IP address.
  • A dedicated OpenShift user for MBaaS administration. This guide refers to this user as the MBaaS administrator. The user must have the authorization to:

    • create projects;
    • create any objects associated with a project (for example, Services, DeploymentConfigs, Routes).
  • An OpenShift user with the admin role is needed for the manual installation. For example, the default system:admin user. This guide refers to this user as the OpenShift administrator.

For information on installation and management of an OpenShift cluster and its users, see the official OpenShift documentation.

4.3. Automatic Installation

The automatic installation of an MBaaS in OpenShift 3 results in the MBaaS components being installed on nodes chosen by the OpenShift scheduler. Only a single instance of each component runs at any time, which makes the MBaaS susceptible to downtime if a single node fails. The data of the MongoDB database is not backed by any persistent storage and is therefore transient.

Warning

The automatic installation procedure must not be used in production environments. You should only use this procedure for evaluation purposes, because the provided template is not optimized for the resiliency and stability required in production environments. Follow the manual installation steps for production use.

There are no setup steps required before the automatic installation. Refer to Creating an MBaaS Target to continue the installation.

Note

In order for automatic MBaaS installation to work, the OpenShift SDN must be configured to use the ovs-subnet SDN plugin (this is the default). If it is not set to this, refer to Network Configuration.

4.4. Manual Installation

The manual installation of an MBaaS in OpenShift 3 results in a resilient three-node cluster, with the MBaaS components spread across all three nodes, a MongoDB replica set spread over the three nodes, and the MongoDB data backed by persistent volumes.

The installation consists of several phases. Before the installation, you must prepare your OpenShift cluster:

  • Set up persistent storage - you need to create Persistent Volumes with specific parameters in OpenShift.
  • Label the nodes - nodes need to be labeled in a specific way, to match the node selectors expected by the OpenShift template of the MBaaS.
  • Network Configuration - configuring the SDN network plugin used in OpenShift so that Cloud Apps can communicate with MongoDB in the MBaaS.

After the OpenShift cluster is properly configured, proceed to Section 4.4.2, “Installing the MBaaS”.

4.4.1. Before The Installation

The manual installation procedure poses certain requirements on your OpenShift cluster in order to guarantee fault tolerance and stability.

4.4.1.1. Network Configuration

Cloud Apps in an MBaaS communicate directly with a MongoDB replica set. In order for this to work, the OpenShift SDN must be configured to use the ovs-subnet SDN plugin. For more detailed information on configuring this, see Migrating Between SDN Plug-ins in the OpenShift Enterprise documentation.
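
One way to check which plugin is currently configured, assuming the default location of the master configuration file, is to inspect the networkPluginName value on the master node. For the ovs-subnet plugin, the value is redhat/openshift-ovs-subnet:

grep networkPluginName /etc/origin/master/master-config.yaml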

4.4.1.1.1. Making Project Networks Global

Alternatively, if you cannot use the ovs-subnet SDN plugin, you will need to make the network of the MBaaS project global after installation. For details on how to do this with your MBaaS project, see Making Project Networks Global in the OpenShift Enterprise documentation.

4.4.1.2. Persistent Storage Setup

An MBaaS running on OpenShift 3 contains a MongoDB replica set. The replica set members use persistent storage for the directory where the database data is stored.

The OpenShift template that is used to create the project and resources for the MBaaS requires:

  • At least one PersistentVolume (PV) resource in an Available state for each of the three MongoDB replica members.
  • Each PersistentVolume has at least 50GB of space.

For detailed information on PersistentVolumes and how to create them, see Persistent Storage in the OpenShift Enterprise documentation.
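
As an illustration only, a minimal PersistentVolume definition meeting these requirements might look like the following sketch, assuming NFS-backed storage. The object name, NFS server, and export path are placeholders; substitute values appropriate for your environment and repeat for each of the three volumes:

# Create one of the three 50Gi PersistentVolumes the MBaaS template expects (NFS assumed)
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mbaas-mongodb-pv-1
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: nfs.example.com
    path: /exports/mbaas-mongodb-1
EOF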

4.4.1.3. Apply Node Labels

By applying labels to OpenShift nodes, you can control which nodes the MBaaS components, MongoDB replicas, and cloud apps will be deployed to.

This section describes the labeling considerations for MBaaS components and for MongoDB replicas.

Cloud apps get deployed to nodes labeled with the default nodeSelector, which is usually set to type=compute (defined in the OpenShift master configuration).

You can skip this entire labeling section if your OpenShift cluster only has a single schedulable node. In that case, all MBaaS components, MongoDB replicas, and cloud apps necessarily run on that single node.

4.4.1.3.1. Labelling for MBaaS components

It is recommended, but not required, to deploy the MBaaS components to dedicated nodes, separate from other applications (such as RHMAP cloud apps).

Refer to Infrastructure Sizing Considerations for Installation of RHMAP MBaaS for the recommended number of MBaaS nodes and cloud app nodes for your configuration.

For example, if you have 12 nodes, the recommendation is:

  • Dedicate three nodes to MBaaS and MongoDB.
  • Dedicate three nodes to cloud apps.

To achieve this, apply a label, such as type=mbaas, to the three dedicated MBaaS nodes.

oc label node mbaas-1 type=mbaas
oc label node mbaas-2 type=mbaas
oc label node mbaas-3 type=mbaas

Then, when creating the MBaaS project, as described later in Section 4.4.2, “Installing the MBaaS”, set this label as the nodeSelector.

You can check what type labels are applied to all nodes with the following command:

oc get nodes -L type
NAME          STATUS                     AGE       TYPE
ose-master    Ready,SchedulingDisabled   27d       master
infra-1       Ready                      27d       infra
infra-2       Ready                      27d       infra
app-1         Ready                      27d       compute
app-2         Ready                      27d       compute
app-3         Ready                      27d       compute
mbaas-1       Ready                      27d       mbaas
mbaas-2       Ready                      27d       mbaas
mbaas-3       Ready                      27d       mbaas

In this example, the deployment would be as follows:

  • Cloud apps get deployed to the three dedicated cloud app nodes app-1, app-2, and app-3.
  • The MBaaS components get deployed to the three dedicated MBaaS nodes mbaas-1, mbaas-2, and mbaas-3 (if the nodeSelector is also set on the MBaaS Project).

4.4.1.3.2. Labelling for MongoDB replicas

In the production MBaaS template, the MongoDB replicas are spread over three MBaaS nodes. If you have more than three MBaaS nodes, any three of them can host the MongoDB replicas.

To apply the required labels (assuming the three nodes are named mbaas-1, mbaas-2, and mbaas-3):

oc label node mbaas-1 mbaas_id=mbaas1
oc label node mbaas-2 mbaas_id=mbaas2
oc label node mbaas-3 mbaas_id=mbaas3

You can verify the labels were applied correctly by running this command:

oc get nodes -L mbaas_id
NAME          STATUS                     AGE       MBAAS_ID
10.10.0.102   Ready                      27d       <none>
10.10.0.117   Ready                      27d       <none>
10.10.0.141   Ready                      27d       <none>
10.10.0.157   Ready                      27d       mbaas3
10.10.0.19    Ready,SchedulingDisabled   27d       <none>
10.10.0.28    Ready                      27d       mbaas1
10.10.0.33    Ready                      27d       <none>
10.10.0.4     Ready                      27d       <none>
10.10.0.99    Ready                      27d       mbaas2

See Updating Labels on Nodes in the OpenShift documentation for more information on how to apply labels to nodes.

4.4.1.3.2.1. Why are MongoDB replicas spread over multiple nodes?

Each MongoDB replica is scheduled to a different node to support failover.

For example, if all three MongoDB replicas were scheduled on a single OpenShift node and that node failed, the data would be completely inaccessible. Setting a different nodeSelector for each MongoDB DeploymentConfig, with a corresponding labeled OpenShift node in the cluster, ensures that the MongoDB pods get scheduled to different nodes.

In the production MBaaS template, there is a different nodeSelector for each MongoDB DeploymentConfig:

  • mbaas_id=mbaas1 for mongodb-1
  • mbaas_id=mbaas2 for mongodb-2
  • mbaas_id=mbaas3 for mongodb-3

Excerpt of DeploymentConfig of mongodb-1

{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": {
    "name": "mongodb-1",
    "labels": {
      "name": "mongodb"
    }
  },
  "spec": {
    ...
    "template": {
      ...
      "spec": {
        "nodeSelector": {
          "mbaas_id": "mbaas1"
        }

4.4.2. Installing the MBaaS

In this step, you will provision the MBaaS to the OpenShift cluster from the command line, based on the MBaaS OpenShift template.

First, download the latest version of the MBaaS OpenShift template.

  1. In the Studio, navigate to the Admin > MBaaS Targets section. Click New MBaaS Target.
  2. Choose OpenShift 3 as Type.
  3. At the bottom of the page, click Download Template and save the template file fh-mbaas-template-3node.json. You may now close the New MBaaS Target screen.

Using the downloaded template, provision the MBaaS in the OpenShift cluster from the command line. For general information about the OpenShift CLI, see CLI Operations in the OpenShift Enterprise documentation.

  1. Create a new project.

    Log in as the MBaaS administrator. You will be prompted for credentials.

    oc login <public URL of the OpenShift master>

    Create the project:

    Warning

    The name of the OpenShift project chosen here must have the suffix -mbaas. The part of the name before -mbaas is used later in this guide as the ID of the MBaaS target associated with this OpenShift project. For example, if the ID of the MBaaS target is live, the OpenShift project name set here must be live-mbaas.

    oc new-project live-mbaas
  2. Set the node selector of the project to target MBaaS nodes.

    This ensures that all MBaaS components are deployed to the dedicated MBaaS nodes.

    Note

    If you’ve chosen not to have dedicated MBaaS nodes in Section 4.4.1.3.1, “Labelling for MBaaS components”, skip this step.

    Log in as the OpenShift administrator. You will be prompted for credentials.

    oc login <public URL of the OpenShift master>

    Set the openshift.io/node-selector annotation to type=mbaas in the project’s namespace:

    Note

    You may need to add this annotation if it is missing.

    oc edit ns live-mbaas
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        openshift.io/node-selector: type=mbaas
    ...

    Log back in as the MBaaS administrator if you had to switch users for the above steps. You will be prompted for credentials.

    oc login <public URL of the OpenShift master>
  3. Start the installation.

    Create all the MBaaS resources from the template.

    oc new-app -f fh-mbaas-template-3node.json

    After all the resources are created, you should see output similar to the following:

    --> Deploying template fh-mbaas for "fh-mbaas"
         With parameters:
          MONGODB_FHMBAAS_USER=u-mbaas
          ...
    
    --> Creating resources ...
        Service "fh-mbaas-service" created
        Service "fh-messaging-service" created
        Service "fh-metrics-service" created
        Service "fh-statsd-service" created
        Service "mongodb-1" created
        Service "mongodb-2" created
        Service "mongodb-3" created
        DeploymentConfig "fh-mbaas" created
        DeploymentConfig "fh-messaging" created
        DeploymentConfig "fh-metrics" created
        DeploymentConfig "fh-statsd" created
        PersistentVolumeClaim "mongodb-claim-1" created
        PersistentVolumeClaim "mongodb-claim-2" created
        PersistentVolumeClaim "mongodb-claim-3" created
        DeploymentConfig "mongodb-1" created
        DeploymentConfig "mongodb-2" created
        DeploymentConfig "mongodb-3" created
        Pod "mongodb-initiator" created
        Route "mbaas" created
    --> Success
        Run 'oc status' to view your app.

    It may take a minute for all the resources to get created and up to 10 minutes for all the components to get to a Running status.

4.4.3. Verifying The Installation

  1. Verify the Services.

    Each MBaaS component defines a Service, which load-balances and proxies traffic to the underlying pods.

    To verify that all the services of the MBaaS have been created, enter the following command:

    oc get svc

    The output should look similar to the following:

    NAME                   CLUSTER_IP       EXTERNAL_IP   PORT(S)             SELECTOR                 AGE
    fh-mbaas-service       172.30.208.89    <none>        8080/TCP            name=fh-mbaas            1m
    fh-messaging-service   172.30.194.171   <none>        8080/TCP            name=fh-messaging        1m
    fh-metrics-service     172.30.65.222    <none>        8080/TCP            name=fh-metrics          1m
    fh-statsd-service      172.30.161.128   <none>        8080/TCP,8081/UDP   name=fh-statsd           1m
    mongodb-1              None             <none>        27017/TCP           name=mongodb-replica-1   1m
    mongodb-2              None             <none>        27017/TCP           name=mongodb-replica-2   1m
    mongodb-3              None             <none>        27017/TCP           name=mongodb-replica-3   1m

    Verify that the output contains a service for all MBaaS components and a service for each MongoDB replica.

    Each service forwards traffic to one or more pods. During the MBaaS creation, some intermediate pods get created, which are responsible for deploying or setting up other pods. These intermediate pods have a -deploy or -build suffix. If these intermediate pods fail, the installation cannot proceed. You can view the logs of the intermediate pods to identify the cause of the failure.

  2. Verify that the MongoDB replica set is configured correctly.

    1. Verify that the status of the mongodb-initiator pod is Completed.

      oc get pod mongodb-initiator
      NAME                READY     STATUS      RESTARTS   AGE
      mongodb-initiator   0/1       Completed   0          5d
    2. Verify that each MongoDB replica has all replica set members configured.

      Enter the following command to list the replica set members of each MongoDB replica:

      for j in `(for i in 1 2 3;do oc get po -l deploymentconfig=mongodb-$i -o name;done) | sed -e 's|pod/||'`; do echo "## $j ##" && echo mongo admin -u admin -p \${MONGODB_ADMIN_PASSWORD} --eval "printjson\(rs.conf\(\).members\)" | oc rsh --shell='/bin/bash' $j; done

      Each replica should have the same three members listed in its array of replica set members — mongodb-1, mongodb-2, and mongodb-3.

      If there are not exactly three members in the array of replica set members of each node, refer to the Common Problems section of the troubleshooting guide for help.

  3. Verify the pods.

    To verify that all the pods are running, enter the following command:

    oc get pods

    The output should look similar to the following:

    NAME                      READY     STATUS    RESTARTS   AGE
    fh-mbaas-1-h511r          1/1       Running   0          53m
    fh-mbaas-1-hg5ub          1/1       Running   0          54m
    fh-mbaas-1-uwpl6          1/1       Running   0          53m
    fh-messaging-1-3wxap      1/1       Running   0          53m
    fh-messaging-1-j5asf      1/1       Running   0          53m
    fh-messaging-1-yh8hn      1/1       Running   0          54m
    fh-metrics-1-f5ems        1/1       Running   0          53m
    fh-metrics-1-faihq        1/1       Running   0          53m
    fh-metrics-1-vleqs        1/1       Running   0          54m
    fh-statsd-1-36hw0         1/1       Running   0          54m
    mongodb-1-1-12l8b         1/1       Running   0          54m
    mongodb-2-1-hwmzx         1/1       Running   0          54m
    mongodb-3-1-bl12r         1/1       Running   0          54m
    mongodb-initiator         0/1       Completed 0          54m

    Verify that all Pods are in a Running state, with the exception of the mongodb-initiator, which should be in a Completed state. If any Pod is in a different state, it may require more time, or may have an issue starting up. A stream of events for the namespace, including any issues with scheduling and creating pods, pulling images, and any other potential issues, can be viewed using oc get events -w.

  4. Verify the Route

    To verify the MBaaS route is exposed, enter the following command:

    oc get routes mbaas

    The output should look similar to the following:

    NAME      HOST/PORT                PATH      SERVICE            LABELS    INSECURE POLICY   TLS TERMINATION
    mbaas     live-mbaas.example.com             fh-mbaas-service             Allow             edge
  5. Ping the health endpoint.

    If all services are created, all pods are running, and the route is exposed, the MBaaS health endpoint can be queried as follows:

    curl `oc get route mbaas --template "{{.spec.host}}"`/sys/info/health

    The endpoint responds with health information about the various MBaaS components and their dependencies. If there are no errors reported, the MBaaS is ready to be configured for use in the Studio. Successful output will resemble the following:

    {
      "status": "ok",
      "summary": "No issues to report. All tests passed without error",
      "details": [
        {
          "description": "Check Mongodb connection",
          "test_status": "ok",
          "result": {
            "id": "mongodb",
            "status": "OK",
            "error": null
          },
          "runtime": 33
        },
        {
          "description": "Check fh-messaging running",
          "test_status": "ok",
          "result": {
            "id": "fh-messaging",
            "status": "OK",
            "error": null
          },
          "runtime": 64
        },
        {
          "description": "Check fh-metrics running",
          "test_status": "ok",
          "result": {
            "id": "fh-metrics",
            "status": "OK",
            "error": null
          },
          "runtime": 201
        },
        {
          "description": "Check fh-statsd running",
          "test_status": "ok",
          "result": {
            "id": "fh-statsd",
            "status": "OK",
            "error": null
          },
          "runtime": 7020
        }
      ]
    }

After verifying that the MBaaS is installed correctly, you must create an MBaaS target for the new MBaaS in the Studio.

4.5. Creating an MBaaS Target

  1. In the Studio, navigate to the Admin > MBaaS Targets section. Click New MBaaS Target.
  2. As Type, choose OpenShift 3.
  3. Fill in the following information:

    • MBaaS Id - a unique ID for the MBaaS, for example, live. The ID must be equal to the OpenShift project name chosen in the Installing the MBaaS section, without the -mbaas suffix.
    • OpenShift Master URL - the URL of the OpenShift master, for example, https://master.openshift.example.com:8443
    • OpenShift Username, OpenShift Password - username and password of an OpenShift user who can create projects and any objects associated with a project (for example, services, DeploymentConfigs, Routes). For the manual installation, enter the username and password of the dedicated MBaaS administrator user.
    • OpenShift Router DNS - a wildcard DNS entry of the OpenShift router, for example, *.cloudapps.example.com

      If you’ve chosen the manual installation procedure, uncheck Automatic MBaaS Installation and fill in these two additional fields:

    • MBaaS Service Key

      Equivalent to the value of the FHMBAAS_KEY environment variable, which is automatically generated during installation. To find out this value, enter the following command:

      oc env dc/fh-mbaas --list | grep FHMBAAS_KEY

      Alternatively, you can find the value in the OpenShift Console, in the Details tab of the fh-mbaas deployment, in the Env Vars section.

    • MBaaS URL

      A URL of the route exposed for the fh-mbaas-service, including the https protocol prefix. This can be retrieved from the OpenShift web console, or by running the following command:

      echo "https://"$(oc get route/mbaas -o template --template {{.spec.host}})
  4. Click Save MBaaS and you will be directed to the MBaaS Status screen. If you chose an automatic installation, it can take several minutes before the status is reported back. For a manual installation, the status should be reported back in less than a minute.

Once the process of creating the MBaaS has successfully completed, you can see the new MBaaS in the list of MBaaS targets.

OpenShift 3 MBaaS target

In your OpenShift account, you can see the MBaaS represented by a project.

OpenShift 3 MBaaS target

4.6. Next Steps

Chapter 5. Adjusting System Resource Usage of the MBaaS and Cloud Apps

5.1. Overview

In the RHMAP 4.x MBaaS based on OpenShift 3, each cloud app and each MBaaS component runs in its own container. This architecture allows for granular control of CPU and memory consumption. A fine level of control of system resources helps to ensure efficient use of nodes, and to guarantee uninterrupted operation of critical services.

An application can be prepared for various situations, such as high peak load or sustained load, by making decisions about the resource limits of individual components. For example, you could decide that MongoDB must keep working at all times, and assign it a high, guaranteed amount of resources. At the same time, if the availability of a front-end Node.js server is less critical, the server can be assigned fewer initial resources, with the possibility to use more resources when available.

5.2. Prerequisites

The system resources of MBaaS components and cloud apps in the MBaaS can be regulated using the mechanisms available in OpenShift – resource requests, limits, and quota. Before proceeding with the instructions in this guide, we advise you to read the Quotas and Limit Ranges section in the OpenShift documentation.

5.3. Adjusting Resource Usage of the MBaaS

The RHMAP MBaaS is composed of several components, each represented by a single container running in its own pod. Each container has default resource requests and limits assigned in the MBaaS OpenShift template. See the section Overview of Resource Usage of MBaaS Components for a complete reference of the default values.

Depending on the deployment model of the MBaaS, you may have to adjust the resource limits and requests to fit your environment. If the MBaaS components are deployed on the same nodes as the cloud apps, there is no adjustment required.

However, when the MBaaS components are deployed on nodes dedicated to running the MBaaS only, it is strongly recommended to adjust the resource limits to take full advantage of the available resources on the dedicated nodes.

5.3.1. Calculating the Appropriate Resource Requests and Limits

Note

This section refers to CPU resources in two different terms – the commonly used term vCPU (virtual CPU), and the term millicores used in OpenShift documentation. The unit of 1 vCPU is equal to 1000 m (millicores), which is equivalent to 100% of the time of one CPU core.

The resource limits must be set accordingly for your environment and depend on the characteristics of load on your cloud apps. However, the following rules can be used as a starting point for adjustments of resource limits:

  • Allow 2 GiB of RAM and 1 vCPU for the underlying operating system.
  • Split the remainder of resources in equal parts amongst the MBaaS Components.

5.3.1.1. Example

Given a virtual machine with 16 GiB of RAM and 4 vCPUs, we allow 2 GiB of RAM and 1 vCPU for the operating system. This leaves 14 GiB of RAM and 3 vCPUs (equal to 3000 m) to distribute amongst the 5 MBaaS components.

14 GiB / 5 = 2.8 GiB of RAM per component
3000 m / 5 = 600 m per component

In this example, the resource limit for each MBaaS component would be 2.8 GiB of RAM and 600 millicores of CPU. Depending on the desired level of quality of service of each component, set the resource request values as described in the section Quality of service tiers in the OpenShift documentation.
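
The arithmetic above can be reproduced with a short shell sketch, which makes it easy to recalculate the split for differently sized virtual machines:

# Resources left after the operating system allowance, split evenly across the 5 MBaaS components
TOTAL_RAM_GIB=16; TOTAL_VCPU=4; COMPONENTS=5
echo "RAM per component: $(( (TOTAL_RAM_GIB - 2) * 1024 / COMPONENTS )) MiB"   # ~2.8 GiB
echo "CPU per component: $(( (TOTAL_VCPU - 1) * 1000 / COMPONENTS )) m"        # 600 m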

5.3.2. Overview of Resource Usage of MBaaS Components

The following table lists the components of the MBaaS, their idle resource usage, default resource request, and default resource limit.

MBaaS component   Idle RAM usage   RAM request   RAM limit   Idle CPU usage   CPU request   CPU limit

fh-mbaas          160 MiB          200 MiB       800 MiB     <1%              200 m         800 m
fh-messaging      160 MiB          200 MiB       400 MiB     <1%              200 m         400 m
fh-metrics        120 MiB          200 MiB       400 MiB     <1%              200 m         400 m
fh-statsd         75 MiB           200 MiB       400 MiB     <1%              200 m         400 m
mongodb           185 MiB          200 MiB       1000 MiB    <1%              200 m         1000 m

5.4. Adjusting Resource Usage of Cloud Apps

The resource requests and limits of cloud apps can be set the same way as for MBaaS components. There is no particular guideline for doing the adjustment in cloud apps.

5.4.1. Overview of Resource Usage of Cloud App Components

Cloud app component   Idle RAM usage   RAM request   RAM limit   Idle CPU usage   CPU request   CPU limit

nodejs-frontend       125 MiB          500 MiB       1 GiB       <1%              100 m         500 m
redis                 8 MiB            100 MiB       500 MiB     <1%              100 m         500 m

5.5. Setting Resource Requests and Limits

The procedure for setting the resource requests and limits is the same for both MBaaS components and cloud app components.

Open the DeploymentConfig of a component, for example fh-mbaas:

oc edit dc fh-mbaas

The DeploymentConfig contains two resources sections with equivalent values: one in the spec.strategy section, and another in the spec.template.spec.containers section. Set the cpu and memory values of requests and limits as necessary, making sure the values stay equivalent between the two sections, and save the file.

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: fh-mbaas
  ...
spec:
  ...
  strategy:
    resources:
      limits:
        cpu: 800m
        memory: 800Mi
      requests:
        cpu: 200m
        memory: 200Mi
      ...
  template:
    ...
    spec:
      containers:
        ...
        resources:
          limits:
            cpu: 800m
            memory: 800Mi
          requests:
            cpu: 200m
            memory: 200Mi
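
After saving, you can confirm that the new values were applied to the running pods. The following is a sketch; the pod name will differ in your cluster:

oc get pods -l name=fh-mbaas
oc describe pod <fh-mbaas-pod-name> | grep -A 2 -E 'Limits|Requests'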

5.6. Using Cluster Metrics to Visualize Resource Consumption

It is possible to view the immediate and historical resource usage of pods and containers in the form of donut charts and line charts using the Cluster Metrics deployment in OpenShift. Refer to Enabling Cluster Metrics in the OpenShift documentation for steps to enable cluster metrics.

Once cluster metrics are enabled, in the OpenShift web console, navigate to Browse > Pods and click on the component of interest. Click on the Metrics tab to see the visualizations.

metrics-donut-charts

Chapter 6. Setting Up SMTP for Cloud App Alerts

6.1. Overview

Each cloud app can automatically send alerts by e-mail when specified events occur, such as when the cloud app gets deployed, undeployed, or logs an error. See Alerts & Email Notifications for more information.

For the RHMAP 4.x MBaaS based on OpenShift 3, the e-mail function is not available immediately after installation. You must configure an SMTP server to enable e-mail support.

6.2. Prerequisites

  • An RHMAP 4.x MBaaS running in OpenShift Enterprise 3
  • An account on an SMTP server through which notification alerts can be sent
  • An email address where alerts should be sent
  • A deployed Cloud App

6.3. Configuring SMTP settings in fh-mbaas

The FH_EMAIL_SMTP and FH_EMAIL_ALERT_FROM environment variables in the fh-mbaas DeploymentConfig must be set, using the following commands:

oc project <mbaas-project-id>
oc env dc/fh-mbaas FH_EMAIL_SMTP="smtps://username:password@localhost" FH_EMAIL_ALERT_FROM="admin@example.com"

After you modify the DeploymentConfig, a redeploy of the fh-mbaas pod is triggered automatically. Once the pod is running again, you can verify the changes.
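
For example, you can watch the rollout from the command line until the new pod reports a Running status (press Ctrl+C to stop watching):

oc get pods -l name=fh-mbaas -w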

6.4. Verifying SMTP settings

  1. In the Studio, navigate to a deployed Cloud App.
  2. Go to the Notifications > Alerts section.
  3. Click Create An Alert.
  4. In the Emails field, enter your e-mail address.
  5. Click Test Emails.

You should receive an e-mail from the e-mail address set as FH_EMAIL_ALERT_FROM.

6.5. Troubleshooting

If the test email fails to send, verify the SMTP settings in the running fh-mbaas Pod.

oc env pod -l name=fh-mbaas --list | grep EMAIL

It may help to view the fh-mbaas logs while attempting to send an email, looking for any errors related to SMTP or email.

oc logs -f fh-mbaas-<deploy-uuid>

Ensure the Cloud App you are using to send the test email is running. If the test email sends successfully but fails to arrive, check that it has not been placed in your spam or junk folder.

Chapter 7. Using MongoDB in an RHMAP 4.x MBaaS

7.1. Overview

In an RHMAP 4.x MBaaS based on OpenShift 3, all components of the MBaaS run within a single OpenShift project, together with a shared MongoDB replica set. Depending on how the MBaaS was installed, the replica set runs either on a single node, or on multiple nodes, and may be backed by persistent storage. The recommended production-grade MongoDB setup for an MBaaS has 3 replicas, each backed by persistent storage.

Each cloud app deployed to the MBaaS has its own OpenShift project. However, the database of a cloud app is created in the shared MongoDB instance. Therefore, all management operations on the persistent data of cloud apps and the MBaaS, such as backup or replication, can be centralized. At the same time, the data of individual cloud apps is isolated in separate databases.
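
As an illustration, one way to see the per-app databases in the shared replica set is to open a MongoDB shell inside one of the MongoDB pods of the MBaaS project. This is a sketch based on the commands used elsewhere in this guide:

oc rsh $(oc get po -l deploymentconfig=mongodb-1 -o name | sed -e 's|pod/||')
# then, inside the pod:
mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD} --eval "db.adminCommand('listDatabases').databases.forEach(function(d){ print(d.name) })"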

7.2. Accessing data in the MongoDB in the MBaaS

A simple way to store data is using the $fh.db API, which provides methods for create, read, update, delete, and list operations. See the $fh.db API documentation for more information.

If you need the full capability of a native MongoDB driver, or want to use another library to access the data, such as Mongoose, you can use the connectionString method of the $fh.db API to retrieve the connection string to the MongoDB instance:

$fh.db({
 "act" : "connectionString"
}, function(err, connectionString){
  console.log('connectionString=', connectionString);
});

Note

To avoid concurrency issues, we recommend using either the $fh.db API or a direct connection to the database, but not both at the same time.

Chapter 8. Centralized Logging for MBaaS Components

8.1. Overview

Logging output from RHMAP MBaaS components can be aggregated and accessed through a web console when using an MBaaS backed by OpenShift Enterprise 3 (OSEv3). Aggregated logging is enabled by deploying an EFK logging stack to your OSEv3 instance, which consists of the following components:

  • Elasticsearch indexes log output collected by Fluentd and makes it searchable.
  • Fluentd collects standard output of all containers.
  • Kibana is a web console for querying and visualizing data from Elasticsearch.

To enable this functionality, follow the official OpenShift guide Aggregating Container Logs.

8.2. Accessing Logs Through Kibana Web Console

The Kibana web console is where logs gathered by Fluentd and indexed by Elasticsearch can be viewed and queried. You can access the Kibana web console through the OpenShift web console, or directly at the URL configured through the KIBANA_HOSTNAME parameter in the deployment procedure.

8.2.1. Viewing Logs of a Single Pod

If you have configured loggingPublicURL in step 8 of the deployment procedure, the OpenShift web console allows you to view the log archive of a particular pod.

  1. In the OpenShift web console, select a project, and look for the deployment named fh-mbaas.
  2. Click on the Pods circle.
  3. Choose one of the pods to inspect.
  4. Click on the Logs tab.
  5. Click on the View Archive button at the top right corner to access the logs of the chosen pod in the Kibana web console.

Note

By default, Kibana’s time filter shows the last 15 minutes of data. If you don’t see any values, adjust the Time filter setting to a broader time interval.

8.2.2. Accessing Kibana Directly

You can access the Kibana web console directly at https://KIBANA_HOSTNAME, where KIBANA_HOSTNAME is the host name you set in step 4 of the deployment procedure.

8.2.3. Configuring an Index Pattern

When accessing the Kibana web console directly for the first time, you are presented with the option to configure an index pattern. You can also access this configuration screen in the Settings tab. By default, there is an index pattern in the format <MBaaS ID>-mbaas.*, matching the ID of the deployed MBaaS target.

To make queries more efficient, you can restrict the index pattern by date and time.

  1. Select the Use event times to create index names option.
  2. Enter the following pattern in the Index name or pattern input text field. For example:

    [onprem-mbaas.]YYYY.MM.DD
  3. You will see output similar to the following below the input field:

    Pattern matches 100% of existing indices and aliases
    onprem-mbaas.2016.02.04
    onprem-mbaas.2016.02.05
  4. Click Create to create the index based on this pattern.
  5. You can now select this newly created index in the Discover tab when doing searches, as well as in other parts, such as the Visualizations tab.

8.3. Identifying Issues in an MBaaS

If you suspect that an error of an MBaaS component may be the cause of an issue, you can use Kibana’s Discover tab to find the root of the problem. The following steps describe the general procedure you can follow to identify issues.

  1. Select the index for the MBaaS target you are interested in

    Use the dropdown just below the input bar in the Discover view to list all available indices. An index is similar to a database in relational database systems. Select which index your searches will be performed against.

  2. Select a time interval for your search

    Click the Time Filter (clock icon) and adjust the time interval. Initially, try a broader search.

  3. Perform a simple search

    To search for all error events, perform a simple search for error in the Discovery field. This will return the number of hits within the chosen time interval.

  4. Select the msg or message field to be displayed

    On the left hand side of the Discover view is a list of fields. From this list you can select fields to display in the document data section. Selecting a field replaces the _source field in the document data view. This enables you to see any error messages and might help you refine your original search if needed. You can also select more fields to help you locate the issue.

  5. Narrow down the time interval

    The histogram shows search hits returned in the chosen time interval. To narrow down the search in time you have the following options:

    • Click on a bar in the histogram to narrow down the search to that bar’s time interval.
    • Select a time window in the date histogram by clicking and dragging between the start/end time you are interested in.
  6. Inspect the document data

    Once you narrow down the search, you can inspect the document data items. Apart from the msg and message fields, you might be interested in kubernetes_pod_name to see the pod a message originates from.

8.3.1. Viewing All Debug Logs for an MBaaS Component

If searching for error messages doesn’t help, you can try looking into debug logs of individual MBaaS components.

  1. Select the index for the MBaaS target that you are interested in
  2. Start a new search

    Click on the New Search button to the left of the search input bar, which looks like a document with a plus sign.

  3. Search an MBaaS component for all debug messages

    For example, to search for all debug messages of the fh-messaging component, enter the following query:

    type: bunyan && level: 20 && kubernetes_container_name: "fh-messaging"

    If you know some part of the error message, you can specify that as part of the search:

    type: bunyan && level: 20 && kubernetes_container_name: "fh-messaging" && "Finished processing"

    You can narrow down your search further by time, as described in step 5 above.

    As a reference, the following are the Bunyan log levels:

    TRACE = 10;
    DEBUG = 20;
    INFO = 30;
    WARN = 40;
    ERROR = 50;
    FATAL = 60;

Chapter 9. Monitoring the MBaaS with Cockpit

9.1. Overview

System resources of nodes and containers in the MBaaS on OpenShift 3 can be monitored and managed using Cockpit.

Cockpit is a system administration tool that provides insights into how nodes and containers are performing. It lets you monitor current values and adjust limits on system resources, control the lifecycle of container instances, and manipulate container images. For more information about Cockpit, refer to the official web site of the Cockpit Project and its Documentation.

9.2. Installation

For most OpenShift 3 instances, Cockpit is most likely already installed on all nodes. This is not the case if your nodes use RHEL Atomic Host, where Cockpit needs to be installed manually.

To check whether Cockpit is installed in your OpenShift cluster, try visiting the URL of the Cockpit web interface:

  http://<master node host>:9090

If there’s no response to the request, Cockpit is most likely not installed.

9.2.1. Installing Cockpit Manually

  1. Install Cockpit on nodes.

    The following three steps must be repeated for each node you wish to monitor in your OpenShift cluster.

  2. Log in to the node.

    ssh <node host>
  3. Install Cockpit packages.

    yum install cockpit cockpit-docker
  4. Enable and start the Cockpit service.

    systemctl enable cockpit.socket
    systemctl start cockpit.socket
  5. Create a Cockpit system user on master.

    To log in to the Cockpit web interface, you will have to provide the username and password of an operating system user existing on the OpenShift master node. This guide refers to this user as the Cockpit system user. To allow Cockpit to access system resources and perform operations on Docker containers and Kubernetes resources, the Cockpit system user must:

    • be in the docker group;
    • be able to log in to other nodes using ssh;
    • be able to perform Kubernetes operations.

    Create the Cockpit system user on the master node, or modify an existing user to have the necessary privileges.
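
    The following is a hypothetical sketch of creating such a user; the user name is a placeholder and the exact steps depend on your security policies:

    # Create the user and add it to the docker group
    sudo useradd cockpit-admin
    sudo passwd cockpit-admin
    sudo usermod -aG docker cockpit-admin
    # Generate an SSH key and distribute it to the other nodes so Cockpit can connect to them
    sudo -u cockpit-admin ssh-keygen -t rsa -N '' -f /home/cockpit-admin/.ssh/id_rsa
    sudo -u cockpit-admin ssh-copy-id cockpit-admin@<node host>
    # Additionally, grant the user whatever access to Kubernetes operations your cluster policy requires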

9.3. Viewing the Containers on an Openshift Node

Navigate to the Cockpit dashboard for a node in a web browser (port 9090 by default) and log in as the Cockpit system user. To see all containers deployed on that node, click Containers in the left-hand side menu.

View containers

You can filter the list to only display running containers, using the dropdown menu above the list of containers. This view lets you see the RAM and CPU usage of all running containers.

If you select an MBaaS node, you will see the containers for all MBaaS components. Clicking on a container will show the current logs, CPU shares, and RAM usage. In the Tools menu on the left hand side, you can get terminal access into the node for further investigation.

9.4. Viewing Multiple Hosts Simultaneously

Cockpit can connect to multiple hosts from a single Cockpit session. This can be useful to compare resource usage of two or more machines in the same dashboard. See Multiple Machines in the Cockpit documentation for more information.

Chapter 10. Monitoring the MBaaS with Nagios

10.1. Overview

To monitor the status of the MBaaS and its components, you can deploy the Nagios monitoring software to your OpenShift cluster. This guide provides steps for deployment and usage of Nagios.

10.2. Prerequisites

  • An RHMAP MBaaS running in Openshift Enterprise 3
  • An account on an SMTP server through which notification alerts can be sent
  • An email address where alerts should be sent
  • A PersistentVolume in Openshift, with 1Gi storage. Refer to the Persistent Storage Setup section of the MBaaS installation guide, and Persistent Storage in the OpenShift documentation for information on how to achieve this.

10.3. Deploying Nagios to OpenShift

Using the oc command, switch to the MBaaS project in OpenShift by running the following command, replacing <mbaas-project-id> with your MBaaS project name:

oc project <mbaas-project-id>

The following command will create the necessary resources. Edit the environment variable parameters based on your SMTP server configuration and other details:

oc new-app -f \
https://raw.githubusercontent.com/feedhenry/mbaas-monitoring/4.0.8-8/nagios.yml \
-p SMTP_SERVER=localhost,\
SMTP_USERNAME=username,\
SMTP_PASSWORD=password,\
SMTP_TLS=auto,\
SMTP_FROM_ADDRESS=admin@example.com,\
MBAAS_ADMIN_EMAIL=root@localhost,\
NAGIOS_USER=nagiosadmin,\
NAGIOS_IMAGE_NAME=<nagios-image-name>,\
NAGIOS_IMAGE_VERSION=<nagios-image-version>,\
MBAAS_ROUTER_DNS=$(oc get route mbaas -o template --template {{.spec.host}}),\
NAGIOS_HOST=$(printf "nagios-%s" `oc get route mbaas -o template --template {{.spec.host}}`)

To learn more about the available template parameters, enter the following command:

oc process --parameters -f https://raw.githubusercontent.com/feedhenry/mbaas-monitoring/4.0.8-8/nagios.yml

For the MBAAS_ROUTER_DNS and NAGIOS_HOST parameters, the above example is recommended to get sane default values.

The NAGIOS_IMAGE_NAME and NAGIOS_IMAGE_VERSION are optional properties to the template and allow for overriding the default image name and version if required.

After the command finishes, you can check the status of the deployment to see if Nagios is ready for use – the status of the Nagios pod must be Running.

oc get pods
NAME             READY     STATUS              RESTARTS   AGE
nagios-1-g7u1t   1/1       Running             0          29m

10.4. Using Nagios

Navigate to the Nagios web console, which is exposed at NAGIOS_HOST set during creation. For example:

  https://nagios.apps.feedhenry.io

The username and password for login are the values of the NAGIOS_USER and NAGIOS_PASSWORD environment variables respectively in the 'nagios' DeploymentConfig.
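
For example, you can list both values from the command line, assuming the DeploymentConfig is named nagios as created by the template above:

oc env dc/nagios --list | grep -E 'NAGIOS_USER|NAGIOS_PASSWORD'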

Once the Nagios web console loads in a web browser, the status of the MBaaS components can be viewed by navigating to the Services tab in the panel on the left.

For further details on using Nagios, see the online Nagios documentation.

Chapter 11. Troubleshooting the RHMAP 4.x MBaaS

11.1. Overview

This document provides information on how to identify, debug, and resolve possible issues that can be encountered during installation and usage of the RHMAP MBaaS 4.0 on OpenShift 3.

In Common Problems, you can see a list of resolutions for problems that might occur during installation or usage of the MBaaS.

Note

See also Known Issues for a list of workarounds for issues that currently exist in the RHMAP MBaaS 4.0 and will be fixed in upcoming versions.

11.2. Check the Health Endpoint of the MBaaS

The first step to check whether an MBaaS is running correctly is to see the output of its health endpoint. The HTTP health endpoint in the MBaaS reports the health status of the MBaaS and of its individual components.

From the command line, enter the following command:

curl https://<MBaaS URL>/sys/info/health

If the MBaaS is running correctly without any errors, the output of the command should be similar to the following, showing a "test_status": "ok" for each component:

{
 "status": "ok",
 "summary": "No issues to report. All tests passed without error",
 "details": [
  {
   "description": "Check fh-statsd running",
   "test_status": "ok",
   "result": {
    "id": "fh-statsd",
    "status": "OK",
    "error": null
   },
   "runtime": 6
  },
...
}

If there are any errors, the output will contain error messages in the result.error object of the details array for individual components. Use this information to identify which component is failing and to get information on further steps to resolve the failure.

You can also see an HTTP 503 Service Unavailable error returned from the health endpoint; this can happen in several situations.

Alternatively, you can see the result of this health check in the Studio. Navigate to the Admin > MBaaS Targets section, select your MBaaS target, and click Check the MBaaS Status.

If there is an error, you are presented with a screen showing the same information as described above. Use the provided links to OpenShift Web Console and the associated MBaaS Project in OpenShift to help with debugging of the issue on the OpenShift side.

OpenShift 3 MBaaS target failure

11.3. Analyze Logs

To see the logging output of individual MBaaS components, you must configure centralized logging in your OpenShift cluster. See Centralized Logging for MBaaS Components for a detailed procedure.

The section Identifying Issues in an MBaaS provides guidance on discovering MBaaS failures by searching and filtering its logging output.

11.4. Common Problems

11.4.1. A replica pod of mongodb-service is replaced with a new one

11.4.1.1. Summary

The replica set is susceptible to downtime if the replica set members configuration is not up to date with the actual set of pods. At least two members must be active at any time for the election of a primary member to happen. Without a primary member, the MongoDB service cannot perform any read or write operations.

A MongoDB replica may get terminated in several situations:

  • A node hosting a MongoDB replica is terminated or evacuated.
  • A re-deploy is triggered on one of the MongoDB DeploymentConfig objects in the project – manually or by a configuration change.
  • One of the MongoDB deployments is scaled down to zero pods, then scaled back up to one pod.

To learn more about replication in MongoDB, see Replication in the official MongoDB documentation.

11.4.1.2. Fix

The following procedure shows you how to re-configure a MongoDB replica set into a fully operational state. You must synchronize the list of replica set members with the actual set of MongoDB pods in the cluster, and set a primary member of the replica set.

  1. Note the MongoDB endpoints.

    oc get ep | grep mongo

    Make note of the list of endpoints. It is used later to set the replica set members configuration.

    NAME              ENDPOINTS                                      AGE
    mongodb-1         10.1.2.152:27017                               17h
    mongodb-2         10.1.4.136:27017                               17h
    mongodb-3         10.1.5.16:27017                                17h
  2. Log in to the oldest MongoDB replica pod.

    List all the MongoDB replica pods.

    oc get po -l name=mongodb-replica

    In the output, find the pod with the highest value in the AGE field.

    NAME                READY     STATUS    RESTARTS   AGE
    mongodb-1-1-4nsrv   1/1       Running   0          19h
    mongodb-2-1-j4v3x   1/1       Running   0          3h
    mongodb-3-2-7tezv   1/1       Running   0          1h

    In this case, it is mongodb-1-1-4nsrv with an age of 19 hours.

    Log in to the pod using oc rsh.

    oc rsh mongodb-1-1-4nsrv
  3. Open a MongoDB shell on the primary member.

    mongo admin -u admin -p ${MONGODB_ADMIN_PASSWORD} --host ${MONGODB_REPLICA_NAME}/localhost
    MongoDB shell version: 2.4.9
    connecting to: rs0/localhost:27017/admin
    [...]
    Welcome to the MongoDB shell.
    For interactive help, type "help".
    For more comprehensive documentation, see
        http://docs.mongodb.org/
    Questions? Try the support group
        http://groups.google.com/group/mongodb-user
    rs0:PRIMARY>
  4. List the configured members.

    Run rs.conf() in the MongoDB shell.

    rs0:PRIMARY> rs.conf()
    {
        "_id" : "rs0",
        "version" : 56239,
        "members" : [
            {
                "_id" : 3,
                "host" : "10.1.0.2:27017"
            },
            {
                "_id" : 4,
                "host" : "10.1.1.2:27017"
            },
            {
                "_id" : 5,
                "host" : "10.1.6.4:27017"
            }
        ]
    }
  5. Ensure all hosts have either PRIMARY or SECONDARY status.

    Enter the following command. It may take several seconds to complete.

    rs0:PRIMARY> rs.status().members.forEach(function(member) {print(member.name + ' :: ' + member.stateStr)})
    mongodb-1:27017  :: PRIMARY
    mongodb-2:27017  :: SECONDARY
    mongodb-3:27017  :: SECONDARY
    rs0:PRIMARY>

    There must be exactly one PRIMARY node. All the other nodes must be SECONDARY. If a member is in a STARTUP, STARTUP2, RECOVERING, or UNKNOWN state, try running the above command again in a few minutes. These states usually indicate that the replica set is still starting up, recovering, or performing another procedure that should eventually result in an operational state.
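
If the hosts listed by rs.conf() in step 4 do not match the endpoints noted in step 1, the member list must be brought in line with the current pods before the replica set can return to a fully operational state. The following is a minimal sketch of such a reconfiguration, run in the MongoDB shell opened in step 3. The addresses are the example endpoint values from step 1 and must be replaced with the values from your own cluster, and the mapping between configured members and endpoints should be verified before applying the change. Note that forcing a reconfiguration can lead to a rollback of committed writes, so back up your data first.

rs0:PRIMARY> cfg = rs.conf()
rs0:PRIMARY> cfg.members[0].host = "10.1.2.152:27017"
rs0:PRIMARY> cfg.members[1].host = "10.1.4.136:27017"
rs0:PRIMARY> cfg.members[2].host = "10.1.5.16:27017"
rs0:PRIMARY> rs.reconfig(cfg, {force: true})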

11.4.1.3. Result

After applying the fix, all three MongoDB pods will be members of the replica set. If one of the three members terminates unexpectedly, the two remaining members are enough to keep the MongoDB service fully operational.

11.4.2. MongoDB doesn’t respond after repeated installation of the MBaaS

11.4.2.1. Summary

Note

The described situation can result from an attempt to create an MBaaS with the same name as a previously deleted one. To avoid this, use a unique name for every MBaaS installation.

If the mongodb-service is not responding after the installation of the MBaaS, it is possible that some of the MongoDB replica set members failed to start up. This can happen due to a combination of the following factors:

  • The most likely cause of failure in MongoDB startup is the presence of a mongod.lock lock file and journal files in the MongoDB data folder, left over from an improperly terminated MongoDB instance.
  • If a MongoDB pod is terminated, the associated persistent volumes transition to a Released state. When a new MongoDB pod replaces a terminated one, it may get attached to the same persistent volume which was attached to the terminated MongoDB instance, and thus get exposed to the files created by the terminated instance.
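
To check whether any persistent volumes from a previous installation still hold such leftover data, list the volumes that remain in the Released state:

# Persistent volumes in the Released state may still contain data files
# (such as mongod.lock and journal files) from a previous MongoDB instance.
oc get pv | grep Released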

11.4.2.2. Fix

Note

SSH access and administrator rights on the OpenShift master and the NFS server are required for the following procedure.

Note

This procedure describes a fix only for persistent volumes backed by NFS. Refer to Configuring Persistent Storage in the official OpenShift documentation for general information on handling other volume types.

The primary indicator of this situation is the mongodb-initiator pod not reaching the Completed status.

Enter the following command to see the status of mongodb-initiator:

oc get pod mongodb-initiator
NAME                READY     STATUS      RESTARTS   AGE
mongodb-initiator   1/1       Running     0          5d

If the status is anything other than Completed, the MongoDB replica set has not been created properly. If mongodb-initiator stays in this state for too long, it may be a sign that one of the MongoDB pods has failed to start. To confirm whether this is the case, check the logs of mongodb-initiator using the following command:

oc logs mongodb-initiator
=> Waiting for 3 MongoDB endpoints ...mongodb-1
mongodb-2
mongodb-3
=> Waiting for all endpoints to accept connections...

If the above message is the last one in the output, it signifies that some of the MongoDB pods are not responding.

Check the event log by running the following command:

oc get ev

If the output contains a message similar to the following, continue with the procedure below to clean up the persistent volumes:

  FailedMount   {kubelet ip-10-0-0-100.example.internal}   Unable to mount volumes for pod "mongodb-1-1-example-mbaas": Mount failed: exit status 32

The following procedure will guide you through the process of deleting contents of existing persistent volumes, creating new persistent volumes, and re-creating persistent volume claims.

  1. Find the NFS paths.

    On the OpenShift master node, execute the following command to find the paths of all persistent volumes associated with an MBaaS. Replace <mbaas-project-name> with the name of the MBaaS project in OpenShift.

    list=$(oc get pv | grep <mbaas-project-name> | awk '{ print $1}');
    for pv in ${list[@]} ; do
      path=$(oc describe pv ${pv} | grep Path: | awk '{print $2}' | tr -d '\r')
      echo ${path}
    done

    Example output:

    /nfs/exp222
    /nfs/exp249
    /nfs/exp255
  2. Delete all contents of the found NFS paths.

    Log in to the NFS server using ssh.

    Execute the following command to list contents of the paths. Replace <NFS paths> with the list of paths from the previous step, separated by spaces.

    for path in <NFS paths> ; do
      echo ${path}
      sudo ls -l ${path}
      echo " "
    done

    Example output:

    /nfs/exp222
    -rw-------. 1001320000 nfsnobody admin.0
    -rw-------. 1001320000 nfsnobody admin.ns
    -rw-------. 1001320000 nfsnobody fh-mbaas.0
    -rw-------. 1001320000 nfsnobody fh-mbaas.ns
    -rw-------. 1001320000 nfsnobody fh-metrics.0
    -rw-------. 1001320000 nfsnobody fh-metrics.ns
    -rw-------. 1001320000 nfsnobody fh-reporting.0
    -rw-------. 1001320000 nfsnobody fh-reporting.ns
    drwxr-xr-x. 1001320000 nfsnobody journal
    -rw-------. 1001320000 nfsnobody local.0
    -rw-------. 1001320000 nfsnobody local.1
    -rw-------. 1001320000 nfsnobody local.ns
    -rwxr-xr-x. 1001320000 nfsnobody mongod.lock
    drwxr-xr-x. 1001320000 nfsnobody _tmp
    
    /nfs/exp249
    drwxr-xr-x. 1001320000 nfsnobody journal
    -rw-------. 1001320000 nfsnobody local.0
    -rw-------. 1001320000 nfsnobody local.ns
    -rwxr-xr-x. 1001320000 nfsnobody mongod.lock
    drwxr-xr-x. 1001320000 nfsnobody _tmp
    
    /nfs/exp255
    drwxr-xr-x. 1001320000 nfsnobody journal
    -rw-------. 1001320000 nfsnobody local.0
    -rw-------. 1001320000 nfsnobody local.ns
    -rwxr-xr-x. 1001320000 nfsnobody mongod.lock
    drwxr-xr-x. 1001320000 nfsnobody _tmp
    Warning

    Make sure to back up all data before proceeding. The following operation may result in irrecoverable loss of data.

    If the listed contents of the paths resemble the output shown above, delete all contents of the found NFS paths. Replace <NFS paths> with the list of paths from step 1, separated by spaces.

    for path in <NFS paths>
    do
      if [ -z "${path+x}" ]
      then
        echo "path is unset"
      else
        echo "path is set to '${path}'"
        cd "${path}" && rm -rf ./*
      fi
    done
  3. Re-create persistent volumes.

    Log in to the OpenShift master node using ssh.

    Navigate to the directory which contains the YAML files that were used to create the persistent volumes in the section Creating NFS-backed PersistentVolumes for the MongoDB replica set members of the guide Provisioning an MBaaS in Red Hat OpenShift Enterprise 3.

    Execute the following command to delete and re-create the persistent volumes. Replace <mbaas-project-name> with the name of the MBaaS project in OpenShift.

    list=$(oc get pv | grep <mbaas-project-name> | awk '{ print $1}');
    for pv in ${list}; do
      oc delete pv ${pv}
      oc create -f ${pv}.yaml
    done

    The persistent volumes are now re-created and in Available state.

    Note

    The re-created persistent volumes will not be used by OpenShift again for the same persistent volume claims. Make sure you have at least three additional persistent volumes in Available state.
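
    A quick way to count the persistent volumes currently in the Available state is:

    # The result of this command should be at least three.
    oc get pv | grep Available | wc -l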

  4. Re-create persistent volume claims for MongoDB.

    Create three JSON files, with the following names:

    • mongodb-claim-1.json
    • mongodb-claim-2.json
    • mongodb-claim-3.json

    Copy the following contents into each file. Change the metadata.name value to match the name of the file without the suffix. For example, the contents for the mongodb-claim-1.json file are as follows:

    {
      "kind": "PersistentVolumeClaim",
      "apiVersion": "v1",
      "metadata": {
        "name": "mongodb-claim-1"
      },
      "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {
          "requests": {
            "storage": "50Gi"
          }
        }
      }
    }
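
    Rather than editing each copy by hand, the remaining two files can also be generated from the first one. The following is a small sketch, run in the directory containing mongodb-claim-1.json:

    # Generate mongodb-claim-2.json and mongodb-claim-3.json from mongodb-claim-1.json,
    # replacing only the claim name.
    for i in 2 3; do
      sed "s/mongodb-claim-1/mongodb-claim-${i}/" mongodb-claim-1.json > mongodb-claim-${i}.json
    done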

    Enter the following command to re-create the persistent volume claims.

    for pvc in mongodb-claim-1 mongodb-claim-2 mongodb-claim-3; do
      oc delete pvc ${pvc}
      oc create -f ${pvc}.json
    done
  5. Verify that mongodb-initiator proceeds with initialization.

    Enter the following command to see the logs of mongodb-initiator.

    oc logs mongodb-initiator -f

    After mongodb-initiator completes its work, the log output should contain the following message, indicating that the MongoDB replica set was successfully created.

    => Successfully initialized replSet

11.4.2.3. Result

The MongoDB service is fully operational with all three replicas attached to their persistent volumes. The persistent volumes left in Released state from the previous installation are now in the Available state, ready for use by other persistent volume claims.

11.4.3. MongoDB replica set stops replicating correctly

11.4.3.1. Summary

If some of the MBaaS components start to crash, it may be because they cannot connect to a primary member of the MongoDB replica set. This usually indicates that the replica set configuration has become inconsistent, which can happen if a majority of the member pods are replaced and get new IP addresses. In this case, data cannot be written to or read from the MongoDB replica set in the MBaaS project.

To verify the replica set state as seen by each member, enter the following command in the shell of a user logged in to OpenShift with access to the MBaaS project:

for i in `oc get po -a | grep -e "mongodb-[0-9]\+"  | awk '{print $1}'`; do
  echo "## ${i} ##"
  echo mongo admin -u admin -p \${MONGODB_ADMIN_PASSWORD} --eval "printjson\(rs.status\(\)\)" |  oc rsh --shell='/bin/bash' $i
done

For a fully consistent replica set, the output for each member would contain a members object listing details about each member. If the output resembles the following, containing the "ok" : 0 value for some members, proceed to the fix in order to make the replica set consistent.

## mongodb-1-1-8syid ##
MongoDB shell version: 2.4.9
connecting to: admin
{
    "startupStatus" : 1,
    "ok" : 0,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}
## mongodb-2-1-m6ao1 ##
MongoDB shell version: 2.4.9
connecting to: admin
{
    "startupStatus" : 1,
    "ok" : 0,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}
## mongodb-3-2-e0a11 ##
MongoDB shell version: 2.4.9
connecting to: admin
{
    "startupStatus" : 1,
    "ok" : 0,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}

11.4.3.2. Fix

You can make the replica set consistent by forcing a re-deploy.

  1. Note the MongoDB pods that are in an error state.

    oc get po

    Example:

    NAME                READY     STATUS    RESTARTS   AGE
    mongodb-1-1-pu0fz   0/1       Error     0          1h
  2. Force a redeployment of the pod.

    oc deploy mongodb-1 --latest

11.4.3.3. Result

The replica set resumes replicating correctly, and the dependent MBaaS components start working again.

11.4.4. An MBaaS component fails to start because no suitable nodes are found

11.4.4.1. Summary

If some of the MBaaS components do not start up after the installation, the OpenShift scheduler may have failed to find suitable nodes on which to schedule the pods of those components. This means that the OpenShift cluster does not contain all the nodes required by the MBaaS OpenShift template, or that the existing nodes do not satisfy the requirements on system resources, node labels, and other parameters.

Read more about the OpenShift Scheduler in the OpenShift documentation.

To verify that this is the problem, enter the following command to list the event log:

oc get ev

If the output contains one of the following messages, you are most likely facing this problem – the nodes in your OpenShift cluster don’t fulfill some of the requirements.

  • Failed for reason MatchNodeSelector and possibly others
  • Failed for reason PodExceedsFreeCPU and possibly others
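
You can also inspect a pod that is stuck in the Pending state directly; the events at the end of its description include the scheduler's failure message. Replace <pod-name> with the name of an affected pod:

# List pods that have not been scheduled yet, then show the events for one of them.
oc get pods | grep Pending
oc describe pod <pod-name>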

11.4.4.2. Fix

To fix this problem, configure nodes in your OpenShift cluster to match the requirements of the MBaaS OpenShift template.

  • Apply correct labels to nodes.

    Refer to Apply Node Labels in the guide Provisioning an MBaaS in Red Hat OpenShift Enterprise 3 for details on what labels must be applied to nodes.

  • Make sure the OpenShift cluster has sufficient resources for the MBaaS components, cloud apps, and cloud services it runs.

    Configure the machines used as OpenShift nodes to have more CPU power and internal memory available, or add more nodes to the cluster. A quick way to inspect how much of a node's capacity is already allocated is shown after this list. Refer to the guide on Overcommitting and Compute Resources in the OpenShift documentation for more information on how containers use system resources.

  • Clean up the OpenShift instance.

    Delete unused projects from the OpenShift instance.
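
As a capacity check, you can inspect how much CPU and memory is already requested on each node before deciding whether to add resources. The following is a sketch; replace <node-name> with the name of an OpenShift node, and note that the number of context lines printed by grep is approximate:

# Show the "Allocated resources" section of the node description,
# which lists the CPU and memory requests and limits already placed on the node.
oc describe node <node-name> | grep -A 6 "Allocated resources"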

Alternatively, it is also possible to correct the problem from the other side — change the deployment configurations in the MBaaS OpenShift template to match the setup of your OpenShift cluster.

Warning

Changing the deployment configurations may negatively impact the performance and reliability of the MBaaS. Therefore, this is not a recommended approach.

To list all deployment configurations, enter the following command:

oc get dc
NAME           TRIGGERS       LATEST
fh-mbaas       ConfigChange   1
fh-messaging   ConfigChange   1
fh-metrics     ConfigChange   1
fh-statsd      ConfigChange   1
mongodb-1      ConfigChange   1
mongodb-2      ConfigChange   1
mongodb-3      ConfigChange   1

To edit a deployment configuration, use the oc edit dc <deployment> command. For example, to edit the configuration of the fh-mbaas deployment, enter the following command:

oc edit dc fh-mbaas

You can modify system resource requirements in the resources sections.

...
resources:
  limits:
    cpu: 800m
    memory: 800Mi
  requests:
    cpu: 200m
    memory: 200Mi
...
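
The same change can also be applied without opening an editor, for example with oc patch. The following is a sketch only; it assumes that the container inside the fh-mbaas deployment configuration is also named fh-mbaas, which you should verify in the configuration before running the command:

# Patch the resource requests and limits of the fh-mbaas container in place.
# The container name must match the name used in the deployment configuration.
oc patch dc fh-mbaas -p '{"spec":{"template":{"spec":{"containers":[{"name":"fh-mbaas","resources":{"limits":{"cpu":"800m","memory":"800Mi"},"requests":{"cpu":"200m","memory":"200Mi"}}}]}}}}'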

Changing a deployment configuration triggers a deployment operation.

11.4.4.3. Result

If you changed the setup of nodes in the OpenShift cluster to match the requirements of the MBaaS OpenShift template, the MBaaS is now fully operational without any limitation to quality of service.

If you changed the deployment configuration of any MBaaS component, the cluster should now be fully operational, with a potential limitation to quality of service.

Chapter 12. Known Issues in the RHMAP 4.0 MBaaS

12.1. Overview

This document describes issues that currently exist in the RHMAP 4.0 MBaaS and will be fixed in upcoming releases.

12.2. Deleting OpenShift SSH Key breaks cloud app deployments

For each cloud app, an SSH key pair is created in RHMAP so that OpenShift can use it to clone the project and perform a build. If the key is deleted, then cloud apps can no longer be deployed from the Studio.

12.3. Cloud App analytics data does not update

Data in the Reporting section of a project and in the Aggregated Analytics section of the Studio may stop updating intermittently.

To fix this issue:

  1. Navigate to the Admin > MBaaS Targets section.
  2. Select the MBaaS hosting the cloud apps which manifest this problem.
  3. Click Check the MBaaS Status.
  4. Click MBaaS Project in OpenShift.
  5. In the OpenShift project screen, find the fh-messaging-service and restart its deployment by scaling it down to zero pods and back up to one pod.

After the pod restarts, the analytics data is updated.
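
Alternatively, the restart can be performed from the command line. The following is a sketch; it assumes that the deployment configuration backing fh-messaging-service is named fh-messaging, as listed in the oc get dc output earlier in this document, and that you are logged in to the MBaaS project in OpenShift:

# Scale the deployment down to zero pods and back up to one pod to force a restart.
oc scale dc fh-messaging --replicas=0
oc scale dc fh-messaging --replicas=1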

12.4. Incorrect deployment status indicated by progress bar for successful deployments

The progress bar of the deployment status for cloud apps and MBaaS services sometimes incorrectly indicates an ongoing operation, showing moving blue bars instead of a solid green color, even after the cloud app or MBaaS service has been successfully deployed to OpenShift.

To ensure the cloud app or service is fully deployed and running correctly, visit the Current Host URL found in the Deploy section of the cloud app or service and verify the status manually.

12.5. MongoDB doesn’t work after a restart

If two or more of the MongoDB pods are shut down, the MongoDB service stops working until the replica set is manually corrected. Refer to the section MongoDB doesn’t respond after repeated installation of the MBaaS of the document Troubleshooting the RHMAP MBaaS 4.0 for the manual correction steps. After applying the procedure, MongoDB should be fully operational.

12.6. MongoDB pod starting in REMOVED state

When a MongoDB pod is restarted, it can sometimes fail to resolve the host name of its associated service. As a result, the MongoDB instance fails to join the replica set and enters the REMOVED state.

The log of the MongoDB pod contains a message similar to the following:

Locally stored replica set configuration does not have a valid entry for the current node;
waiting for reconfig or remote heartbeat;
Got "NodeNotFound: No host described in new configuration 3 for replica set rs0 maps to this node" while validating { ... replicaset config json omitted ... }

To fix this issue:

  1. Log in to the OpenShift cluster (for example, on the master node) and open a remote shell to the first MongoDB pod.

    oc rsh $(oc get pods | grep "mongodb-1" | grep -v "deploy" | awk '{print $1}')
  2. Connect to MongoDB.

    mongo -u admin -p ${MONGODB_ADMIN_PASSWORD} admin
  3. Enter the following command in the MongoDB shell:

    rs.reconfig(rs.config(), {force: true});
    Warning

    Running the command can lead to rollback of committed writes.

The replica set reconfiguration may take up to several minutes. Afterwards, all replica set members return to the PRIMARY or SECONDARY state.

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.