Chapter 5. Provisioning an RHMAP 4.x Core in OpenShift Container Platform

Depending on your RHMAP subscription, you must either install a self-managed Core in your own OpenShift Container Platform cluster, as described in Section 5.2, “Installation”, or use a Core hosted by Red Hat, as described in Section 5.3.1, “Accessing the Hosted Core”.

5.1. Prerequisites

This guide assumes several prerequisites are met before the installation:

  • All nodes in the cluster must be registered with the Red Hat Subscription Manager. See Chapter 2, Preparing Infrastructure for Installation for detailed steps.
  • RHMAP RPMs are installed as described in Section 2.1, “Install the RHMAP OpenShift Templates”.
  • Many Core components require direct outbound internet access to operate. Make sure that all nodes have outbound internet access before installation. If you use a proxy for outbound internet access, note the proxy IP address and port; you will need both during configuration of the installation.
  • Ansible version 2.2 is installed on a management node which has SSH access to the OpenShift cluster. See Section 2.2, “Configure Ansible for installing RHMAP components.” for more information.
  • An existing OpenShift Container Platform installation, version 3.3, 3.4 or 3.5.
  • A wildcard DNS entry must be configured for the OpenShift router IP address.
  • A trusted wildcard certificate must be configured for the OpenShift router. See Using Wildcard Certificates in the OpenShift documentation. A quick way to spot-check both the DNS entry and the certificate is shown after this list.
  • Administrative access to the OpenShift cluster using the oc CLI tool. This user must be able to:

    • Create a Project, and any resource typically found in a Project (e.g. DeploymentConfig, Service, Route)
    • Edit a Namespace definition
    • Add a Role to a User
    • Manage Nodes, specifically labels
  • The rhmap-installer runs a number of prerequisite checks which must pass before the installation can proceed. See RHMAP Installer Pre-Requisite Checks for details.
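
For example, you can spot-check the wildcard DNS entry and the router certificate before starting the installation. The cloudapps.example.com domain below is a placeholder; substitute your own router wildcard domain:

# Any hostname under the wildcard domain should resolve to the router IP
dig +short test.cloudapps.example.com

# The certificate served by the router should cover the wildcard domain and
# be signed by a trusted CA
echo | openssl s_client -connect test.cloudapps.example.com:443 \
  -servername test.cloudapps.example.com 2>/dev/null | openssl x509 -noout -subject -issuer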

For information on installation and management of an OpenShift cluster and its users, see the official OpenShift documentation.

5.2. Installation

The installation of a Core in OpenShift Container Platform results in all Core components running in Replication Controller-backed Pods, with Persistent Volumes for Core data.

The installation consists of several phases. Before the installation, you must prepare your OpenShift cluster:

  • Set up persistent storage — you need to create Persistent Volumes to cover the Persistent Volume requirements of the Core.
  • Label the nodes — if the Core components are to run on specific nodes, label those nodes accordingly.

After the OpenShift cluster is properly configured:

  • Install the Core
  • Verify the installation

5.2.1. Setting Up Persistent Storage

Note

Ensure that the persistent volumes are configured according to the OpenShift documentation for configuring PersistentVolumes. If you are using NFS, see the Troubleshooting NFS Issues section for more information.

The Core requires a number of persistent volumes to exist before installation. As a minimum, make sure your OpenShift cluster has the following persistent volumes in an Available state, with at least the amount of free space listed below:

Component               Minimum recommended size (Default)
---------------------   ----------------------------------
MongoDB                 25Gi
Metrics Data Backup     5Gi
FH SCM                  25Gi
GitLab Shell            5Gi
MySQL                   5Gi
Nagios                  1Gi
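
You can list the cluster’s persistent volumes with the oc CLI; the STATUS column should show Available, and the CAPACITY column should meet or exceed the sizes above:

# List persistent volumes and check the STATUS and CAPACITY columns
oc get pv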

To change the default storage requirements of a component:

  1. Update the persistent volume claims in the Core OpenShift templates as described in the Persistent Volume documentation.

    The following example JSON object definition shows how to create a 25Gi persistent volume with the ReadWriteOnce access mode:

    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "example-pv"
      },
      "spec": {
        "capacity": {
          "storage": "25Gi"
        },
        "accessModes": [
          "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Recycle",
        "nfs": {
          "path": "/path/to/example-pv",
          "server": "172.17.0.2"
        }
      }
    }
    Note

    For more information on the types of access modes, see the Persistent Volume Access Modes documentation.

  2. Review the persistent volume reclaim policy as described in the Persistent Volume Reclaim Policy documentation to decide which policy suits your requirements. This policy affects how your data is handled if a persistent volume claim is removed.
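
As an illustration, assuming the definition above was saved in a file named example-pv.json (a hypothetical file name), you can create the volume and, if required, switch its reclaim policy from Recycle to Retain so that the underlying data is kept when the associated claim is deleted:

# Create the persistent volume from the definition above
oc create -f example-pv.json

# Optionally change the reclaim policy to Retain
oc patch pv example-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'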

5.2.1.1. Root Squash Recommendations

Both GlusterFS and NFS have a root squash option available. However, the options do not function in the same manner. The following sections describe this setting for each storage type:

5.2.1.1.1. GlusterFS

Enabling root-squash prevents remote super-users from having super-user privileges on the storage system.

Recommended setting: Off

5.2.1.1.2. NFS

root_squash prevents remote super-users from changing other users’ files on the storage system.

Recommended setting: On
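
For reference, the recommended settings above correspond to configuration similar to the following. The export path, client address range, and GlusterFS volume name are placeholders for your own values:

# NFS: keep root_squash enabled in /etc/exports for the export backing the volume
/path/to/example-pv 172.17.0.0/16(rw,sync,root_squash)

# GlusterFS: turn root squashing off on the volume
gluster volume set myvolume server.root-squash off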

5.2.1.2. Persistent Storage Recommendations

OpenShift recommends using HostPath storage only for single-node testing. When installing the Core, use another driver such as GlusterFS or NFS instead of HostPath storage. For more information on the types of persistent volume, read the Types of Persistent Volumes documentation.
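
For example, a GlusterFS-backed persistent volume replaces the nfs stanza shown earlier with a glusterfs stanza. The endpoints object name and volume path below are placeholders:

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "example-gluster-pv"
  },
  "spec": {
    "capacity": {
      "storage": "25Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "glusterfs": {
      "endpoints": "glusterfs-cluster",
      "path": "myvolume"
    }
  }
}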

For detailed information on persistent volumes and how to create them, see Persistent Storage in the OpenShift Container Platform documentation.

5.2.2. Applying Node Labels

You can skip this labeling section if your OpenShift cluster has only a single schedulable node; in that case, all Core components run on that node.

Red Hat recommends that you deploy the Core components to dedicated nodes, separated from other applications (such as the RHMAP MBaaS and Cloud Apps). However, this deployment structure is not required.

For example, if you want the Core components deployed to two specific nodes, those two nodes should have a specific label, such as type=core. You can check which type labels are applied to all nodes with the following command:

oc get nodes -L type
NAME          STATUS                     AGE       TYPE
ose-master    Ready,SchedulingDisabled   27d       master
infra-1       Ready                      27d       infra
infra-2       Ready                      27d       infra
app-1         Ready                      27d       compute
app-2         Ready                      27d       compute
core-1        Ready                      27d
core-2        Ready                      27d
mbaas-1       Ready                      27d       mbaas
mbaas-2       Ready                      27d       mbaas
mbaas-3       Ready                      27d       mbaas

To add a type label to the core-1 and core-2 nodes, use the following commands:

oc label node core-1 type=core
oc label node core-2 type=core
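
To verify the labels before running the installer, list only the nodes that carry the new label:

# Only core-1 and core-2 should be listed
oc get nodes -l type=core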

5.2.3. Installing the Core Using Ansible

5.2.3.1. Setting Variables

The variables required for the installation of the RHMAP Core are set in /opt/rhmap/4.4/rhmap-installer/roles/deploy-core/defaults/main.yml. This file allows you to configure the RHMAP Core project for your own environment.
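
Alternatively, assuming you prefer not to edit the defaults file in place, you can keep your changes in a separate YAML file and pass it to ansible-playbook as extra variables, which take precedence over role defaults. Note that Ansible replaces whole dictionaries by default, so any dictionary you override (such as frontend) must include every key. The file name below is only an example:

# Pass overrides from a separate file; extra variables win over role defaults
ansible-playbook -i my-inventory-file playbooks/core.yml -e @my-core-vars.yml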

5.2.3.2. Configure Monitoring Components

Set up the monitoring parameters with the SMTP server details. This is required to enable email alerting via Nagios when a monitoring check fails. If you do not require email alerting or want to set it up later, the sample values can be used.

monitoring:
  smtp_server: "localhost"
  smtp_username: "username"
  smtp_password: "password"
  smtp_from_address: "nagios@example.com"
  rhmap_admin_email: "root@localhost"
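
If you want to confirm that the SMTP server is reachable before relying on email alerting, you can open a test connection from a host with comparable network access; the host name below is a placeholder:

# Check that the SMTP server answers and supports STARTTLS
openssl s_client -connect smtp.example.com:25 -starttls smtp </dev/null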

5.2.3.3. Configure Front End Components

  • SMTP server parameters

The platform sends emails for user account activation, password recovery, form submissions, and other events. Set the following variables as appropriate for your environment:

frontend:
  smtp_server: "localhost"
  smtp_username: "username"
  smtp_password: "password"
  smtp_port: "25"
  smtp_auth: "false"
  smtp_tls: "false"
  email_replyto: "noreply@localhost"
  • Git External Protocol

The default protocol is https. Change this to http if you use a self-signed certificate.

frontend:
  git_external_protocol: "https"
  • Build Farm Configuration

To determine the values for the builder_android_service_host and builder_ios_service_host variables, contact Red Hat Support and ask for the RHMAP Build Farm URLs appropriate for your region.

frontend:
  builder_android_service_host: "https://androidbuild.feedhenry.com"
  builder_ios_service_host: "https://iosbuild.feedhenry.com"

5.2.4. Running the Playbook to Deploy the RHMAP Core

To deploy the Core, run the following command from /opt/rhmap/4.4/rhmap-installer, referencing your own inventory file:

ansible-playbook -i my-inventory-file playbooks/core.yml
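
If you want to catch inventory or playbook problems before any changes are made, ansible-playbook can validate the playbook syntax first:

# Validate the playbook without executing any tasks
ansible-playbook -i my-inventory-file playbooks/core.yml --syntax-check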

The installer runs through all the tasks required to create the RHMAP Core project. It may take some time for all the Pods to start. Once all Pods are running correctly, the output of oc get pods in the Core project should be similar to the following:

NAME                   READY     STATUS      RESTARTS   AGE
fh-aaa-1-ey0kd         1/1       Running     0          3h
fh-appstore-1-ok76a    1/1       Running     0          6m
fh-messaging-1-isn9f   1/1       Running     0          3h
fh-metrics-1-cnfxm     1/1       Running     0          3h
fh-ngui-1-mosqj        1/1       Running     0          6m
fh-scm-1-c9lhd         1/1       Running     0          3h
fh-supercore-1-mqgph   1/1       Running     0          3h
gitlab-shell-1-wppga   2/2       Running     0          3h
memcached-1-vvt7c      1/1       Running     0          4h
millicore-1-pkpwv      3/3       Running     0          6m
mongodb-1-1-fnf7z      1/1       Running     0          4h
mysql-1-iskrf          1/1       Running     0          4h
nagios-1-mtg31         1/1       Running     0          5h
redis-1-wwxzw          1/1       Running     0          4h
ups-1-mdnjt            1/1       Running     0          4m

Once all Pods are running, the Ansible installer runs Nagios checks against the RHMAP Core project to ensure it is healthy.

You can also access and view the Nagios dashboard at this point. The status of these checks can be useful if something has gone wrong during installation and needs troubleshooting.

To access Nagios, follow the Accessing the Nagios Dashboard section in the Operations Guide.

See the Troubleshooting guide if any Pods are not in the correct state or if the installation failed before this point.

5.2.5. Verifying The Installation

  1. Log in to the Studio

    To retrieve the URL for the Core Studio, use the following command:

    oc get route rhmap --template "https://{{.spec.host}}"

    The Admin username and password are set in the millicore DeploymentConfig. To view them use this command:

    oc env dc/millicore --list | grep FH_ADMIN
    FH_ADMIN_USER_PASSWORD=password
    FH_ADMIN_USER_NAME=rhmap-admin@example.com

    See the Troubleshooting guide if you are unable to log in to the Studio.
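
As an additional smoke test, you can confirm that the Studio route responds over HTTPS; an HTTP 200 or a redirect status code indicates the route is serving:

# Print the HTTP status code returned by the Studio route
curl -sk -o /dev/null -w "%{http_code}\n" \
  "$(oc get route rhmap --template 'https://{{.spec.host}}')"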

5.3. Post-Installation Steps

  • Adjusting System Resource Usage of the Core - we strongly recommend that you adjust the system resource usage of Core components as appropriate for your production environment.
  • Optional: Set up centralized logging - see the Enabling Centralized Logging section for information about how to deploy a centralized logging solution based on Elasticsearch, Fluentd, and Kibana.

5.3.1. Accessing the Hosted Core

After Red Hat receives the purchase order for an RHMAP 4.4 subscription with a hosted Core, a member of the sales team internally requests a new RHMAP domain for accessing an instance of the RHMAP Core hosted by Red Hat.

Once the domain is created, a representative of the Red Hat Customer Enablement team will instruct you how to install the MBaaS.

The following steps for getting access to RHMAP Core are performed by a representative of the Red Hat Customer Enablement team:

  1. Creating a domain.

    The domain, such as customername.redhatmobile.com, hosts the RHMAP Core for a single customer.

  2. Creating an administrator account.

    An RHMAP administrator account is created in the domain, and the customer’s technical contact receives an activation e-mail which allows access to the domain using the new account.