Chapter 4. Creating the Environment
4.1. Installation
4.1.1. Prerequisites
4.1.1.1. Overview
This reference architecture can be deployed in either a production or a trial environment. In both cases, it is assumed that rhmap-master1 refers to one (or the only) OpenShift master host and that the environment includes six OpenShift schedulable hosts with the host names of rhmap-node1 to rhmap-node6. Production environments would have at least 3 master hosts to provide High Availability (HA) resource management.
It is further assumed that OpenShift Container Platform has been installed by the root user and that a regular user has been created with basic access to the host machine, as well as access to OpenShift through its identity providers.
4.1.1.2. Sizing
Red Hat provides a sizing tool for Red Hat Mobile Application Platform. For the purpose of installing and configuring an environment, this reference application is assumed to be a Business to Employee (B2E) internal application, deployed on an on-premise platform that includes both the core and MBaaS components. We further assume that it qualifies as a single application for the use of up to 200 employees. Based on these parameters, the sizing tool suggests:
For your B2E subscription for 1 apps and Up to 200 employees, we recommend 3 nodes for your OpenShift master and 0 nodes for your OpenShift infrastructure, 0 nodes for your RHMAP MBaaS and 3 App nodes.
Each VM for those nodes must have a minimum configuration of:
vCPUS: 2
RAM(GB): 8
Storage(GB): 16
For a full RHMAP installation, including the Core MAP component, with the same B2E subscription of 1 app and up to 200 employees, the tool recommends 3 additional nodes.
Each VM for those Core MAP nodes must have a minimum configuration of:
vCPUS: 4
RAM(GB): 8
Storage(GB): 100
4.1.1.3. Infrastructure
Set up six virtual or physical machines to host the nodes of the OpenShift cluster, as suggested by the results of the sizing tool.
In the reference architecture environment, rhmap-node1, rhmap-node2, and rhmap-node3 are used for the MBaaS. The next three nodes, rhmap-node4, rhmap-node5, and rhmap-node6, host the core components.
Use root or another user with the required privileges to create a Linux user that will run the Ansible provisioning scripts. For example, to create the user, assign a password, and give it superuser privileges:
# useradd rhmapAdmin
# passwd rhmapAdmin
# usermod -aG wheel rhmapAdmin
Allow members of the wheel group to execute privileged actions without typing the password.
# visudo
Uncomment the line that specifies a NOPASSWD policy:
%wheel ALL=(ALL) NOPASSWD: ALL
Repeat this process on all masters and nodes.
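Optionally, verify the change on each host before continuing. This is a quick check assuming the rhmapAdmin user created above; the -n flag makes sudo fail instead of prompting if a password would still be required:
# su - rhmapAdmin -c 'sudo -n true' && echo "passwordless sudo OK"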
Next, generate SSH keys for the new user. The keys will be used to allow the user to access all the nodes from the master, without having to provide a password. Press enter to accept default values or provide empty values for the prompts:
# su - rhmapAdmin
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rhmapAdmin/.ssh/id_rsa):
Created directory '/home/rhmapAdmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rhmapAdmin/.ssh/id_rsa.
Your public key has been saved in /home/rhmapAdmin/.ssh/id_rsa.pub.
The key fingerprint is:
d0:d2:d9:ed:31:63:28:d0:c0:4c:91:8b:34:d5:03:b8 rhmapAdmin@rhmap-master1.xxx.example.com
The key's randomart image is:
+--[ RSA 2048]----+
|          *BB    |
|       + +++o o  |
|      . +o.=.o * |
|      E .o . o + |
|        S .      |
|                 |
|                 |
|                 |
|                 |
+-----------------+
Now that the keys are generated, copy the public key to all machines to allow the holder of the private key to log in:
$ for host in rhmap-master1 rhmap-node1 rhmap-node2 rhmap-node3 rhmap-node4 rhmap-node5 rhmap-node6; \
do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
done
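Optionally, verify that key-based login now works for every host. This sketch reuses the same host list; BatchMode causes ssh to fail rather than fall back to a password prompt:
$ for host in rhmap-master1 rhmap-node1 rhmap-node2 rhmap-node3 rhmap-node4 rhmap-node5 rhmap-node6; \
do ssh -o BatchMode=yes $host hostname; \
done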
4.1.1.5. OpenShift Configuration
Create an OpenShift user, optionally with the same name, to use for the provisioning. Assuming the use of HTPasswd as the authentication provider:
$ sudo htpasswd -c /etc/origin/master/htpasswd rhmapAdmin
New password: PASSWORD
Re-type new password: PASSWORD
Adding password for user rhmapAdmin
Grant OpenShift admin and cluster admin roles to this user:
$ sudo oadm policy add-cluster-role-to-user admin rhmapAdmin
$ sudo oadm policy add-cluster-role-to-user cluster-admin rhmapAdmin
At this point, the new OpenShift user can be used to sign in to the cluster through the master server:
$ oc login -u rhmapAdmin -p PASSWORD --server=https://rhmap-master1.xxx.example.com:8443
Login successful.
4.1.1.5.1. Wildcard Certificate
Simple installations of Red Hat OpenShift Container Platform 3 typically include a default HAProxy router service. Use a commercially signed wildcard certificate, or generate a self-signed certificate, and configure a new router to use it.
To generate a self-signed certificate using the certificate included in the OpenShift Container Platform installation:
$ CA=/etc/origin/master
$ sudo oadm ca create-server-cert --signer-cert=$CA/ca.crt \
    --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
    --hostnames='*.rhmap.xxx.example.com' \
    --cert=cloudapps.crt --key=cloudapps.key
$ sudo cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
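Optionally, inspect the generated certificate to confirm the wildcard subject and validity dates before wiring it into the router; this is a quick check using the standard openssl tool:
$ sudo openssl x509 -in cloudapps.crt -noout -subject -dates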
Delete the previously configured router:
$ oc delete all -l router=router -n default
$ oc delete secret router-certs -n default
Recreate the HAProxy router with the newly generated certificate:
$ sudo oadm router --default-cert=cloudapps.router.pem --service-account=router
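After recreating the router, confirm that the new router pod reaches the Running state. Assuming the default router name, its pods carry the same router=router label used for the deletion above:
$ oc get pods -l router=router -n default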
4.1.2. Red Hat Mobile Application Platform
4.1.2.1. Preparations
Configure and run the provisioning steps for Red Hat Mobile Application Platform from the rhmap-master1 machine.
Enable the yum repository for Red Hat Mobile Application Platform 4.4:
$ sudo subscription-manager repos --enable="rhel-7-server-rhmap-4.4-rpms"
Install the OpenShift templates for Red Hat Mobile Application Platform, as well as the Ansible playbooks and configuration files, through its package:
$ sudo yum install rhmap-fh-openshift-templates
Configure an Ansible inventory file for the reference architecture environment. Several templates are provided as part of the package installed in the previous step. For this environment, use /opt/rhmap/4.4/rhmap-installer/inventories/templates/multi-node-example as a starting point. The inventory file used for this environment is provided as an appendix to this document.
The cluster_hostname variable is set to the OpenShift cluster subdomain. The domain_name value is used to build the address of the studio web console.
The specified user for the ansible_ssh_user variable is used to log in to the required machines during installation. The OpenShift user and password are also specified in this file.
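For orientation, the variables discussed above appear in the inventory as simple key=value lines. The values below are illustrative sketches for this environment only; the complete, authoritative file is reproduced in the appendix:
cluster_hostname=rhmap.xxx.example.com
domain_name=app
ansible_ssh_user=rhmapAdmin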
To run the provided Ansible playbooks, change to the /opt/rhmap/4.4/rhmap-installer/ directory; the roles directory is expected in the current directory when running the playbooks. Save the modified host inventory file detailed in the appendix in this directory as rhmap-hosts.
Download the required Docker images in advance to speed up the platform setup process and avoid timeouts. Run the two following playbooks to download the images:
$ ansible-playbook -i rhmap-hosts playbooks/seed-images.yml -e "project_type=core" -e "rhmap_version=4.4"
$ ansible-playbook -i rhmap-hosts playbooks/seed-images.yml -e "project_type=mbaas" -e "rhmap_version=4.4"
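Once the seeding playbooks complete, the downloaded images can be spot-checked on any node. This assumes direct access to the Docker daemon on that node:
$ sudo docker images | grep -E 'rhmap|feedhenry'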
4.1.2.2. Core Installation
4.1.2.2.1. Persistent Volumes
Create an OpenShift persistent volume for each of the mounts designated for the core component. This can be done by first creating a yaml or json file for each volume, for example:
{
    "kind": "PersistentVolume",
    "apiVersion": "v1",
    "metadata": {
        "name": "git-data"
    },
    "spec": {
        "capacity": {
            "storage": "5Gi"
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Recycle",
        "nfs": {
            "path": "/mnt/rhmapcore/gitlabshell",
            "server": "10.19.137.71"
        }
    }
}
Create a similar file for each logical volume, with the corresponding storage capacity and mount address. The name is arbitrary, and the association between the claiming component and the persistent volume is not enforced.
To create the persistent volume based on such a file, simply use the create command of the oc utility:
$ oc create -f git-data-pv.json
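Verify that each persistent volume was created and shows an Available status before provisioning:
$ oc get pv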
4.1.2.2.2. Node Labels
Label the nodes intended to host the core components accordingly. These labels can be used by OpenShift commands to filter the nodes applicable to a given operation.
$ for i in {4..6}; do oc label node rhmap-node$i.xxx.example.com type=core; done
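The label assignment can be confirmed by listing only the matching nodes:
$ oc get nodes -l type=core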
4.1.2.2.3. Configuration
Configure the Red Hat Mobile Application Platform core platform by editing the Ansible defaults file, located at /opt/rhmap/4.4/rhmap-installer/roles/deploy-core/defaults/main.yml.
When using a self-signed certificate, this file must be edited to use http as the git protocol.
git_external_protocol: "http"
Also note that the Build Farm configuration has sample values that do not function. To determine the values for the builder_android_service_host and builder_iphone_service_host variables, contact Red Hat Support and ask for the RHMAP Build Farm URLs that are appropriate for your region.
builder_android_service_host: "https://androidbuild.feedhenry.com"
builder_iphone_service_host: "https://iosbuild.feedhenry.com"
4.1.2.2.4. Provisioning
With the configuration in place, simply run the playbook to provision the Red Hat Mobile Application Platform core platform:
$ ansible-playbook -i rhmap-hosts playbooks/core.yml
The provisioning script tries to verify the router certificate and match the subject name against the provided cluster name. The grep command used for this check may not function as expected in some shell environments. The user is given the option to disregard a failed check and continue the installation.
The Ansible provisioning script creates a test project with a MySQL pod to verify that the persistent volume can be used. This project is subsequently deleted, and the persistent volume is recycled and made available for the actual pod. However, recycling a persistent volume can take time; if it does not complete quickly enough, the pods will have one fewer persistent volume than expected, and the pod that fails to bind may end up requesting a higher capacity than the previously busy volume can provide. The script already includes a 20-second pause, configured in /opt/rhmap/4.4/rhmap-installer/roles/setup/vars/core.yml, but this duration may not be sufficient for some environments, in which case it can be increased in that file.
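If provisioning fails at this stage, the state of the volumes can be watched while the recycler runs; each volume should return to the Available state before the dependent pods are created:
$ oc get pv -w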
4.1.2.2.5. Verification
Verify that the RHMAP core has been provisioned successfully by first listing the pods in the project and ensuring that they are all in the Running state:
$ oc get pods -n rhmap-core
NAME READY STATUS RESTARTS AGE
fh-aaa-1-35cm3 1/1 Running 0 1m
fh-appstore-1-94t68 1/1 Running 0 1m
fh-messaging-1-jtxb5 1/1 Running 0 1m
fh-metrics-1-qvmbp 1/1 Running 0 1m
fh-ngui-1-gkznl 1/1 Running 0 1m
fh-scm-1-hth5b 1/1 Running 0 1m
fh-supercore-1-sl81q 1/1 Running 0 1m
gitlab-shell-1-q90xt 2/2 Running 1 1m
memcached-1-pbsc6 1/1 Running 0 1m
millicore-1-hccdv 3/3 Running 0 1m
mongodb-1-1-c9x66 1/1 Running 0 1m
mysql-1-kv8n4 1/1 Running 0 1m
nagios-1-38pv1 1/1 Running 0 1m
redis-1-8gtcp 1/1 Running 0 1m
ups-1-3t6js 1/1 Running 0 1m
For more confidence that all core services are running properly, use the included Nagios console. Find the URL of the Nagios console by querying the created OpenShift route:
$ oc get route nagios -n rhmap-core
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
nagios nagios-rhmap-core.rhmap.xxx.example.com nagios <all> edge/Allow None
Point your browser to this address. The service uses basic authentication; while the username for accessing Nagios is statically set to nagiosadmin, the password is generated by OpenShift during deployment. Query this password from the pod environment:
$ oc env dc nagios --list -n rhmap-core | grep NAGIOS
NAGIOS_USER=nagiosadmin
NAGIOS_PASSWORD=7Ppw0t2l78
Once logged in, use the left panel menu and click Services. Verify that the status of all services is green and OK.
Figure 4.1. Nagios admin console, RHMAP Core service health

4.1.2.3. MBaaS Installation
4.1.2.3.1. Persistent Volumes
Create an OpenShift persistent volume for each of the mounts designated for the MBaaS component. Again, create a yaml or json file for each volume, for example:
{
    "kind": "PersistentVolume",
    "apiVersion": "v1",
    "metadata": {
        "name": "mbaas-mongodb-claim-1"
    },
    "spec": {
        "capacity": {
            "storage": "50Gi"
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Recycle",
        "nfs": {
            "path": "/mnt/mbaas/mongo1",
            "server": "10.19.137.71"
        },
        "claimRef": {
            "namespace": "rhmap-3-node-mbaas",
            "name": "mongodb-claim-1"
        }
    }
}
Similar to before, to create the persistent volume based on the file, simply use the create command of the oc utility:
$ oc create -f mbaas-mongo1-pv.json
4.1.2.3.2. Node Labels
Label the nodes intended to host the MBaaS components accordingly. These labels can be used by OpenShift commands to filter the nodes applicable to a given operation. Also label the nodes sequentially with MBaaS identifiers 1 through 3:
$ for i in {1..3}; do oc label node rhmap-node$i.xxx.example.com type=mbaas; done
$ for i in {1..3}; do oc label node rhmap-node$i.xxx.example.com mbaas_id=mbaas$i; done
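Confirm both labels, including the per-node MBaaS identifier, with the -L option, which adds a column for the given label:
$ oc get nodes -l type=mbaas -L mbaas_id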
4.1.2.3.3. Configuration
Configure the MBaaS platform by editing the Ansible defaults file, located at /opt/rhmap/4.4/rhmap-installer/roles/deploy-mbaas/defaults/main.yml, which contains SMTP and similar configuration.
4.1.2.3.4. Provisioning
With the configuration in place, simply run the playbook to provision the Red Hat Mobile Application Platform MBaaS:
$ ansible-playbook -i rhmap-hosts playbooks/3-node-mbaas.yml
Once again, refer to the notes above for potential provisioning issues.
4.1.2.3.5. Verification
Verify that the RHMAP MBaaS components have been provisioned successfully by first listing the pods in the project and ensuring that they are all in the Running state, except for the mongodb-initiator, which exists only to initialize the MongoDB databases and completes its task after doing so:
$ oc get pods -n rhmap-3-node-mbaas
NAME READY STATUS RESTARTS AGE
fh-mbaas-1-6f3vq 1/1 Running 4 1m
fh-mbaas-1-gn7b6 1/1 Running 5 1m
fh-mbaas-1-xcr05 1/1 Running 4 1m
fh-messaging-1-654nk 1/1 Running 4 1m
fh-messaging-1-7qtns 1/1 Running 4 1m
fh-messaging-1-805vc 1/1 Running 4 1m
fh-metrics-1-7g7gd 1/1 Running 4 1m
fh-metrics-1-pdz40 1/1 Running 5 1m
fh-metrics-1-w2zs5 1/1 Running 5 1m
fh-statsd-1-hhn6k 1/1 Running 0 1m
mongodb-1-1-cj059 1/1 Running 0 1m
mongodb-2-1-m1fg0 1/1 Running 0 1m
mongodb-3-1-d83dh 1/1 Running 0 1m
mongodb-initiator 0/1 Completed 0 1m
nagios-1-93vp1 1/1 Running 0 1m
You can also use Nagios again. Find the URL of the Nagios console by querying the created OpenShift route:
$ oc get route nagios -n rhmap-3-node-mbaas
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
nagios nagios-rhmap-3-node-mbaas.rhmap.xxx.example.com nagios <all> edge/Allow None
The credentials, once again, are available from the pod environment:
$ oc env dc nagios --list -n rhmap-3-node-mbaas | grep NAGIOS
NAGIOS_USER=nagiosadmin
NAGIOS_PASSWORD=cYxMr1taIU
Similar to the Nagios console for the RHMAP core services, all 15 services should be in a green OK status.
A more thorough way of verifying the health of the MBaaS services is through a REST call:
$ curl $(oc get route mbaas -n rhmap-3-node-mbaas --template "{{.spec.host}}")/sys/info/health
{
    "status": "ok",
    "summary": "No issues to report. All tests passed without error.",
    "details": [
        {
            "description": "Check fh-statsd running",
            "test_status": "ok",
            "result": {
                "id": "fh-statsd",
                "status": "OK",
                "error": null
            },
            "runtime": 11
        },
        {
            "description": "Check fh-messaging running",
            "test_status": "ok",
            "result": {
                "id": "fh-messaging",
                "status": "OK",
                "error": null
            },
            "runtime": 24
        },
        {
            "description": "Check fh-metrics running",
            "test_status": "ok",
            "result": {
                "id": "fh-metrics",
                "status": "OK",
                "error": null
            },
            "runtime": 27
        },
        {
            "description": "Check Mongodb connection",
            "test_status": "ok",
            "result": {
                "id": "mongodb",
                "status": "OK",
                "error": null
            },
            "runtime": 407
        }
    ]
}
Verify that all 4 services return an OK status.
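To script this verification rather than reading the JSON by eye, the response can be filtered with a tool such as jq, if it is available on the host; this sketch prints one status line per component:
$ curl -s $(oc get route mbaas -n rhmap-3-node-mbaas --template "{{.spec.host}}")/sys/info/health \
    | jq -r '.details[] | .result.id + ": " + .result.status'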
4.1.2.4. Creating an Environment
Before creating a project and starting with development, an environment and an MBaaS target are required. Discover the web address of the RHMAP studio by querying its associated OpenShift route:
$ oc get route rhmap -n rhmap-core
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
rhmap app.rhmap.xxx.example.com rhmap-proxy server-http-port edge/Allow None
The password for the studio is generated by OpenShift during deployment. It can be retrieved from the millicore deployment config environment:
$ oc env dc millicore --list -n rhmap-core | grep FH_ADMIN
FH_ADMIN_USER_PASSWORD=3QlIf7RBqwmRa4yrH2rNYTKOSC
FH_ADMIN_USER_NAME=rhmap-admin@example.com
Use these credentials to log in to the studio. Once signed in, navigate through the Admin menu to MBaaS Targets:
Figure 4.2. MBaaS Targets

Click the button to create an MBaaS target. This requires providing a number of values, some of which are discretionary names and labels, but others are generated by OpenShift during deployment and should be looked up.
Find the values for MBaaS Service Key, MBaaS URL, and Nagios URL:
$ oc env dc fh-mbaas --list -n rhmap-3-node-mbaas | grep FHMBAAS_KEY
FHMBAAS_KEY=rpmAV2Mr0RCNegEBb6MM0xv3wq2QW7TWmHBXUHPG
$ echo "https://"$(oc get route/mbaas -n rhmap-3-node-mbaas -o template --template {{.spec.host}})
https://mbaas-rhmap-3-node-mbaas.rhmap.xxx.example.com
$ echo "https://"$(oc get route/nagios -n rhmap-3-node-mbaas -o template --template {{.spec.host}})
https://nagios-rhmap-3-node-mbaas.rhmap.xxx.example.com
Use the above, and assign other valid values, to fill out the form to create a new MBaaS target:
Figure 4.3. Create MBaaS Target

Once an MBaaS target is successfully created, proceed to create an environment that references it:
Figure 4.4. Environments

Each environment is associated with an OpenShift project, which must be created with a user's credentials. While logged in to OpenShift as rhmapAdmin, use the following command to discover and note the user's security token:
$ oc whoami -t
rPTnAjyYFxczPMd5vGXyaOjALn1cYEdCiSVCzGrWnhA
Use this token to create the environment:
Figure 4.5. Create Environment

Verify that an OpenShift project has been created for this environment:
$ oc get projects
NAME DISPLAY NAME STATUS
default Active
kube-system Active
logging Active
management-infra Active
openshift Active
openshift-infra Active
rhmap-3-node-mbaas Active
rhmap-app-dev RHMAP Environment: app-Development Environment Active
rhmap-core Active
4.2. Deployment
4.2.1. E-Commerce Services OpenShift Project
4.2.1.1. Create Project
To start, a suite of back-end logic microservices will be installed in a secondary OpenShift Container Platform project on the same cluster that houses the Mobile Application Platform installation.
Utilize remote or direct terminal access to log in to the OpenShift environment as the user who will create, and have ownership of, the new project:
$ oc login -u ocuser
Create the new project, which will house the microservices suite:
$ oc new-project ecom-services --display-name="E-Commerce Services Suite" --description="Back-end logic microservice suite"
4.2.1.2. Template Population
Within the new project, instantiate the provided YAML template to configure and deploy the full services suite:
$ oc new-app -f https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/ecom-svcs/project-template.yaml
Nothing further is required of the user following execution of the template to complete the installation. As a whole, the necessary builds and deployments can take 10 to 20 minutes to complete.
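Progress can be followed while the template builds and deploys; for example, watch the builds in the project until they all report Complete:
$ oc get builds -n ecom-services -w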
4.2.1.3. Persistence Data Population
Once all services are deployed and running, instantiate the e-commerce data set via a GET request to the gateway service route:
$ curl -i http://ecom.rhmap.xxx.example.com/demo/testApi
HTTP/1.1 200 OK
...
4.2.2. E-Commerce Mobile App Platform Project
4.2.2.1. Prerequisites
Before beginning, download the following artifact. The archive contains a series of source code zip artifacts, which are used to import the various apps and services. Unzip the top-level artifact, but don’t unzip the child artifacts contained within.
$ curl -O -L https://github.com/RHsyseng/RHMAP/raw/master/import-artifacts.zip
$ unzip import-artifacts.zip
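To confirm that the child artifacts are present without extracting them again, list the archive contents:
$ unzip -l import-artifacts.zip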
4.2.2.2. Parent Project
Within App Studio, the Mobile Application Platform web interface, select the Projects widget:
Figure 4.6. App Studio

Press the New Project button, then select Empty Project. Provide the project name 'ecom-refarch', then click Create.
Figure 4.7. Empty Project Creation

Figure 4.8. Project Creation Complete

Use the Finish button after completion to navigate to the project dashboard.
Figure 4.9. Project Dashboard

4.2.2.3. MBaaS Health Monitor
From the top of the screen, select Services and APIs to start service creation. Once there, click the Provision MBaaS Service/API button to start the creation process.
Next, select Import Existing Service, name the service ecom-svcs-health, and click Next.
For Import From, select Zip File, provide the ecom-status.zip file downloaded in previous steps, then click Next, followed by Finish.
From the new service app’s Details page, scroll down and mark Make this Service Public to all Projects and Services, then Save Service.
Figure 4.10. Marking Service Public

If prompted, confirm pushing of environment variables, then from the left-hand Navigation panel, select Deploy.
Use the Deploy Cloud App button to instantiate a new build and deployment of the service.
Figure 4.11. Deploy Cloud App

After initiating deployment, use the left-hand navigation to return to Details.
The status of the project will change from Stopped to Running once the triggered build and deployment complete.
Once running, copy the 'Current Host' URL value, open a new tab, append /admin to the copied value, and navigate to the resulting address.
Figure 4.12. Health Monitor Service Details

If /admin is not appended to the URL value, the interface will disable the Add New button and restrict functionality of existing checks to read-only. To create new checks or kick off existing ones, the admin context is required.
Use the Add New button in the upper right-hand corner, configure the modal prompt as shown below, then click Create:
Figure 4.13. Gateway Service Check Configuration

Use the Add New button once again to create a check for Billing Service, this time utilizing the HTTP(s) protocol option instead of TCP, as shown below:
Figure 4.14. Billing Service Check Configuration

Create 3 additional checks in the same fashion for the sales-service, product-service, and warehouse-service, altering the HTTP(s) URL to match the service destination of each.
Figure 4.15. Health Monitor Service

At this point, click the checkbox next to all 5 services and then click the Summary Url button. Copy the URL somewhere close at hand, as it will be required further on.
Lastly, use the Projects link in the upper navigation bar to navigate to the ecom-refarch project dashboard. Click the plus button in the upper right corner of the MBaaS Services widget and click the service to associate it to your project.
Figure 4.16. Associating the Service

Association of services is not limited to a single project, but rather allows applications within linked projects to call the service via the fh.mbaas() function of the cloud API. The cloud application built below uses the service’s exposed URL value rather than a direct API call, thus the MBaaS service does not require association to the project. Since this is atypical of most MBaaS service access, the link is still established as part of the example.
4.2.2.4. Cloud Application
Return to the project dashboard, and use the plus button to add a new Cloud Code App.
Just as with the Health Monitor service creation, choose to Import Existing App, name the app ecom-cloud, provide the previously acquired ecom-cloud.zip file, and follow the prompts until reaching the application’s dashboard.
Using the left-hand navigation, go to Environment Variables, then click the Add Variable button.
Provide a new variable with the name MBAAS_HEALTH_URL, and use the URL copied from the Monitoring service in the previous step as the variable value. Click Push Environment Variables.
Navigate to Deploy and start a new deployment of the cloud app. Monitor the application dashboard Details screen for results.
Once deployment is complete, it is possible to reach your cloud application directly in a browser at the given Current Host address. However, since the cloud application is not intended as a public-facing tool, the page simply indicates the general availability of the application.
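A quick availability check from the command line is also possible; substitute the Current Host value shown on the Details screen for the placeholder below:
$ curl -sI https://<current-host>/ | head -1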
4.2.2.5. Mobile Client Application
Return to the project dashboard and use the plus button on the Apps widget.
As before, choose to Import Existing App of type Cordova, name the application ecom-shop, and use the provided artifact with the same name as the import source. The various steps for integration are presented and can be reviewed, but no action is necessary. Follow the prompts until taken to the application Details page as before.
Figure 4.17. App Import from Zip

When following the prompted steps, the user may encounter a suggested link for integration of the SDK which resolves as a 404 error. At the time of writing, the issue has been road-mapped for correction. For the purpose of this document, the intended reference material is informative, although not essential, as the steps suggested have already been completed prior to assembling the source code artifacts.
From Details, select the Connections link found in the horizontal navigation bar, then click the ecom-shop link inside the Client App column. Copy the SDK configuration JSON contents in the bottom box to clipboard, then hit Cancel to close the modal window.
Figure 4.18. Connection Details

Return to the ecom-shop dashboard, navigate to Editor on the left-hand side, expand the www directory, then select the fhconfig.json file. Replace the contents of the file with the clipboard contents attained from the Connections window and then click File > Save.
By default, RHMAP Studio will complete the above modification to fhconfig.json when a build occurs. Details have been included here so that the reader is made aware of the necessary change and source of information required.
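For reference, the pasted configuration takes a shape similar to the sketch below. The key names reflect a typical RHMAP SDK configuration file and are shown here only as an illustration; the authoritative contents are exactly what the Connections window provides:
{
    "host": "https://app.rhmap.xxx.example.com",
    "projectid": "...",
    "appid": "...",
    "appkey": "...",
    "connectiontag": "..."
}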
Navigate to Build from the left-hand bar, select the platform of your choice, and use Build to create a native application artifact for the ecom-shop project.
Figure 4.19. Build Screen

After the build completes, a modal window presents both QR code and direct download link for fetching of the resulting artifact.
Figure 4.20. Download Artifact

4.2.2.6. Web Portal Client Application
Once more, return to the project dashboard and add a new client application under Apps.
As with the ecom-shop application, begin the import process for the provided ecom-portal artifact file, this time using the Web App type instead of Cordova. As before, follow the prompts to completion. Lastly, visit Connections to capture the ecom-portal details, which will differ from those provided for ecom-shop, and replace the contents of the src/fhconfig.json file with the new connection information.
Once finished, navigate to Deploy and start a new deployment for the cloud portal app. Monitor the Details page for application state change.
Once complete, the Current Host URL can be used to navigate to the portal landing page, showing a list of featured products. Successful loading and visibility of this page indicates that all cloud and client application setup is complete and properly functioning.
