Chapter 3. Creating the Environment
3.1. Overview
This reference architecture can be deployed on either a production or trial environment. In both cases, it is assumed that middleware-master refers to one (or the only) OpenShift master host and that the environment includes two OpenShift node hosts named middleware-node1 and middleware-node2. It is further assumed that OpenShift has been installed by the root user and that a regular user has been created with basic access to the host machine, as well as access to OpenShift through its identity providers. A user with the cluster-admin role binding is also required for deploying and using the aggregated logging interface.
More information on prerequisite installation and configuration of the Red Hat Enterprise Linux Operating System, OpenShift Container Platform, and more can be found in Section 3: Creating the Environment of a previous Reference Architecture, Building JBoss EAP 7 Microservices on OpenShift Container Platform.
3.2. Build and Deploy
3.2.1. Creating a New Project
Utilize remote or direct terminal access to log in to the OpenShift environment as the user who will have ownership of the new project:
# oc login -u ocuser
Create the new project, specifying the name, the display name (shown in the OCP web console), and a meaningful description:
# oc new-project fis2-msa --display-name="FIS2 Microservice Architecture" --description="Event-driven and RESTful microservices via microservices API Gateway pattern on OpenShift Container Platform"
3.3. MySQL Images
This reference architecture uses version 5.6 of the supported MySQL image to build two services providing persistence to the product and sales services.
To deploy the database services, use the new-app command and provide a number of required and optional environment variables along with the desired service name.
To deploy the product database service:
$ oc new-app -e MYSQL_USER=product -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=product -e MYSQL_ROOT_PASSWORD=passwd mysql --name=product-db
To deploy the sales database service:
$ oc new-app -e MYSQL_USER=sales -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=sales -e MYSQL_ROOT_PASSWORD=passwd mysql --name=sales-db
Database services created with this simple command are ephemeral; data is lost if a pod restarts. Run the image with mounted volumes to enable persistent storage for the database. The data directory where MySQL stores database files is /var/lib/mysql/data.
Refer to OpenShift Container Platform 3 documentation to configure persistent storage.
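The exact configuration depends on the persistent volumes available in the cluster, but as a minimal sketch (assuming persistent volumes can be claimed, and using an illustrative claim name and size; product-db-volume-1 stands in for the emptyDir volume name generated by oc new-app), a claim can be created and swapped in for the existing emptyDir volume with oc set volume:
$ oc set volume dc/product-db
$ oc set volume dc/product-db --add --overwrite --name=product-db-volume-1 \
    -t pvc --claim-name=product-db-claim --claim-size=1Gi
The first command lists the existing volumes so that the generated emptyDir volume name can be confirmed before it is overwritten; the same steps apply to sales-db.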
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
These commands create two OpenShift services, each running MySQL in its own container. In each case, a MySQL user is created with the value specified by the MYSQL_USER attribute and the associated password. The MYSQL_DATABASE attribute results in a database being created and set as the default user database.
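Once the pods report Running (monitoring is described below), the credentials and default database can optionally be verified from inside a database pod. A quick check, assuming the generated pod name returned by oc get pods (the app=product-db label is applied by oc new-app; substitute your own pod name) and assuming the mysql client is on the image PATH, as in the supported MySQL 5.6 image:
$ oc get pods -l app=product-db
$ oc rsh product-db-1-3drkp
sh-4.2$ mysql -u product -ppassword -h 127.0.0.1 product -e "SHOW TABLES;"
An empty result is expected at this stage; the schemas and seed data are loaded later, as described in Section 3.7.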
To monitor the provisioning of the services, use oc status. You can use oc get events for further information and troubleshooting.
$ oc status
In project FIS2 Microservice Architecture (fis2-msa) on server https://middleware-master.hostname.example.com:8443
svc/product-db - 172.30.192.152:3306
dc/product-db deploys openshift/mysql:5.6
deployment #1 deployed 2 minutes ago - 1 pod
svc/sales-db - 172.30.216.251:3306
dc/sales-db deploys openshift/mysql:5.6
deployment #1 deployed 37 seconds ago - 1 pod
2 warnings identified, use 'oc status -v' to see details.
The warnings can be inspected further by using the -v flag. In this case, they simply indicate that the image defines no readiness probe to verify that pods are ready to accept traffic or that the deployment has succeeded.
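If desired, these warnings can be cleared by adding a basic readiness check to each database deployment configuration. A minimal sketch using oc set probe (a plain TCP check on the MySQL port with an illustrative initial delay; note that modifying the deployment configuration triggers a new deployment):
$ oc set probe dc/product-db --readiness --open-tcp=3306 --initial-delay-seconds=30
$ oc set probe dc/sales-db --readiness --open-tcp=3306 --initial-delay-seconds=30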
Make sure the database services are successfully deployed before moving on to building other services that depend on them. The service log indicates when the database has been successfully deployed. Use tab to complete the pod name, for example:
$ oc logs product-db-1-3drkp
---> 21:21:50 Processing MySQL configuration files ...
---> 21:21:50 Initializing database ...
---> 21:21:50 Running mysql_install_db ...
...omitted...
2016-06-04 21:21:58 1 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld: ready for connections.
Version: '5.6.30'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
3.4. JBoss EAP 7 xPaaS Images
The Sales, Product, and Presentation services rely on OpenShift Source-to-Image (S2I) for Java EE applications and use the Red Hat xPaaS EAP Image. You can verify the presence of this image stream in the openshift project (using a user with access to the openshift namespace):
# oc get imagestreams -n openshift
NAME                    DOCKER REPO                                              TAGS                           UPDATED
...omitted...
jboss-eap70-openshift registry.access.redhat.com/jboss-eap-7/eap70-openshift 1.4-30,1.4-28,1.4-26 + 2 more... 2 weeks ago
...omitted...
3.5. Fuse Integration Services 2.0 xPaaS Images
The Gateway, Billing and Warehouse services rely on the Red Hat JBoss Fuse Integration Services 2.0 image. As before, you can verify the presence of this image stream by inspecting the openshift namespace:
# oc get imagestreams -n openshift
NAME                 DOCKER REPO                                                  TAGS                   UPDATED
...
fis-java-openshift registry.access.redhat.com/jboss-fuse-6/fis-java-openshift 2.0,2.0-3,1.0 + 2 more... 2 weeks ago
...
3.6. Building the Services
The microservice application source code for this reference architecture is made available in a public git repository at https://github.com/RHsyseng/FIS2-MSA. It includes six distinct services, provided as subdirectories within the repository: Gateway, Billing, Product, Sales, Presentation and Warehouse. Templates that configure and deploy each of these modules are included in a seventh directory, yaml-templates. In order to proceed, a local copy of each template is needed on the master node. Either use curl to fetch each file, as shown below, or download and execute the fetch-templates.sh script, which creates and populates a local copy of the yaml-templates directory.
$ mkdir yaml-templates
$ cd yaml-templates
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/logging-deployer.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/billing-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/gateway-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/messaging-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/presentation-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/product-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/sales-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/warehouse-template.yaml
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/yaml-templates/logging-accounts.sh
$ chmod +x logging-accounts.sh
Alternatively, using the fetch-templates.sh script:
$ cd ~
$ curl -O -L https://raw.githubusercontent.com/RHsyseng/FIS2-MSA/master/fetch-templates.sh
$ chmod +x fetch-templates.sh
$ ./fetch-templates.sh
$ ls -la
total 52
drwxrwxr-x.  2 jary jary  272 May 23 01:18 .
drwx------. 10 jary jary 4096 May 23 01:18 ..
-rw-rw-r--.  1 jary jary 2855 May 23 01:18 billing-template.yaml
-rw-rw-r--.  1 jary jary 2855 May 23 01:18 gateway-template.yaml
-rwxrwxr-x.  1 jary jary 2251 May 23 01:18 logging-accounts.sh
-rw-rw-r--.  1 jary jary 9168 May 23 01:18 logging-deployer.yaml
-rw-rw-r--.  1 jary jary 5118 May 23 01:18 messaging-template.yaml
-rw-rw-r--.  1 jary jary 3097 May 23 01:18 presentation-template.yaml
-rw-rw-r--.  1 jary jary 2889 May 23 01:18 product-template.yaml
-rw-rw-r--.  1 jary jary 2847 May 23 01:18 sales-template.yaml
-rw-rw-r--.  1 jary jary 2897 May 23 01:18 warehouse-template.yaml
3.6.1. A-MQ Messaging Broker
The first component of the application to be deployed is the Messaging component, which is based on the amq62-basic image stream. The AMQ_USER and AMQ_PASSWORD parameters are available for overriding on the messaging, gateway, billing, and warehouse templates. Default values may be used by omitting the two -p arguments from the oc process command. If the values are overridden, the remaining three modules must be provided with the same credentials via the oc process command, as shown below.
$ oc process -f messaging-template.yaml -p AMQ_USER=mquser -p AMQ_PASSWORD=password | oc create -f -
serviceaccount "mqserviceaccount" created
rolebinding "mqserviceaccount-view-role" created
deploymentconfig "broker-amq" created
service "broker-amq-mqtt" created
service "broker-amq-stomp" created
service "broker-amq-tcp" created
service "broker-amq-amqp" created
This template creates a service account required by A-MQ, assigns the new account the 'view' role, and then creates four services, one for each protocol exposed by A-MQ. The service configurations within the template link the A-MQ protocol deployments together as dependencies so that they are displayed and scaled as a unified service:
annotations:
description: The broker's AMQP port.
openshift.io/generated-by: OpenShiftNewApp
service.alpha.openshift.io/dependencies: '[{"name":"broker-amq-stomp","namespace":"","kind":"Service"},{"name":"broker-amq-mqtt","namespace":"","kind":"Service"},{"name":"broker-amq-tcp","namespace":"","kind":"Service"}]'
Figure 3.1. Unified A-MQ Deployments in Web console

3.6.2. FIS 2.0 Services
Once the A-MQ deployments are complete, initialize the first module to utilize the Fuse Integration Services 2.0 image, the Billing Service, via template:
$ oc process -f billing-template.yaml | oc create -f -
buildconfig "billing-service" created
imagestream "billing-service" created
deploymentconfig "billing-service" created
service "billing-service" created
The oc status command can be used to monitor the progress of the operation. To monitor the build and deployment process more closely, find the running build and follow the build log:
$ oc get builds
NAME                TYPE      FROM      STATUS    STARTED         DURATION
billing-service-1   Source    Git       Running   1 seconds ago   1s
$ oc logs -f build/billing-service-1
Next, initialize the Warehouse Service:
$ oc process -f warehouse-template.yaml | oc create -f -
buildconfig "warehouse-service" created
imagestream "warehouse-service" created
deploymentconfig "warehouse-service" created
service "warehouse-service" created
Finally, initialize the Gateway Service:
$ oc process -f gateway-template.yaml | oc create -f -
buildconfig "gateway-service" created
imagestream "gateway-service" created
deploymentconfig "gateway-service" created
service "gateway-service" created
3.6.3. EAP Services
Begin by deploying the Product and Sales services, bearing in mind that both have a persistence dependency on the database services created earlier. If the credentials were altered from those shown when creating the database services, override the template defaults by providing the MYSQL_USER and MYSQL_PASSWORD values, as in the example below.
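A hypothetical override for the Product service, assuming the templates expose these values as parameters consistent with the description above (the values shown are for illustration only):
$ oc process -f product-template.yaml -p MYSQL_USER=product -p MYSQL_PASSWORD=password | oc create -f -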
To deploy the Product service:
$ oc process -f product-template.yaml | oc create -f -
buildconfig "product-service" created
imagestream "product-service" created
deploymentconfig "product-service" created
service "product-service" created
To deploy the Sales service:
$ oc process -f sales-template.yaml | oc create -f -
buildconfig "sales-service" created
imagestream "sales-service" created
deploymentconfig "sales-service" created
service "sales-service" created
Once again, oc status can be used to monitor progress. To monitor the build and deployment process more closely, find the running build and follow the build log:
$ oc get builds
NAME              TYPE      FROM      STATUS    STARTED         DURATION
sales-service-1   Source    Git       Running   1 seconds ago   1s
$ oc logs -f build/sales-service-1
Finally, deploy the Presentation service, which exposes a web tier that interacts with the microservices API Gateway. Specify the ROUTE_URL parameter, which is required for the route definition:
$ oc process -f presentation-template.yaml -p ROUTE_URL=presentation.bxms.ose | oc create -f -
buildconfig "presentation" created
imagestream "presentation" created
deploymentconfig "presentation" created
service "presentation" created
route "presentation" created
Note that the Maven build file for this project specifies a war file name of ROOT, which results in this application being deployed to the root context of the server. A route is also created so that the service is accessible externally via ROUTE_URL. The route tells the deployed router to load balance any requests with the given host name among the replicas of the presentation service. This host name should map to the IP address of the hosts where the router has been deployed, not necessarily where the service is hosted. For clients outside of this network and for testing purposes, simply modify your /etc/hosts file to map this host name to the IP address of the master host.
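For example, a hypothetical /etc/hosts entry on a client machine would map the route host name to the router (or trial master) host, with the address below serving only as a placeholder:
192.168.122.10   presentation.bxms.ose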
3.7. Populating Databases
Once all services have been instantiated and are running successfully, you can use the demo.jsp page included in the Presentation module to run a one-time initialization script that populates the product and sales databases with the schemas and seed data needed to run the eCommerce example:
Example: http://presentation.bxms.ose/demo.jsp
Figure 3.2. Presentation Landing Page After Population

The system is now ready to accept new user registrations and transactions.
3.8. Replicas
Describe the deployment configuration of your services and verify how many instances have been configured and deployed. For example:
$ oc describe dc product-service
Name: product-service
Created: 4 days ago
Labels: app=product-service
Latest Version: 1
Triggers: Config, Image(product-service@latest, auto=true)
Strategy: Rolling
Template:
Selector: app=product-service,deploymentconfig=product-service
Replicas: 1
Containers:
NAME IMAGE ENV
...
Replicas: 1 current / 1 desired
Selector: app=product-service,deployment=product-service-1,deploymentconfig=product-service
Labels: app=product-service,openshift.io/deployment-config.name=product-service
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
Based on this default configuration, each service has a single replica, meaning the service is backed by a single pod. In the event of a service container failure or node failure, a new service pod is deployed to a healthy node as long as there is an active master node. However, it is often desirable to balance load between multiple pods of a service to avoid lengthy downtime while failed containers are replaced.
Refer to the OpenShift Developer Guide for deployment configuration details and guidance on properly specifying the number of desired replicas.
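As a sketch of the declarative alternative, the desired replica count can also be set directly on the deployment configuration, for example with oc patch (three replicas chosen for illustration; the effect matches the oc scale command shown below):
$ oc patch dc product-service -p '{"spec":{"replicas":3}}'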
To manually scale a service and verify this feature, use the oc scale command and provide the number of desired replicas for a given deployment configuration:
$ oc scale dc product-service --replicas=3
deploymentconfig "product-service" scaled
There will now be 3 separate pods running this service. To verify, query all pods configured for product-service:
$ oc get pods -l app=product-service
NAME                      READY     STATUS    RESTARTS   AGE
product-service-1-8ag1z   1/1       Running   0          40s
product-service-1-mhyfu   1/1       Running   0          40s
product-service-1-mjis5   1/1       Running   0          5m
Requests received through the exposed route, or internal calls from another OpenShift service by its service/host name (e.g., Presentation calls to gateway-service), are handled by an internal proxy that balances the load between available replicas and fails over when necessary.
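To see the individual pod endpoints that back a service and receive this balanced traffic, inspect the service endpoints, for example:
$ oc get endpoints product-service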
3.9. Aggregated Logging
In order to view all container logs in a central location for ease of analysis and troubleshooting, utilize the Logging template to instantiate an EFK (Elasticsearch, Fluentd, Kibana) stack.
Begin by creating a new project to host your logging services:
$ oc new-project logging --display-name="Aggregated Container Logging" --description="Aggregated EFK Logging Stack"
Next, ensure the template required for logging has been installed during Ansible-based OpenShift installation:
$ oc describe template logging-deployer-template -n openshift
If the template is available, you’ll see a large amount of output about the template. If the template is not available, you’ll see the following error:
Error from server (NotFound): templates "logging-deployer-template" not found
Should the template not be available, create it from the logging-deployer.yaml file fetched earlier as part of the yaml-templates operations:
$ cd ~/yaml-templates
$ oc create -f logging-deployer.yaml
Prior to using the template to start the deployer pod, several account-related components need to be created. To simplify this process, a bash script has been provided in the yaml-templates directory. Run the script as a user with cluster-admin privileges, specifying the name of the newly created logging project as an argument:
$ cd ~/yaml-templates
$ ./logging-accounts.sh logging
Note that the script output may include some Error lines; these come from attempts to remove components that do not yet exist, made so that repeated executions are handled cleanly, and do not indicate a script failure. After executing the script, use the previously mentioned template to create the deployer pod, substituting KIBANA_HOSTNAME and PUBLIC_MASTER_URL as needed:
$ oc new-app logging-deployer-template --param KIBANA_HOSTNAME=efk-kibana.bxms.ose --param ES_CLUSTER_SIZE=2 --param PUBLIC_MASTER_URL=https://middleware-master.cloud.lab.eng.bos.redhat.com:8443
The deployer pod has now been created and should be monitored for completion:
$ oc get pods
NAME                     READY     STATUS    RESTARTS   AGE
logging-deployer-rpqz8   1/1       Running   0          45s
$ oc logs -f pod/logging-deployer-rpqz8
Once the deployer has completed its work, a "Success!" entry appears in the log, followed by a description of several follow-up commands. For this project's configuration, the only additional required step is to label each node used to host Fluentd, so that container logs are gathered:
$ oc label node/middleware-node1.cloud.lab.eng.bos.redhat.com logging-infra-fluentd=true
$ oc label node/middleware-node2.cloud.lab.eng.bos.redhat.com logging-infra-fluentd=true
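After labeling, you can confirm that a Fluentd pod has been scheduled on each labeled node; a quick check, assuming the component=fluentd label applied by the logging deployer:
$ oc get pods -l component=fluentd -o wide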
Once all pods have started, the aggregated logging interface can be reached via the parameterized URL provided earlier. If you are following the trial environment, be sure to add an entry to your /etc/hosts file mapping this address to the cluster master. Given the scope of cluster information available via the EFK stack, a user with the cluster-admin role is required to log in to the console.
Figure 3.3. Kibana Web console

