Chapter 3. Creating the Environment
3.1. Overview
This reference architecture may be deployed with either a single master host or three master hosts. In both cases, it is assumed that ocp-master1 refers to one (or the only) Red Hat OpenShift master host and that the environment includes two Red Hat OpenShift node hosts with the host names of ocp-node1 and ocp-node2.
It is further assumed that Red Hat OpenShift has been installed by the root user and that a regular user has been created with basic access to the host machine, as well as access to Red Hat OpenShift through its identity providers.
3.2. Build and Deploy
3.2.1. Creating a New Project
Log in to a master host as the root user and create a new project, assigning the administrative rights of the project to the previously created Red Hat OpenShift user:
# oadm new-project msa \
    --display-name="OpenShift 3 MSA" \
    --description="This is a polyglot microservice architecture environment built on OCP v3.4" \
    --admin=ocuser
Created project msa
3.2.2. Red Hat OpenShift Login
Once the project has been created, all remaining steps can be performed as the regular Red Hat OpenShift user. Log in to the master host machine or switch the user, and use the oc utility to authenticate against Red Hat OpenShift:
# su - ocuser
$ oc login -u ocuser --certificate-authority=/etc/origin/master/ca.crt \
    --server=https://ocp-master1.hostname.example.com:8443
Authentication required for https://ocp-master1.hostname.example.com:8443 (openshift)
Username: ocuser
Password: PASSWORD
Login successful.

Using project "msa".

Welcome! See 'oc help' to get started.
The recently created msa project is the only project for this user, which is why it is automatically selected as the default working project of the user.
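To confirm the active project at any time, run oc project with no arguments; the output below is illustrative:

$ oc project
Using project "msa" on server "https://ocp-master1.hostname.example.com:8443".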
3.2.3. MySQL Images
This reference architecture includes two database services built on version 5.6 of the supported MySQL image.
To deploy the database services, use the new-app command and provide a number of required and optional environment variables along with the desired service name.
To deploy the product database service:
$ oc new-app -e MYSQL_USER=product -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=product -e MYSQL_ROOT_PASSWORD=passwd mysql --name=product-db
To deploy the sales database service:
$ oc new-app -e MYSQL_USER=sales -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=sales -e MYSQL_ROOT_PASSWORD=passwd mysql --name=sales-db
Database images created with this simple command are ephemeral and result in data loss in the case of a pod restart. Run the image with mounted volumes to enable persistent storage for the database. The data directory where MySQL stores database files is located at /var/lib/mysql/data.
Refer to OpenShift Container Platform 3 by Red Hat documentation to configure persistent storage.
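As a minimal sketch only, assuming a persistent volume or dynamic provisioner is already available in the cluster, a claim can be created and mounted over the data directory with oc volume. The volume name product-db-volume-1, the claim name, and the size are assumptions to verify against your environment:

$ oc volume dc/product-db           # list existing volumes to confirm the volume name
$ oc volume dc/product-db --add --overwrite \
    --name=product-db-volume-1 \
    --type=persistentVolumeClaim \
    --claim-name=product-db-claim \
    --claim-size=1G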
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
These commands create two Red Hat OpenShift services, each running MySQL in its own container. In each case, a MySQL user is created with the value specified by the MYSQL_USER attribute and the associated password. The MYSQL_DATABASE attribute results in a database being created and set as the default user database.
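To double-check the variables supplied to a service, list the environment of its deployment configuration; the sketch below assumes the product-db service created above and shows illustrative output:

$ oc env dc/product-db --list
# deploymentconfigs product-db, container product-db
MYSQL_USER=product
MYSQL_PASSWORD=password
MYSQL_DATABASE=product
MYSQL_ROOT_PASSWORD=passwd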
To monitor the provisioning of the services, use oc status. You can use oc get events for further information and troubleshooting.
$ oc status
In project OpenShift 3 MSA (msa) on server https://ocp-master1.hostname.example.com:8443

svc/product-db - 172.30.192.152:3306
  dc/product-db deploys openshift/mysql:5.6
    deployment #1 deployed 2 minutes ago - 1 pod

svc/sales-db - 172.30.216.251:3306
  dc/sales-db deploys openshift/mysql:5.6
    deployment #1 deployed 37 seconds ago - 1 pod

2 warnings identified, use 'oc status -v' to see details.
The warnings can be further inspected by using the -v flag. In this case, they simply indicate that the image has no readiness probe to verify that pods are ready to accept traffic or to ensure the deployment was successful.
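To clear these warnings, a readiness probe can be attached to each deployment configuration. A minimal sketch, assuming a plain TCP check against the MySQL port is acceptable for your environment; note that changing the deployment configuration triggers a new deployment:

$ oc set probe dc/product-db --readiness --open-tcp=3306
$ oc set probe dc/sales-db --readiness --open-tcp=3306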
Make sure the database services are successfully deployed before deploying other services that may depend on them. The service log clearly shows if the database has been successfully deployed. Use tab to complete the pod name, for example:
$ oc logs product-db-1-3drkp
---> 21:21:50 Processing MySQL configuration files ...
---> 21:21:50 Initializing database ...
---> 21:21:50 Running mysql_install_db ...
...omitted...
2016-06-04 21:21:58 1 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld: ready for connections.
Version: '5.6.30'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
3.2.3.1. Creating tables for the Product service
While the Sales service uses Hibernate to create the required tables in the MySQL database, the Product service uses SQL statements to access the database, so its tables must be created manually with the following SQL statements.
Log in to MySQL in the product-db pod with the MySQL client and use the following SQL statements to create the three tables.
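One way to reach the MySQL prompt, shown as a sketch (the generated pod name suffix will differ, and passwd matches the MYSQL_ROOT_PASSWORD value used earlier):

$ oc get pods -l app=product-db    # find the generated pod name
$ oc rsh product-db-1-3drkp        # pod name is illustrative
sh-4.2$ mysql -u root -ppasswd     # connect over the local socket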
USE product;

CREATE TABLE Product (
    SKU BIGINT NOT NULL AUTO_INCREMENT,
    DESCRIPTION VARCHAR(255),
    HEIGHT NUMERIC(5,2) NOT NULL,
    LENGTH NUMERIC(5,2) NOT NULL,
    NAME VARCHAR(255),
    WEIGHT NUMERIC(5,2) NOT NULL,
    WIDTH NUMERIC(5,2) NOT NULL,
    FEATURED BOOLEAN NOT NULL,
    AVAILABILITY INTEGER NOT NULL,
    IMAGE VARCHAR(255),
    PRICE NUMERIC(7,2) NOT NULL,
    PRIMARY KEY (SKU)
) AUTO_INCREMENT = 10001;

CREATE TABLE Keyword (
    KEYWORD VARCHAR(255) NOT NULL,
    PRIMARY KEY (KEYWORD)
);

CREATE TABLE PRODUCT_KEYWORD (
    ID BIGINT NOT NULL AUTO_INCREMENT,
    KEYWORD VARCHAR(255) NOT NULL,
    SKU BIGINT NOT NULL,
    PRIMARY KEY (ID)
);

ALTER TABLE PRODUCT_KEYWORD
    ADD INDEX FK_PRODUCT_KEYWORD_PRODUCT (SKU),
    ADD CONSTRAINT FK_PRODUCT_KEYWORD_PRODUCT FOREIGN KEY (SKU) REFERENCES Product (SKU);

ALTER TABLE PRODUCT_KEYWORD
    ADD INDEX FK_PRODUCT_KEYWORD_KEYWORD (KEYWORD),
    ADD CONSTRAINT FK_PRODUCT_KEYWORD_KEYWORD FOREIGN KEY (KEYWORD) REFERENCES Keyword (KEYWORD);
3.2.4. JBoss EAP 7 xPaaS Images
The Sales and Presentation services rely on Red Hat OpenShift S2I for Java EE applications and use the Red Hat xPaaS EAP Image. Verify the presence of this image stream in the openshift project as the root user, first switching from the default project to the openshift project:
# oc project openshift
Now using project "openshift" on server "https://ocp-master1.hostname.example.com:8443".
Query the configured image streams for the project:
# oc get imagestreams
NAME                    DOCKER REPO                                               TAGS                         UPDATED
...omitted...
jboss-eap70-openshift   registry.access.redhat.com/jboss-eap-7/eap70-openshift   latest,1.4,1.3 + 2 more...   3 weeks ago
...omitted...
3.2.5. Ruby Images
The Billing service relies on Red Hat OpenShift S2I for Ruby applications and uses the Ruby image. This reference architecture uses version 2.3 of the Ruby image.
3.2.6. Node.js Images
The Product service relies on Red Hat OpenShift S2I for Node.js applications and uses the Node.js image. This reference architecture uses version 4.0 of the Node.js image.
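If desired, verify these two image streams the same way as the EAP image stream in the previous section; output is omitted here:

# oc get imagestreams ruby nodejs -n openshift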
3.2.7. Building the Services
The microservice application for this reference architecture is made available in a public git repository at https://github.com/RHsyseng/MSA-Polyglot-OCP. This includes four distinct services, provided as subdirectories of this repository: Billing, Product, Sales, and Presentation.
The Sales and Presentation services are implemented in Java and use the JBoss EAP 7 xPaaS Images. The Billing service is implemented in Ruby, and the Product service is implemented in Node.js.
Start by building and deploying the Billing service, which has no dependencies on either a database or another service. Switch back to the regular user with the associated msa project and run:
$ oc new-app ruby~https://github.com/RHsyseng/MSA-Polyglot-OCP.git --context-dir=ruby_billing --name=billing-service
--> Found image dc510ba (10 weeks old) in image stream "ruby" in project "openshift" under tag "2.3" for "ruby"
Ruby 2.3
--------
Platform for building and running Ruby 2.3 applications
Tags: builder, ruby, ruby23, rh-ruby23
* A source build using source code from https://github.com/RHsyseng/MSA-Polyglot-OCP.git will be created
* The resulting image will be pushed to image stream "billing-service:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "billing-service"
* Port 8080/tcp will be load balanced by service "billing-service"
* Other containers can access this service through the hostname "billing-service"
--> Creating resources with label app=billing-service ...
imagestream "billing-service" created
buildconfig "billing-service" created
deploymentconfig "billing-service" created
service "billing-service" created
--> Success
Build scheduled, use 'oc logs -f bc/billing-service' to track its progress.
Run 'oc status' to view your app.
Once again, oc status can be used to monitor the progress of the operation. To monitor the build and deployment process more closely, find the running build and follow the build log:
$ oc get builds
NAME                TYPE      FROM      STATUS    STARTED         DURATION
billing-service-1   Source    Git       Running   1 seconds ago   1s
$ oc logs -f bc/billing-service
Once this service has successfully deployed, use similar commands to deploy the Product and Sales services, bearing in mind that both depend on a database and rely on the previously deployed MySQL services. Change any default database parameters as necessary by passing them as environment variables.
To deploy the Product service:
$ oc new-app -e MYSQL_USER=product -e MYSQL_PASSWORD=password nodejs~https://github.com/RHsyseng/MSA-Polyglot-OCP.git --context-dir=nodejs_product --name=product-service
To deploy the Sales service:
$ oc new-app -e MYSQL_USER=sales -e MYSQL_PASSWORD=password jboss-eap70-openshift~https://github.com/RHsyseng/MSA-Polyglot-OCP.git --context-dir=Sales --name=sales-service
Finally, deploy the Presentation service, which exposes a web tier and an aggregator that uses the three previously deployed services to fulfill the business request:
$ oc new-app jboss-eap70-openshift~https://github.com/RHsyseng/MSA-Polyglot-OCP.git --context-dir=Presentation --name=presentation
Note that the Maven build file for this project specifies a war file name of ROOT, which results in this application being deployed to the root context of the server.
Once all four services have successfully deployed, the presentation service can be accessed through a browser to verify application functionality. First create a route to expose this service to clients outside the Red Hat OpenShift environment:
$ oc expose service presentation --hostname=msa.example.com
NAME           HOST/PORT         PATH      SERVICE        LABELS             TLS TERMINATION
presentation   msa.example.com             presentation   app=presentation
The route tells the deployed router to load balance any requests with the given host name among the replicas of the presentation service. This host name should map to the IP address of the hosts where the router has been deployed, not necessarily where the service is hosted. For clients outside of this network and for testing purposes, simply modify your /etc/hosts file to map this host name to the IP address of the master host.
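For example, a hypothetical /etc/hosts entry on a client machine, where 10.19.114.100 stands in for the actual IP address of the master host:

10.19.114.100    msa.example.com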
3.2.8. Replicas
Describe the deployment configuration of your services and verify how many instances have been configured and deployed. For example:
$ oc describe dc product-service
Name: product-service
Created: 4 days ago
Labels: app=product-service
Latest Version: 1
Triggers: Config, Image(product-service@latest, auto=true)
Strategy: Rolling
Template:
Selector: app=product-service,deploymentconfig=product-service
Replicas: 1
Containers:
NAME IMAGE ENV
...omitted...
Replicas: 1 current / 1 desired
Selector: app=product-service,deployment=product-service-1,deploymentconfig=product-service
Labels: app=product-service,openshift.io/deployment-config.name=product-service
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
Based on this default configuration, each service has no more than one replica, meaning each Red Hat OpenShift service is backed by a single pod. If a service container or Red Hat OpenShift node fails, a new pod for the service is deployed to a healthy node, as long as there is an active master host. However, it is often desirable to balance load between multiple pods of a service and to avoid lengthy downtime while a failed pod is replaced.
Refer to the Red Hat OpenShift Developer Guide for deployment configuration details and properly specifying the number of desired replicas.
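As a sketch, the number of desired replicas is stored in the replicas field of the deployment configuration and can be changed by editing it in place:

$ oc edit dc product-service
...omitted...
spec:
  replicas: 1
...omitted...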
To manually scale a service and verify this feature, use the oc scale command and provide the number of desired replicas for a given deployment configuration, for example:
$ oc scale dc product-service --replicas=3
deploymentconfig "product-service" scaled
There will now be 3 separate pods running this service. To verify, query all pods configured for product-service:
$ oc get pods -l app=product-service
NAME                      READY     STATUS    RESTARTS   AGE
product-service-1-8ag1z   1/1       Running   0          40s
product-service-1-mhyfu   1/1       Running   0          40s
product-service-1-mjis5   1/1       Running   0          5m
Traffic received through an exposed route, or from an existing Red Hat OpenShift service that references another service by its service host name (as when the presentation service calls the other services), is handled by an internal proxy that balances the load between the available replicas and fails over when necessary.
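To see the individual pod endpoints that this proxy balances across, query the service endpoints; the IP addresses below are illustrative:

$ oc get endpoints product-service
NAME              ENDPOINTS                                       AGE
product-service   10.1.2.3:8080,10.1.4.5:8080,10.1.6.7:8080       4d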
3.3. Running the Application
3.3.1. Browser Access
To use the application, simply point your browser to the address exposed by the route. This address should ultimately resolve to the IP address of the Red Hat OpenShift host where the router is deployed.
Figure 3.1. Application Homepage before initialization
At this stage, the database tables are still empty and content needs to be created for the application to function properly.
3.3.2. Sample Data
The application includes a demo page that, when triggered, populates the database with sample data. To use this page and populate the sample data, point your browser to http://msa.example.com/demo.jsp:
Figure 3.2. Trigger sample data population
3.3.3. Featured Product Catalog
After populating the product database, the demo page redirects your browser to the route address, but this time you will see the featured products listed:
Figure 3.3. Application Homepage after initialization
3.3.4. User Registration
Anonymous users are constrained to browsing inventory, viewing featured products and searching the catalog. To use other features that result in calls to the Sales and Billing services, a valid customer must be logged in.
To register a customer and log in, click on the Register button in the top-right corner of the screen and fill out the registration form:
Figure 3.4. Customer Registration Form
After registration, the purchase button allows customers to add items to their shopping cart, visit the cart to check out, and review their order history.
For details on application functionality and how each feature is implemented by the provided services and exposed through their REST APIs, refer to the previously published reference architecture on Building microservices with JBoss EAP 7.