Chapter 4. Design and Development
4.1. Overview
Red Hat OpenShift provides an ideal platform for deploying, hosting and managing microservices. By deploying each service as an individual Docker container, Red Hat OpenShift helps isolate each service and decouples its lifecycle and deployment from that of other services. Red Hat OpenShift can configure the desired number of replicas for each service and provide intelligent scaling to respond to varying load.
This sample application uses the Source-to-Image (S2I) mechanism to build and assemble reproducible container images from the application source and on top of supported Red Hat OpenShift images.
This reference architecture builds on previously published reference architecture papers. The Sales and Presentation services remain unchanged; refer to Building microservices with JBoss EAP 7 for details on the design and implementation of these two services. The Billing and Product services are reimplemented, using Ruby and Node.js respectively. Further information about the design and implementation of these two services follows.
4.2. Application Structure
The source code for the sample application is checked into a public GitHub repository. An aggregator POM file is provided at the root of the repository to build the two Java projects (Sales and Presentation) if needed, although this build file is neither required nor used by Red Hat OpenShift.
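An aggregator POM of this kind simply declares pom packaging and lists the Java projects as modules; a minimal sketch (the groupId, artifactId, and version are illustrative, and the module names are assumed to match the project directories) looks like:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.refarch</groupId>
  <artifactId>aggregator</artifactId>
  <version>1.0</version>
  <packaging>pom</packaging>
  <modules>
    <!-- Only the Java services are built by Maven; the Ruby and
         Node.js services have no part in this build -->
    <module>Sales</module>
    <module>Presentation</module>
  </modules>
</project>
```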
4.3. Customizing the Application Server
The Product and Sales services each depend on a MySQL database to store and retrieve data. For the Sales service, the supported xPaaS middleware image bundles the MySQL JDBC driver, but the driver must be declared in the server configuration file, and a datasource must be defined for the application to access the database through a connection pool.
To make customizations to a server, provide an updated server configuration file. The replacement configuration file should be named standalone-openshift.xml and placed in a directory called configuration at the root of the project.
Some configuration can be performed by simply providing descriptive environment variables. For example, supplying the DB_SERVICE_PREFIX_MAPPING variable and value instructs the script to add MySQL and/or PostgreSQL datasources to the EAP instance. Refer to the documentation for Red Hat OpenShift images for details.
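For example, one plausible set of environment variables (the values here are illustrative; consult the image documentation for the exact variable contract) that directs the startup script to create a MySQL datasource might be:

```
DB_SERVICE_PREFIX_MAPPING=sales-mysql=SALES_MYSQL
SALES_MYSQL_USERNAME=sales
SALES_MYSQL_PASSWORD=password
SALES_MYSQL_DATABASE=sales
```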
In order to make the required changes to the correct baseline, obtain the latest server configuration file. The supported image is located at registry.access.redhat.com/jboss-eap-7/eap70-openshift. To view the original copy of the file, you can run this Docker container directly:
# docker run -it registry.access.redhat.com/jboss-eap-7/eap70-openshift \
    cat /opt/eap/standalone/configuration/standalone-openshift.xml
<?xml version="1.0" ?>
<server xmlns="urn:jboss:domain:4.0">
<extensions>
...
Declare the datasource with parameterized variables for the database credentials. For example, to configure the product datasource for the Product service:
<subsystem xmlns="urn:jboss:domain:datasources:1.2">
  <datasources>
    <datasource jndi-name="java:jboss/datasources/ProductDS" enabled="true"
                use-java-context="true" pool-name="ProductDS">
      <connection-url>
        jdbc:mysql://${env.DATABASE_SERVICE_HOST:product-db}:${env.DATABASE_SERVICE_PORT:3306}/${env.MYSQL_DATABASE:product}
      </connection-url>
      <driver>mysql</driver>
      <security>
        <user-name>${env.MYSQL_USER:product}</user-name>
        <password>${env.MYSQL_PASSWORD:password}</password>
      </security>
    </datasource>
The datasource simply refers to the database driver as mysql. Declare the driver class in the same section after the datasource:
…
    </datasource>
    <drivers>
      <driver name="mysql" module="com.mysql">
        <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
      </driver>
    </drivers>
With the above configuration, environment variables are substituted to specify connection details and the database host name is resolved to the name of the Red Hat OpenShift service hosting the database.
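The `${env.NAME:default}` expressions in the connection URL resolve against container environment variables, falling back to the literal after the colon when a variable is unset. The behavior can be illustrated with a small helper (our own sketch, not EAP code; the function name is ours):

```javascript
// Resolve ${env.NAME:default} placeholders against an environment map,
// mimicking (in simplified form) how the connection URL above is built.
function resolvePlaceholders(template, env) {
  return template.replace(/\$\{env\.([A-Z_]+):([^}]*)\}/g,
    (match, name, fallback) => env[name] !== undefined ? env[name] : fallback);
}

const url = resolvePlaceholders(
  'jdbc:mysql://${env.DATABASE_SERVICE_HOST:product-db}' +
  ':${env.DATABASE_SERVICE_PORT:3306}/${env.MYSQL_DATABASE:product}',
  { DATABASE_SERVICE_HOST: '10.1.2.3' });
// With only DATABASE_SERVICE_HOST set, the port and database fall back
// to their defaults: jdbc:mysql://10.1.2.3:3306/product
console.log(url);
```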
The default EAP welcome application is disabled in the Red Hat xPaaS EAP image. To deploy the Presentation application to the root context, set the warName to ROOT in the Maven POM file:
<build>
<finalName>${project.artifactId}</finalName>
<plugins>
<plugin>
<artifactId>maven-war-plugin</artifactId>
<version>${version.war.plugin}</version>
<configuration>
<warName>ROOT</warName>
<!-- Java EE 7 doesn’t require web.xml, Maven needs to catch up! -->
<failOnMissingWebXml>false</failOnMissingWebXml>
</configuration>
</plugin>
</plugins>
</build>
4.4. Billing Service in Ruby
The implementation of the Billing service in Ruby consists of the following files:

- config.ru is the entry point and contains the business logic for the Billing service
- transcation.rb defines a simple class providing the required fields for a purchase transaction
- result.rb provides the required fields to transport the response data in JSON format
- Gemfile contains project gem dependencies
- Gemfile.lock captures a complete snapshot of all the gems in Gemfile along with their associated dependencies
4.5. Billing Service details
The content of transcation.rb is as follows:
class Transcation
  attr_accessor :creditCardNumber, :expMonth, :expYear, :verificationCode,
                :billingAddress, :customerName, :orderNumber, :amount
end
In addition to fields, result.rb also provides a to_json method which returns a JSON string representing the object:
class Result
attr_accessor :status, :name, :orderNumber, :transactionDate, :transactionNumber
def to_json(*a)
{
'status' => @status,
'name' => @name,
'orderNumber' => @orderNumber,
'transactionDate' => @transactionDate,
'transactionNumber' => @transactionNumber
}.to_json(*a)
end
end
The dependencies declared in Gemfile include Puma, a Ruby web server built for concurrency, and Sinatra, a lightweight Ruby web application framework used here for REST web services:
source 'https://rubygems.org'

gem 'sinatra'
gem 'puma'
The package configuration declared in Gemfile.lock is as follows:
GEM
remote: https://rubygems.org/
specs:
puma (3.4.0)
rack (1.6.4)
rack-protection (1.5.3)
rack
sinatra (1.4.7)
rack (~> 1.5)
rack-protection (~> 1.4)
tilt (>= 1.3, < 3)
tilt (2.0.5)
PLATFORMS
ruby
DEPENDENCIES
puma
sinatra
The business logic for the service, within config.ru, has two sections. The first section sets up the required libraries and ensures the content type is JSON:
require 'bundler/setup'
require 'json'
require_relative 'lib/result'
require_relative 'lib/transcation'

Bundler.require(:default)

class Application < Sinatra::Base
  before do
    content_type 'application/json'
  end
The second section provides methods to handle the business request.
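The key business rule in these handlers is the expiration check in the process method: a transaction fails unless the card expires after the current month. Extracted as a standalone function, the rule looks like this (sketched in JavaScript purely for illustration; the service itself is Ruby, and the function name is ours):

```javascript
// Returns "FAILURE" when the card is expired, applying the same rule as
// the Ruby process handler: the expiry year/month must be strictly after
// the current month for the transaction to succeed.
function expiryStatus(expMonth, expYear, now) {
  const year = parseInt(expYear, 10);
  const month = parseInt(expMonth, 10);
  if (year < now.getFullYear() ||
      (year === now.getFullYear() && month <= now.getMonth() + 1)) {
    return 'FAILURE';
  }
  return 'SUCCESS';
}

console.log(expiryStatus('12', '2099', new Date())); // a far-future card passes
```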
The refund method returns a simple reply message simulating the refund process:
post '/billing/refund/:id' do
"refund for transcation number #{params[:id]}"
end
The process method validates the credit card expiration date, then returns a JSON response:
post '/billing/process' do
begin
post_data = JSON.parse request.body.read
if post_data.nil? or !post_data.has_key?('creditCardNumber') or !post_data.has_key?('verificationCode')
puts "ERROR, no credit card number or verification code!"
else
transcation = Transcation.new
transcation.creditCardNumber = post_data['creditCardNumber']
transcation.expMonth = post_data['expMonth']
transcation.expYear = post_data['expYear']
transcation.verificationCode = post_data['verificationCode']
transcation.billingAddress = post_data['billingAddress']
transcation.customerName = post_data['customerName']
transcation.orderNumber = post_data['orderNumber']
transcation.amount = post_data['amount']
puts "creditCardNumber #{transcation.creditCardNumber}"
puts "expMonth #{transcation.expMonth}"
puts "expYear #{transcation.expYear}"
puts "billingAddress #{transcation.billingAddress}"
puts "customerName #{transcation.customerName}"
puts "orderNumber #{transcation.orderNumber}"
puts "amount #{transcation.amount}"
result = Result.new
result.name = transcation.customerName
result.orderNumber = transcation.orderNumber
result.transactionDate = DateTime.now
result.transactionNumber = 9000000 + rand(1000000)
if transcation.expYear.to_i < Time.now.year.to_i or (transcation.expYear.to_i == Time.now.year.to_i and transcation.expMonth.to_i <= Time.now.month.to_i)
result.status = "FAILURE"
else
result.status = "SUCCESS"
end
result.to_json
end
end
end
4.6. Product Service in Node.js
The Product service, written in Node.js, consists of two files: product.js provides the implementation, while package.json contains metadata about the module, including the list of dependencies to install from npm when running npm install.
4.7. Product Service details
The package.json file documents the packages this project depends on and pins the versions in use:
{
"name": "productWs",
"version": "0.0.1",
"description": "Product service implemented in Node.js with mySQL",
"main": "product.js",
"dependencies": {
"express": "~4.14.0",
"mysql": "2.11.1",
"body-parser": "1.15.2"
},
"scripts": {
"start": "node product.js"
},
"repository": {
"type": "git",
"url": "https://github.com/RHsyseng/MSA-Polyglot-OCP.git"
},
"author": "Calvin Zhu",
"license": "BSD",
"bugs": {
"url": "https://github.com/RHsyseng/MSA-Polyglot-OCP/issues"
},
"homepage": "https://github.com/RHsyseng/MSA-Polyglot-OCP"
}
The business logic for the Product service is implemented as several business methods that respond to REST web service calls from the Presentation service.
The first part sets up the required libraries, including the HTTP body parser for JSON input:
var express = require('express');
var app = express();
var bodyParser = require('body-parser');
app.use(bodyParser.json());
The following section deals with the MySQL database connection pool setup. The database parameters are passed in through OpenShift environment variables:
const dbUser = process.env.MYSQL_USER;
const dbPassword = process.env.MYSQL_PASSWORD;
var dbHost = process.env.MYSQL_HOST;
var dbDatabase = process.env.MYSQL_DATABASE;
if (dbHost == null) dbHost = 'product-db';
if (dbDatabase == null) dbDatabase = 'product';
const mysql = require('mysql');
const pool = mysql.createPool({
connectionLimit : 5,
host : dbHost,
user : dbUser,
password : dbPassword,
database : dbDatabase
});
These match the variables set up in the image creation script earlier:
MYSQL_USER=product,MYSQL_PASSWORD=password
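The explicit null checks above give dbHost and dbDatabase their defaults; the same fallback can be factored into a small helper (a sketch of our own; envOr is not part of the reference application):

```javascript
// envOr reads a variable from an environment map with a fallback,
// mirroring the explicit null checks applied to dbHost and dbDatabase.
function envOr(env, name, fallback) {
  return env[name] != null ? env[name] : fallback;
}

// The defaults match the OpenShift service name and schema used above.
const dbHost = envOr(process.env, 'MYSQL_HOST', 'product-db');
const dbDatabase = envOr(process.env, 'MYSQL_DATABASE', 'product');
```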
The first method responds to HTTP GET requests and searches the Product table, based on either the featured column or a keyword, using a subquery against the PRODUCT_KEYWORD table.
//get either featured products or products with keyword
app.get('/product/products', function(req, httpRes) {
if(req.query.featured == null && req.query.keyword == null) {
httpRes.statusCode = 400;
return httpRes.send('All products cannot be returned, need to provide a search condition');
}
pool.getConnection(function(err, conn) {
if (req.query.featured != null) {
conn.query('select sku, availability, description, featured=1 as featured, height, image, length, name, price, weight, width from Product where featured=true', function(err, records) {
if(err) throw err;
httpRes.json(records);
});
} else if (req.query.keyword != null){
conn.query('select sku, availability, description, featured=1 as featured, height, image, length, name, price, weight, width from Product where SKU in (select SKU from PRODUCT_KEYWORD where Keyword = ?)', req.query.keyword, function(err, records) {
if(err) throw err;
httpRes.json(records);
});
}
conn.release();
});
});
Another GET method is provided to search the Product table based on the SKU number:
//get based on sku #
app.get('/product/products/:sku', function(req, httpRes) {
pool.getConnection(function(err, conn) {
conn.query('select sku, availability, description, featured=1 as featured, height, image, length, name, price, weight, width from Product where SKU = ? ', req.params.sku, function(err, records) {
if(err) throw err;
httpRes.json(records[0]);
});
conn.release();
});
});
Individual methods are provided to add new keywords and new products, respectively, in response to HTTP POST requests. The second method inserts rows into two tables and therefore requires a transaction:
//add keyword through post
app.post('/product/keywords', function(req, httpRes) {
const record= { KEYWORD: req.body.keyword};
pool.getConnection(function(err, conn) {
conn.query('INSERT INTO Keyword SET ?', record, function(err, records) {
if(err) throw err;
const result = {
keyword : req.body.keyword,
products : null}
httpRes.json(result);
});
conn.release();
});
});
//add product through post
app.post('/product/products', function(req, httpRes) {
//To use "let" need strict mode in node version 4.*
"use strict";
pool.getConnection(function(err, dbconn) {
// Begin transaction
dbconn.beginTransaction(function(err) {
if (err) { throw err; }
let featured = 0;
if (req.body.featured === 'true')
featured = 1;
let record= { DESCRIPTION: req.body.description, HEIGHT: req.body.height, LENGTH: req.body.length, NAME: req.body.name, WEIGHT: req.body.weight, WIDTH: req.body.width, FEATURED: featured, AVAILABILITY: req.body.availability, IMAGE: req.body.image, PRICE: req.body.price};
dbconn.query('INSERT INTO Product SET ?', record, function(err,dbRes){
if (err) {
dbconn.rollback(function() {
throw err;
});
}
const tmpSku = dbRes.insertId;
record = {KEYWORD: req.body.image, SKU: tmpSku};
dbconn.query('INSERT INTO PRODUCT_KEYWORD SET ?', record, function(err,dbRes){
if (err) {
dbconn.rollback(function() {
throw err;
});
}
console.log('record insert into PRODUCT_KEYWORD table');
dbconn.commit(function(err) {
if (err) {
dbconn.rollback(function() {
throw err;
});
}
console.log('inserted into both Product and PRODUCT_KEYWORD tables in one transcation ');
const result = {
sku : tmpSku,
name : req.body.name,
description : req.body.description,
length : req.body.length,
width : req.body.width,
height : req.body.height,
weight : req.body.weight,
featured : req.body.featured,
availability : req.body.availability,
price : req.body.price,
image : req.body.image
};
httpRes.json(result);
}); //end commit
}); //end 2nd query
}); //end 1st query
});
// End transaction
dbconn.release();
});//end pool.getConnection
});
To reduce inventory as part of the checkout process, another method is implemented:
//reduce product through post, this is for the checkout process
app.post('/product/reduce', function(req, httpRes) {
"use strict";
let sendReply = false;
pool.getConnection(function(err, conn) {
for (let i = 0; i < req.body.length; i++) {
if(!req.body[i].hasOwnProperty('sku') || !req.body[i].hasOwnProperty('quantity')) {
httpRes.statusCode = 400;
return httpRes.send('Error 400: need to have valid sku and quantity.');
}
const tmpSku = req.body[i]['sku'];
const tmpQuantity = req.body[i]['quantity'];
const sqlStr = 'update Product set availability = availability - ' + tmpQuantity + ' where sku = ' + tmpSku + ' and availability - ' + tmpQuantity + ' > 0';
console.log('reduce tmpSku:' + tmpSku);
console.log('reduce tmpQuantity:' + tmpQuantity);
console.log('reduce sqlStr: ' + sqlStr);
conn.query(sqlStr, function(err, result) {
if(err) throw err;
if (result.affectedRows > 0) {
console.log('reduced from Product ' + result.affectedRows + ' rows');
} else {
const result = [
{ message : 'Insufficient availability for ' + tmpSku,
details : null}
];
if (sendReply == false) {
httpRes.json(result);
sendReply = true;
}
}
});
}
conn.release();
});
return httpRes.send('');
});
To delete a product based on the provided SKU, a method is implemented to respond to HTTP DELETE requests. This method also modifies two tables and therefore requires a transaction:
//delete based on sku #
app.delete('/product/products/:sku', function(req, httpRes) {
pool.getConnection(function(err, dbconn) {
// Begin transaction
dbconn.beginTransaction(function(err) {
if (err) { throw err; }
dbconn.query('DELETE FROM PRODUCT_KEYWORD where SKU = ?', req.params.sku, function(err, result){
if (err) {
dbconn.rollback(function() {
throw err;
});
}
console.log('deleted from PRODUCT_KEYWORD ' + result.affectedRows + ' rows');
dbconn.query('DELETE FROM Product where SKU = ?', req.params.sku, function(err, result){
if (err) {
dbconn.rollback(function() {
throw err;
});
}
console.log('deleted from Product ' + result.affectedRows + ' rows');
dbconn.commit(function(err) {
if (err) {
dbconn.rollback(function() {
throw err;
});
}
console.log('Transaction Complete.');
httpRes.json('deleted from both Product and PRODUCT_KEYWORD tables in one transcation');
}); //end commit
}); //end 2nd query
}); //end 1st query
});
// End transaction
dbconn.release();
});//end pool.getConnection
});
Finally, an update method is provided for both full updates through HTTP PUT and partial updates through HTTP PATCH, based on the SKU number. Both methods delegate to a single implementation:
//put (update) based on sku #
app.put('/product/products/:sku', function(req, res) {
updateProduct(req.params.sku, req, res);
});
//patch (update) based on sku #
app.patch('/product/products/:sku', function(req, res) {
updateProduct(req.params.sku, req, res);
});
//real update function works for both put and patch request
function updateProduct(skuIn, req, httpRes) {
"use strict";
let sqlStr = '';
if (req.body.DESCRIPTION != null){
sqlStr = 'DESCRIPTION = \'' + req.body.DESCRIPTION + "\'";
}
if (req.body.HEIGHT != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'HEIGHT = \'' + req.body.HEIGHT + "\'";
}
if (req.body.LENGTH != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'LENGTH = \'' + req.body.LENGTH + "\'";
}
if (req.body.NAME != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'NAME = \'' + req.body.NAME + "\'";
}
if (req.body.WEIGHT != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'WEIGHT = \'' + req.body.WEIGHT + "\'";
}
if (req.body.WIDTH != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'WIDTH = \'' + req.body.WIDTH + "\'";
}
if (req.body.FEATURED != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'FEATURED = \'' + req.body.FEATURED + "\'";
}
if (req.body.AVAILABILITY != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'AVAILABILITY = \'' + req.body.AVAILABILITY + "\'";
}
if (req.body.IMAGE != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'IMAGE = \'' + req.body.IMAGE + "\'";
}
if (req.body.PRICE != null){
if (sqlStr !='') sqlStr = sqlStr + " , ";
sqlStr = sqlStr + 'PRICE = \'' + req.body.PRICE + "\'";
}
sqlStr = 'UPDATE Product SET ' + sqlStr + ' WHERE SKU = ?';
console.log('!!!!!SQL ready to be executed: ' + sqlStr);
pool.getConnection(function(err, conn) {
conn.query(sqlStr, skuIn, function(err, result) {
if(err) throw err;
console.log('update Product table' + result.affectedRows + ' rows');
});
conn.release();
});
httpRes.json('Update Product table');
}
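The field-by-field string building above interpolates request values directly into the SQL statement. As an illustration (our own sketch, not code from the reference application), the same SET clause can be assembled with ? placeholders so the mysql driver escapes the values:

```javascript
// Build "UPDATE Product SET col = ?, ... WHERE SKU = ?" from whichever
// columns are present in the request body; values are returned separately
// so the driver substitutes them safely.
const COLUMNS = ['DESCRIPTION', 'HEIGHT', 'LENGTH', 'NAME', 'WEIGHT',
                 'WIDTH', 'FEATURED', 'AVAILABILITY', 'IMAGE', 'PRICE'];

function buildUpdate(sku, body) {
  const present = COLUMNS.filter(col => body[col] != null);
  const setClause = present.map(col => col + ' = ?').join(', ');
  const params = present.map(col => body[col]).concat([sku]);
  return { sql: 'UPDATE Product SET ' + setClause + ' WHERE SKU = ?', params };
}

const { sql, params } = buildUpdate(42, { NAME: 'Widget', PRICE: 9.99 });
// sql:    UPDATE Product SET NAME = ?, PRICE = ? WHERE SKU = ?
// params: ['Widget', 9.99, 42]
```

The result is then passed to the driver as conn.query(sql, params, callback), which is the parameterized form already used by the other queries in this service.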
