Chapter 3. Build and Deploy
3.1. Source Deployment
3.1.1. Overview
Source-to-Image (S2I) is a framework that makes it easy to write images that take application source code as input, and produce a new image that runs the assembled application as output.
The main reasons one might be interested in using source builds are:
- Speed – with S2I, the assemble process can perform a large number of complex operations without creating a new layer at each step, resulting in a fast process.
- Patchability – S2I allows you to rebuild the application consistently if an underlying image needs a patch due to a security issue.
- User efficiency – S2I prevents developers from performing arbitrary yum install type operations during their application build, operations that would slow down development iteration.
- Ecosystem – S2I encourages a shared ecosystem of images where you can leverage best practices for your applications.
3.1.2. Usage
To build a Java EE application from source, simply set up a new application and point it to the git repository where the source code is checked in. The OpenShift S2I process detects the presence of a pom.xml file in the root of the project and recognizes it as a Java EE application. At this point, the S2I process looks for a default image stream with a jee tag to handle the build and deployment. In the absence of such a tagged image stream, or if more than one has a matching tag, the intended image stream would have to be explicitly provided as part of the new-app command.
For example, to build and deploy an application whose Maven-based source code is checked into GitHub, the following command line instruction can be used.
$ oc new-app jboss-eap70-openshift~https://github.com/PROJECT.git --name=APPLICATION_NAME

This uses the oc client with the jboss-eap70-openshift image stream to build and deploy an application on JBoss EAP 7. It is also possible to set up applications for source deployment, and build and deploy them with Jenkins, using available plugins and/or scripting. Refer to Chapter 4, Jenkins for further details.
As an example, source deployment is used to build and deploy the sample microservice application in the following section.
3.1.3. Example
3.1.3.1. MySQL Images
The reference application includes two database services built on the supported MySQL image; this reference architecture uses version 5.6 of that image.
To deploy the database services, use the new-app command and provide a number of required and optional environment variables along with the desired service name.
To deploy the product database service:
$ oc new-app -e MYSQL_USER=product -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=product -e MYSQL_ROOT_PASSWORD=passwd mysql --name=product-db

To deploy the sales database service:
$ oc new-app -e MYSQL_USER=sales -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=sales -e MYSQL_ROOT_PASSWORD=passwd mysql --name=sales-db

Database images created with this simple command are ephemeral and result in data loss in the case of a pod restart. Run the image with mounted volumes to enable persistent storage for the database. The data directory where MySQL stores database files is located at /var/lib/mysql/data.
Refer to the official documentation to configure persistent storage.
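As a minimal sketch, assuming a persistent volume is available in the cluster, a claim can be attached over the MySQL data directory with oc set volume (the claim name and size here are hypothetical):

$ oc set volume dc/product-db --add --name=product-db-data \
    --type=pvc --claim-name=product-db-claim --claim-size=1Gi \
    --mount-path=/var/lib/mysql/data

This triggers a new deployment with the claim mounted; data previously written to ephemeral storage is not migrated.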
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
These commands create two OpenShift services, each running MySQL in its own container. In each case, a MySQL user is created with the value specified by the MYSQL_USER attribute and the associated password. The MYSQL_DATABASE attribute results in a database being created and set as the default user database.
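Once a pod is running, the created account can be spot-checked by opening a remote shell into it and connecting with the configured credentials; a sketch, assuming the pod name from the log example below (yours will differ):

$ oc rsh product-db-1-3drkp
sh-4.2$ mysql -u product -ppassword -h 127.0.0.1 -e 'SHOW DATABASES;'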
Make sure the database services are successfully deployed before deploying other services that may depend on them. The service log clearly shows if the database has been successfully deployed. Use tab to complete the pod name, for example:
$ oc logs product-db-1-3drkp
---> 01:57:48 Processing MySQL configuration files ...
---> 01:57:48 Initializing database ...
---> 01:57:48 Running mysql_install_db ...
...omitted...
2017-03-17 01:58:01 1 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld: ready for connections.
Version: '5.6.34'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

3.1.3.2. JBoss EAP 7 xPaaS Images
The microservice project used in this reference architecture can be cloned from its public repository:
$ git clone https://github.com/RHsyseng/MSA-EAP7-OSE.git
The Billing, Product, Sales, and Presentation services rely on OpenShift S2I for Java EE applications and use the Red Hat xPaaS EAP Image. Verify the presence of this image stream in the openshift project as the root user. First switch from the default project to the openshift project:
# oc project openshift
Now using project "openshift" on server "https://ocp-master1.hostname.example.com:8443".

Query the configured image streams for the project:
# oc get imagestreams
NAME                    DOCKER REPO                                               TAGS                         UPDATED
...omitted...
jboss-eap70-openshift   registry.access.redhat.com/jboss-eap-7/eap70-openshift   latest,1.4,1.3 + 2 more...   3 weeks ago
...omitted...

3.1.3.3. Building the Services
The microservice application for this reference architecture is made available in a public git repository at https://github.com/RHsyseng/MSA-EAP7-OSE.git. This includes four distinct services, provided as subdirectories of this repository: Billing, Product, Sales, and Presentation. They are all implemented in Java and tested on the JBoss EAP 7 xPaaS Images.
Start by building and deploying the Billing service, which has no dependencies on either a database or another service. Switch back to the regular user with the associated project and run:
$ oc new-app jboss-eap70-openshift~https://github.com/RHsyseng/MSA-EAP7-OSE.git --context-dir=Billing --name=billing-service

Once again, oc status can be used to monitor the progress of the operation. To monitor the build and deployment process more closely, find the running build and follow the build log:
$ oc get builds
NAME                TYPE      FROM      STATUS    STARTED         DURATION
billing-service-1   Source    Git       Running   1 seconds ago   1s

$ oc logs -f bc/billing-service
Once this service has successfully deployed, use similar commands to deploy the Product and Sales services, bearing in mind that both have a database dependency and rely on the previously deployed MySQL services. Change any necessary default database parameters by passing them as environment variables.
To deploy the Product service:
$ oc new-app -e MYSQL_USER=product -e MYSQL_PASSWORD=password jboss-eap70-openshift~https://github.com/RHsyseng/MSA-EAP7-OSE.git --context-dir=Product --name=product-service

To deploy the Sales service:
$ oc new-app -e MYSQL_USER=sales -e MYSQL_PASSWORD=password jboss-eap70-openshift~https://github.com/RHsyseng/MSA-EAP7-OSE.git --context-dir=Sales --name=sales-service

Finally, deploy the Presentation service, which exposes a web tier and an aggregator that uses the three previously deployed services to fulfill the business request:
$ oc new-app jboss-eap70-openshift~https://github.com/RHsyseng/MSA-EAP7-OSE.git --context-dir=Presentation --name=presentation

Note that the Maven build file for this project specifies a war file name of ROOT, which results in this application being deployed to the root context of the server.
Once all four services have successfully deployed, the presentation service can be accessed through a browser to verify application functionality. First create a route to expose this service to clients outside the OpenShift environment:
$ oc expose service presentation --hostname=msa.example.com
NAME           HOST/PORT         PATH   SERVICE        LABELS             TLS TERMINATION
presentation   msa.example.com          presentation   app=presentation

The route tells the deployed router to load balance any requests with the given host name among the replicas of the presentation service. This host name should map to the IP address of the hosts where the router has been deployed, not necessarily where the service is hosted. For clients outside of this network and for testing purposes, simply modify your /etc/hosts file to map this host name to the IP address of the master host.
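For example, a client-side /etc/hosts entry might look like the following, where the IP address is hypothetical and should be replaced with the address of the host running the router:

# map the route host name to the OpenShift router (hypothetical address)
10.19.137.10    msa.example.com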
To verify that every step was performed correctly, the following section walks through the working application so you can confirm its behavior.
3.1.3.4. Running the Application
3.1.3.4.1. Browser Access
To use the application, simply point your browser to the address exposed by the route. This address should ultimately resolve to the IP address of the OpenShift host where the router is deployed.
Figure 3.1. Application Homepage before initialization

At this stage, the database tables are still empty and content needs to be created for the application to function properly.
3.1.3.4.2. Sample Data
The application includes a demo page that, when triggered, populates the database with sample data. To use this page and populate the sample data, point your browser to http://msa.example.com/demo.jsp:
Figure 3.2. Trigger sample data population

3.1.3.4.3. Featured Product Catalog
After populating the product database, the demo page redirects your browser to the route address, but this time you will see the featured products listed:
Figure 3.3. Application Homepage after initialization

3.1.3.4.4. User Registration
Anonymous users are constrained to browsing inventory, viewing featured products and searching the catalog. To use other features that result in calls to the Sales and Billing services, a valid customer must be logged in.
To register a customer and log in, click on the Register button in the top-right corner of the screen and fill out the registration form:
Figure 3.4. Customer Registration Form

After registration, the purchase button allows customers to add items to their shopping cart, visit the cart to check out, and review their order history.
For details on application functionality and how each feature is implemented by the provided services and exposed through their REST APIs, refer to the previously published reference architecture on Building microservices with JBoss EAP 7.
3.2. Binary Source Deployment
3.2.1. Overview
In environments where a pre-existing build management tool such as Maven or Ant is used, there may be a requirement or preference, as the target environment transitions from a standalone application server to OpenShift, to continue using the existing build setup and simply deploy the generated war or ear file.
The obvious advantage of using binary source deployment is the lower effort and impact of migration from existing environments to OpenShift, as there is no need to change the existing build strategy, source code management, release workflow, or other processes. Only the deployment target is switched to OpenShift.
3.2.2. Usage
To deploy a Java EE application that has already been built, copy the archive file to a location where the oc client binary is available. The deployment is a two-step process, where the new application is first defined and created without providing a source of any type. To ensure that no source is provided to the oc client, create an empty directory and use it to set up the new application:
$ mkdir /tmp/nocontent
$ oc new-app jboss-eap70-openshift~/tmp/nocontent --name=APPLICATION_NAME
This results in a build by the same name being created, which can now be started by providing the binary source, for example:
$ oc start-build APPLICATION_NAME --from-file=APPLICATION.war

It is in fact rare for an application and all its dependencies to be self-contained in a war file. There are often dependencies and custom configuration that require changes to the server. This often means providing a customized standalone-openshift.xml for each service.
Place the standalone-openshift.xml in a directory called configuration, which itself is created at the same level as the war file. Assuming you create a deploy directory with the war file and the configuration directory as the only two items inside, the following command is used to deploy the content of the deploy directory:
$ oc start-build APPLICATION_NAME --from-dir=deploy
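The expected layout of the deploy directory can be confirmed before starting the build; a sketch, using the generic archive name from above:

$ find deploy
deploy
deploy/APPLICATION.war
deploy/configuration
deploy/configuration/standalone-openshift.xml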
It is also possible to set up applications for binary deployment, and build and deploy them with Jenkins, using a combination of available plugins and scripting. Refer to Chapter 4, Jenkins for further details.
As an example and to provide further context and details, the following section leverages the binary deployment of Java EE applications to build and deploy the sample microservice application.
3.2.3. Example
3.2.3.1. Deploying Services
This section deploys the same four services as before, but assumes that these services are first built and packaged into war files in the local environment, then uploaded to the OpenShift environment.
The deployment of database services is not covered here again, since there is no change in the process of database setup between the different build strategies. Please refer to the previous example for details.
When using oc new-app, the source code argument provided (the part after "~" in the [image]~[source code] combination) is assumed to be the location of a local or remote git repository where the source code resides. For binary deployment, the directory provided to this command should not contain a git repository. To ensure this, create an empty directory to use for this purpose, for example:

$ mkdir /tmp/nocontent
Start by building and deploying the Billing service, which has no dependencies on either a database or another service. Switch back to the regular user with the associated project and run:
$ oc new-app jboss-eap70-openshift:latest~/tmp/nocontent --name=billing-service
$ oc start-build billing-service --from-file=billing.war
Using the oc client, run oc start-build --help to see the explanation of the from-file option:
from-file='': A file to use as the binary input for the build; example a pom.xml or Dockerfile. Will be the only file in the build source.
In this example, the war file is specified as the input for the from-file option, so the war file will be sent as a binary stream to the builder. The builder then stores the data in a file with the same name at the top of the build context.
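The upload and the resulting build can also be watched from the same terminal; a minimal sketch, assuming the war was built locally under target/:

$ oc start-build billing-service --from-file=target/billing.war --follow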
Next, use similar start-build commands to deploy the Product and Sales services, bearing in mind that both need the previously deployed MySQL services, and that their configuration resides in configuration/standalone-openshift.xml, outside the war file.
Below is the datasource configuration from standalone-openshift.xml:
<datasources>
  <!-- ##DATASOURCES## -->
  <datasource jndi-name="java:jboss/datasources/ProductDS" enabled="true" use-java-context="true" pool-name="ProductDS">
    <connection-url>jdbc:mysql://${env.DATABASE_SERVICE_HOST:product-db}:${env.DATABASE_SERVICE_PORT:3306}/${env.MYSQL_DATABASE:product}</connection-url>
    <driver>mysql</driver>
    <security>
      <user-name>${env.MYSQL_USER:product}</user-name>
      <password>${env.MYSQL_PASSWORD:password}</password>
    </security>
  </datasource>
</datasources>

So merely using the "from-file" option to deploy the web archive is not enough, and the configuration file also needs to be deployed for both the Product and Sales services. To achieve this, the start-build command includes another option called "from-dir".
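Before turning to from-dir, note that each ${env.NAME:default} expression in this file resolves to the corresponding container environment variable, falling back to the default after the colon. As a hedged sketch, these values can therefore also be adjusted on a running deployment without rebuilding, assuming the deployment configuration is named product-service:

$ oc set env dc/product-service MYSQL_USER=product MYSQL_PASSWORD=password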
Using the oc client, run oc start-build --help to see the help for the from-dir option:
from-dir='': A directory to archive and use as the binary input for a build.
To include both the web archive and the standalone-openshift.xml server configuration file, prepare the deploy directory in the following format.

For the Product service, the deploy directory contains product.war alongside a configuration subdirectory holding standalone-openshift.xml:

deploy/
├── product.war
└── configuration/
    └── standalone-openshift.xml

For the Sales service, the layout is identical, with sales.war in place of product.war.
To deploy the Product service:

$ oc new-app jboss-eap70-openshift:latest~/tmp/nocontent --name=product-service
$ oc start-build product-service --from-dir=deploy
To deploy the Sales service:
$ oc new-app jboss-eap70-openshift:latest~/tmp/nocontent --name=sales-service
$ oc start-build sales-service --from-dir=deploy
Finally, deploy the Presentation service, which exposes a web tier and an aggregator that uses the three previously deployed services to fulfill the business request:
$ oc new-app jboss-eap70-openshift:latest~/tmp/nocontent --name=presentation
$ oc start-build presentation --from-file=ROOT.war
Once all four services have successfully deployed, the presentation service can be accessed through a browser to verify application functionality. First create a route to expose this service to clients outside the OpenShift environment:
$ oc expose service presentation --hostname=msa.example.com
NAME           HOST/PORT         PATH   SERVICE        LABELS             TLS TERMINATION
presentation   msa.example.com          presentation   app=presentation

The use of the "from-dir" option is likely to be much more common in production scenarios than "from-file", since most Red Hat JBoss Enterprise Application Platform deployments involve some configuration changes to the server.
To validate the steps taken in this section and ensure that the application is behaving correctly, please refer to the section on running the application in the previous example.
3.3. Container Image Deployment
3.3.1. Overview
An alternative deployment scenario is one where a container image is created outside of OpenShift and then deployed to create a service. This may be entirely a custom image, but is often the result of layers on top of a standard and supported base image. It is common to use the Dockerfile syntax to add layers on top of existing images.
For more details on creating Docker images, refer to the Red Hat documentation on Creating Docker images.
3.3.2. Usage
Assuming the existence of a proper image in a docker registry that is accessible to OpenShift, deploying the image and creating an OpenShift service is very easy and follows a similar syntax to that used for S2I and other methods of creating services:
$ oc new-app --docker-image=172.30.12.190:5000/MYNAMESPACE/MYIMAGE --name=MYSERVICENAME

OpenShift provides an internal registry, called the Integrated OpenShift Container Platform Registry, which is ideal for storing images and for creating and deploying services from them.
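Images destined for this registry are named registry-host:port/namespace/image-name. A sketch of tagging and pushing a locally built image, reusing the hypothetical names from the command above:

$ docker tag MYIMAGE 172.30.12.190:5000/MYNAMESPACE/MYIMAGE
$ docker push 172.30.12.190:5000/MYNAMESPACE/MYIMAGE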
To directly access this registry, an OpenShift user must be set up with the required roles, and use the login token to directly authenticate against the docker registry. Follow the steps in the Red Hat documentation to grant this user the required privileges.
3.3.3. Example
Use a system admin account, such as the root user, to grant the OpenShift user the required roles:
# oadm policy add-role-to-user system:registry ocuser
# oadm policy add-role-to-user system:image-builder ocuser
Also use this system admin account to determine the connection address of the internal docker registry:
# oc get services -n default
NAME              CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
docker-registry   172.30.182.75   <none>        5000/TCP   81d
...
The ocuser credentials can now be used to push new images to the registry. Make sure this Linux user belongs to the docker group so it can invoke docker commands.
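A sketch of adding the user to that group, run with root privileges (a new login is required for the membership to take effect):

# usermod -aG docker ocuser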
Obtain the user’s authentication token and use it to log in to the docker registry:
$ oc whoami -t
UVicwxVTEWUGJyj84vT0cGHeM-K2Cq9Mhiq5mQXSl1o
$ docker login -u ocuser -p UVicwxVTEWUGJyj84vT0cGHeM-K2Cq9Mhiq5mQXSl1o 172.30.182.75:5000
Login Succeeded
Create a Dockerfile using a simple text editor and use it to create a layer on top of the standard JBoss EAP 7 image, to add the built archive and the server configuration files.
# Product Service Docker image
# Version 1

# Pull base image
FROM registry.access.redhat.com/jboss-eap-7/eap70-openshift

# Maintainer
MAINTAINER Calvin Zhu

# deploy app
ADD product.war $JBOSS_HOME/standalone/deployments/
ADD standalone-openshift.xml $JBOSS_HOME/standalone/configuration

USER root
# In the JBoss EAP container, JBoss EAP runs as the jboss user,
# but copied files are brought in as root
RUN chown jboss:jboss $JBOSS_HOME/standalone/deployments/product.war
RUN chown jboss:jboss $JBOSS_HOME/standalone/configuration/standalone-openshift.xml
USER jboss
Creating layered images on top of the standard EAP 7 image is done here only to demonstrate the use of docker images for OpenShift deployment. It would otherwise be simpler and preferable to use source or binary deployment with the EAP image.
In order to deploy the product service, place the Dockerfile in a directory along with product.war and standalone-openshift.xml, which includes the database configuration:
$ ls -l
total 52
-rw-rw-r--. 1 czhu czhu 469 Apr 12 22:42 Dockerfile
-rw-rw-r--. 1 czhu czhu 19391 Feb 14 19:07 product.war
-rw-rw-r--. 1 czhu czhu 25757 Apr 12 22:33 standalone-openshift.xml

The next step is to build and tag the new image. This tag is subsequently used to push the image to the internal registry.
$ docker build -t 172.30.182.75:5000/deploy-project/msa-product .

Once the new image is successfully built, push it to the local docker registry:
$ docker push 172.30.182.75:5000/deploy-project/msa-product

The final step is deploying the service on OpenShift by having it pull this new docker image from the registry:
$ oc new-app -e MYSQL_USER=product -e MYSQL_PASSWORD=password --docker-image=172.30.182.75:5000/deploy-project/msa-product --name=product-service

The process for building the remaining three images is almost identical. The content of the Dockerfile for each follows.
Dockerfile for Billing service:
# Billing Service Docker image
# Version 1

# Pull base image
FROM registry.access.redhat.com/jboss-eap-7/eap70-openshift

# Maintainer
MAINTAINER Calvin Zhu

# deploy app
ADD billing.war $JBOSS_HOME/standalone/deployments/

USER root
RUN chown jboss:jboss $JBOSS_HOME/standalone/deployments/billing.war
USER jboss
Dockerfile for Sales service:
# Sales Service Docker image
# Version 1

# Pull base image
FROM registry.access.redhat.com/jboss-eap-7/eap70-openshift

# Maintainer
MAINTAINER Calvin Zhu

# deploy app
ADD sales.war $JBOSS_HOME/standalone/deployments/
ADD standalone-openshift.xml $JBOSS_HOME/standalone/configuration

USER root
RUN chown jboss:jboss $JBOSS_HOME/standalone/deployments/sales.war
RUN chown jboss:jboss $JBOSS_HOME/standalone/configuration/standalone-openshift.xml
USER jboss
Dockerfile for Presentation service:

# Presentation Service Docker image
# Version 1

# Pull base image
FROM registry.access.redhat.com/jboss-eap-7/eap70-openshift

# Maintainer
MAINTAINER Calvin Zhu

# deploy app
ADD ROOT.war $JBOSS_HOME/standalone/deployments/

USER root
RUN chown jboss:jboss $JBOSS_HOME/standalone/deployments/ROOT.war
USER jboss
Docker scripts for building the new docker images:
$ docker build -t 172.30.182.75:5000/deploy-project/msa-billing .
$ docker build -t 172.30.182.75:5000/deploy-project/msa-sales .
$ docker build -t 172.30.182.75:5000/deploy-project/msa-presentation .

Docker scripts for pushing the new docker images to the local registry:
$ docker push 172.30.182.75:5000/deploy-project/msa-billing
$ docker push 172.30.182.75:5000/deploy-project/msa-presentation
$ docker push 172.30.182.75:5000/deploy-project/msa-sales

CLI scripts for deploying the new docker images to OpenShift and exposing the service:
$ oc new-app --docker-image=172.30.182.75:5000/deploy-project/msa-billing --name=billing-service
$ oc new-app --docker-image=172.30.182.75:5000/deploy-project/msa-presentation --name=presentation
$ oc new-app -e MYSQL_USER=sales -e MYSQL_PASSWORD=password --docker-image=172.30.182.75:5000/deploy-project/msa-sales --name=sales-service
$ oc expose service presentation --hostname=msa.example.com

3.3.4. Maven Plugin
The fabric8-maven-plugin is a Maven plugin that accelerates development by allowing easy and quick deployment of Java applications to OpenShift. This plugin reduces the required steps by building the application, creating a docker image and deploying it to OpenShift in a single command, which is especially useful in the development stage, where multiple retries often accompany each step.
The following example demonstrates using fabric8-maven-plugin alongside the Dockerfile assets used in the previous section, to build and deploy the same four services to OpenShift.
Each project’s pom.xml file needs to be updated with the following plugin elements. The newly added configuration for the Product service looks as follows:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.2.28</version>
  <configuration>
    <images>
      <image>
        <name>172.30.182.75:5000/deploy-project/msa-product</name>
        <alias>product-svc</alias>
        <build>
          <dockerFileDir>/home/czhu/dockerBuild/product/</dockerFileDir>
        </build>
      </image>
    </images>
    <enricher>
      <config>
        <fmp-controller>
          <name>product-service</name>
        </fmp-controller>
      </config>
    </enricher>
    <env>
      <MYSQL_USER>product</MYSQL_USER>
      <MYSQL_PASSWORD>password</MYSQL_PASSWORD>
    </env>
  </configuration>
  <executions>
    <execution>
      <id>fmp</id>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
        <goal>push</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>3.0.2</version>
  <executions>
    <execution>
      <id>copy-resources</id>
      <phase>package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>/home/czhu/dockerBuild/product/</outputDirectory>
        <resources>
          <resource>
            <directory>/home/czhu/MSA-EAP7-OSE/Product/target</directory>
            <includes>
              <include>product.war</include>
            </includes>
          </resource>
          <resource>
            <directory>/home/czhu/MSA-EAP7-OSE/Product/configuration</directory>
            <includes>
              <include>standalone-openshift.xml</include>
            </includes>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

The image element contains the docker repository name and the Dockerfile information:
<images>
  <image>
    <name>172.30.182.75:5000/deploy-project/msa-product</name>
    <alias>product-svc</alias>
    <build>
      <dockerFileDir>/home/czhu/dockerBuild/product/</dockerFileDir>
    </build>
  </image>
</images>
The fmp-controller enricher is used to name the service. In its absence, the artifactId configured in the pom.xml will be used.
<enricher>
  <config>
    <fmp-controller>
      <name>product-service</name>
    </fmp-controller>
  </config>
</enricher>

The env element is used to configure environment variables and replaces passing environment variables through the -e flag:
<env>
  <MYSQL_USER>product</MYSQL_USER>
  <MYSQL_PASSWORD>password</MYSQL_PASSWORD>
</env>
By using maven-resources-plugin, our build file copies both product.war and standalone-openshift.xml to the Dockerfile folder and prepares them for the image build:
<artifactId>maven-resources-plugin</artifactId>
<version>3.0.2</version>
<executions>
  <execution>
    <id>copy-resources</id>
    <phase>package</phase>
    <goals>
      <goal>copy-resources</goal>
    </goals>
    <configuration>
      <outputDirectory>/home/czhu/dockerBuild/product/</outputDirectory>
      <resources>
        <resource>
          <directory>/home/czhu/MSA-EAP7-OSE/Product/target</directory>
          <includes>
            <include>product.war</include>
          </includes>
        </resource>
        <resource>
          <directory>/home/czhu/MSA-EAP7-OSE/Product/configuration</directory>
          <includes>
            <include>standalone-openshift.xml</include>
          </includes>
        </resource>
      </resources>
    </configuration>
  </execution>
</executions>

Deploying through the plugin also requires the use of Resource Fragments to configure OpenShift services. Create the src/main/fabric8 folder for each project, and add a svc.yml file inside it:
[czhu@mw-ocp-master fabric8{master}]$ pwd
/home/czhu/MSA-EAP7-OSE/Product/src/main/fabric8
[czhu@mw-ocp-master fabric8{master}]$ ls -l
total 4
-rw-rw-r--. 1 czhu czhu 157 Apr 19 18:55 svc.yml
The content of svc.yml for Product service is:
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP

With these two changes, you can use the following command to trigger the maven build and deployment:
$ mvn clean fabric8:deploy -Dfabric8.mode=kubernetes
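After the command completes, the generated resources can be inspected to confirm the deployment; a sketch, where the resource names assume the fmp-controller configuration above:

$ oc get dc,svc,pods
$ oc logs dc/product-service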
To clean up all the OpenShift resources that were just created, run:
$ mvn fabric8:undeploy
Additions to pom.xml for other services should be as follows.
For the Billing service:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.2.28</version>
  <configuration>
    <images>
      <image>
        <name>172.30.182.75:5000/deploy-project/msa-billing</name>
        <alias>billing-svc</alias>
        <build>
          <dockerFileDir>/home/czhu/dockerBuild/billing/</dockerFileDir>
        </build>
      </image>
    </images>
    <enricher>
      <config>
        <fmp-controller>
          <name>billing-service</name>
        </fmp-controller>
      </config>
    </enricher>
  </configuration>
  <executions>
    <execution>
      <id>fmp</id>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
        <goal>push</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>3.0.2</version>
  <executions>
    <execution>
      <id>copy-resources</id>
      <phase>package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>/home/czhu/dockerBuild/billing/</outputDirectory>
        <resources>
          <resource>
            <directory>/home/czhu/MSA-EAP7-OSE/Billing/target</directory>
            <includes>
              <include>billing.war</include>
            </includes>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

For the Presentation service:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.2.28</version>
  <configuration>
    <images>
      <image>
        <name>172.30.182.75:5000/deploy-project/msa-presentation</name>
        <alias>presentation-svc</alias>
        <build>
          <dockerFileDir>/home/czhu/dockerBuild/presentation/</dockerFileDir>
        </build>
      </image>
    </images>
    <enricher>
      <config>
        <fmp-controller>
          <name>presentation</name>
        </fmp-controller>
      </config>
    </enricher>
  </configuration>
  <executions>
    <execution>
      <id>fmp</id>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
        <goal>push</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>3.0.2</version>
  <executions>
    <execution>
      <id>copy-resources</id>
      <phase>package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>/home/czhu/dockerBuild/presentation/</outputDirectory>
        <resources>
          <resource>
            <directory>/home/czhu/MSA-EAP7-OSE/Presentation/target</directory>
            <includes>
              <include>ROOT.war</include>
            </includes>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

For the Sales service:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.2.28</version>
  <configuration>
    <images>
      <image>
        <name>172.30.182.75:5000/deploy-project/msa-sales</name>
        <alias>sales-svc</alias>
        <build>
          <dockerFileDir>/home/czhu/dockerBuild/sales/</dockerFileDir>
        </build>
      </image>
    </images>
    <enricher>
      <config>
        <fmp-controller>
          <name>sales-service</name>
        </fmp-controller>
      </config>
    </enricher>
    <env>
      <MYSQL_USER>sales</MYSQL_USER>
      <MYSQL_PASSWORD>password</MYSQL_PASSWORD>
    </env>
  </configuration>
  <executions>
    <execution>
      <id>fmp</id>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
        <goal>push</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>3.0.2</version>
  <executions>
    <execution>
      <id>copy-resources</id>
      <phase>package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>/home/czhu/dockerBuild/sales/</outputDirectory>
        <resources>
          <resource>
            <directory>/home/czhu/MSA-EAP7-OSE/Sales/target</directory>
            <includes>
              <include>sales.war</include>
            </includes>
          </resource>
          <resource>
            <directory>/home/czhu/MSA-EAP7-OSE/Sales/configuration</directory>
            <includes>
              <include>standalone-openshift.xml</include>
            </includes>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

The svc.yml resource fragment would also be created under the src/main/fabric8 folder for each service, as described below.
For the Billing service:
apiVersion: v1
kind: Service
metadata:
  name: billing-service
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
For the Presentation service:
apiVersion: v1
kind: Service
metadata:
  name: presentation
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP

For the Sales service:
apiVersion: v1
kind: Service
metadata:
  name: sales-service
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP

Once all four services are created in OpenShift, create the route using the OpenShift CLI:
$ oc expose service presentation --hostname=msa.example.com
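As a final check, confirm the route and backing pods before browsing to the application; a minimal sketch:

$ oc get route presentation
$ oc get pods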
