Red Hat JBoss Fuse Integration Services 2.0 for OpenShift

Red Hat xPaaS 0

Installing and developing with the Fuse 2.0 xPaaS Image

Red Hat xPaaS Documentation Team

Legal Notice

Copyright © 2016 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.

Abstract

Guide to using the Red Hat xPaaS Fuse Integration Services 2.0 image

Chapter 1. Release Notes

Important

This documentation is for a Technical Preview of Fuse Integration Services 2.0. This is not a supported release and both the documentation and the product are liable to change between now and the GA release.

1.1. What’s New

The following features are new in Fuse Integration Services 2.0:

OpenShift Container Platform 3.3
This release supports version 3.3 of OpenShift Container Platform (previously known as OpenShift Enterprise).
S2I binary workflow
This release introduces the S2I binary workflow, which simplifies the workflow for developers building projects on their local machine. For more details, see Section 4.5, “Create and Deploy a Project Using the S2I Binary Workflow”.
Spring Boot
This release now supports Spring Boot, providing an image that runs a Spring Boot container. New Spring Boot based quickstarts and templates are also provided, which enable you to get started with building and deploying applications to the Spring Boot container.

1.2. Deprecated and Removed Features

The following features have been deprecated or removed from the Fuse Integration Services (FIS) 2.0 release:

  • The FIS 1.0 Fabric8 Maven workflow using docker builds outside OpenShift is deprecated in FIS 2.0. The supported workflows for building and deploying FIS 2.0 applications are the S2I source workflow and the S2I binary workflow. The FIS 2.0 version of the Fabric8 Maven plug-in now uses OpenShift S2I binary builds by default.
  • Support for the packaging and deployment of applications using the Hawt App launcher and Camel CDI has been removed in this release. The recommended approach in FIS 2.0 is to use the Spring Boot launcher and Camel Spring Boot. We recommend that you migrate any legacy Hawt App based applications to use Spring Boot instead.

1.3. Known Issues

The following issues are known to affect Fuse Integration Services 2.0:

OSFUSE-422 Quickstarts READMEs: remove links to fabric8, add info about OSE and templates
README files in archetypes may contain upstream-specific or outdated information not relevant to FIS 2.0.
OSFUSE-481 openshift-client can’t access OCP when OPENSHIFT_URL is used
The OPENSHIFT_URL environment variable must not be used with the fabric8-maven-plugin in the context of the S2I binary workflow, because the value is not evaluated correctly and the fabric8-maven-plugin tries to access the wrong URL. Running oc login is sufficient to access an OpenShift Container Platform 3.3 or CDK 2.3 cluster.
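
For example, a workaround sketch in a shell where OPENSHIFT_URL happens to be set is to unset the variable and authenticate with oc login instead; the URL and credentials shown here are the CDK defaults used elsewhere in this guide:

unset OPENSHIFT_URL
oc login -u openshift-dev -p devel https://10.1.2.2:8443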

1.4. API Changes

Note the following API changes in this release:

  • Although the Kubernetes client API and Domain Specific Language (DSL) are supported for use in application code, they are currently evolving at a rapid rate and are therefore liable to introduce code-breaking changes between minor releases.

1.5. Supported Camel Versions

In this release, the supported version of Apache Camel depends on which container image you are using, as follows:

Spring Boot container image
Camel 2.18.1
Karaf container image
Camel 2.17 (same version as JBoss Fuse 6.3)
Note

Only the productized distributions of Camel 2.18.1 and Camel 2.17 are supported. For the exact version numbers (which have a build number embedded in them), see the dependencies declared in the pom.xml files of the FIS 2.0 quickstarts and archetypes.
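
If you want to see exactly which productized Camel version a quickstart resolves to, one option (a suggested check, not part of the official documentation) is to run the Maven dependency plug-in from the quickstart's directory:

mvn dependency:tree -Dincludes=org.apache.camel:camel-core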

1.6. Camel Components not Supported with Spring Boot

In general, Fuse Integration Services supports the same subset of Apache Camel components as the main JBoss Fuse product. In the context of Spring Boot applications, however, note that the following Camel components are not supported:

  • camel-blueprint (intended for OSGi only)
  • camel-cdi (intended for CDI only)
  • camel-core-osgi (intended for OSGi only)
  • camel-ejb (intended for JEE only)
  • camel-eventadmin (intended for OSGi only)
  • camel-ibatis (camel-mybatis-starter is included)
  • camel-jclouds
  • camel-mina (camel-mina2-starter is included)
  • camel-paxlogging (intended for OSGi only)
  • camel-quartz (camel-quartz2-starter is included)
  • camel-spark-rest
  • camel-swagger (camel-swagger-java-starter is included)
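
Where the list above indicates that a Spring Boot starter is included, you can depend on the starter artifact instead of the unsupported component. For example, a replacement for camel-quartz might look like the following sketch; the version is omitted on the assumption that it is managed by the BOM or parent POM imported by your project:

<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-quartz2-starter</artifactId>
  <!-- version assumed to be managed by the project's dependencyManagement/BOM -->
</dependency>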

Chapter 2. Introduction to Fuse Integration Services 2.0 Image

Important

This documentation is for a Technical Preview of Fuse Integration Services 2.0. This is not a supported release and both the documentation and the product are liable to change between now and the GA release.

2.1. What Is JBoss Fuse Integration Services?

Red Hat JBoss Fuse Integration Services provides a set of tools and containerized xPaaS images that enable development, deployment, and management of integration microservices within OpenShift.

Important

There are significant differences in supported configurations and functionality in Fuse Integration Services compared to the standalone JBoss Fuse product.

Chapter 3. Before You Begin

Important

This documentation is for a Technical Preview of Fuse Integration Services 2.0. This is not a supported release and both the documentation and the product are liable to change between now and the GA release.

3.1. Comparison: Fuse and Fuse Integration Services

There are several major functionality differences:

  • Fuse Management Console is not included as Fuse administration views have been integrated directly within the OpenShift Web Console.
  • An application deployment with Fuse Integration Services consists of an application and all required runtime components packaged inside a Docker-formatted container image. Applications are not deployed to a runtime as they are with Fuse; the application image itself is a complete runtime environment that is deployed and managed through OpenShift.
  • Patching in an OpenShift environment is different from standalone Fuse since each application image is a complete runtime environment. To apply a patch, the application image is rebuilt and redeployed within OpenShift. Core OpenShift management capabilities allow for rolling upgrades and side-by-side deployment to maintain availability of your application during upgrade.
  • Provisioning and clustering capabilities provided by Fabric in Fuse have been replaced with equivalent functionality in Kubernetes and OpenShift. There is no need to create or configure individual child containers as OpenShift automatically does this for you as part of deploying and scaling your application.
  • Messaging services are created and managed using the A-MQ xPaaS images for OpenShift; they are not included directly within Fuse Integration Services.
  • Making live updates to running Karaf instances through the Karaf shell is strongly discouraged, because such updates are not preserved if an application container is restarted or scaled up. This is a fundamental tenet of immutable architecture and is essential to achieving scalability and reproducible deployments within OpenShift.

Additional details on technical differences and support scope are documented in an associated KCS article.

3.2. Version Compatibility and Support

See the xPaaS part of the OpenShift and Atomic Platform Tested Integrations page for details about OpenShift image version compatibility.

Chapter 4. Get Started for Developers

Important

This documentation is for a Technical Preview of Fuse Integration Services 2.0. This is not a supported release and both the documentation and the product are liable to change between now and the GA release.

You can start using Fuse Integration Services by creating an application and deploying it to OpenShift using one of the following OpenShift Source-to-Image (S2I) application development workflows:

S2I source workflow
Using the FIS 2.0 quickstart templates from the OpenShift console.
S2I binary workflow
Using the FIS 2.0 Maven archetypes and the Fabric8 Maven plug-in.
Note

For the Technical Preview release, the instructions in this chapter currently target a Linux operating system.

4.1. Prerequisites

4.1.1. Access to an OpenShift Server

The fundamental requirement for developing and testing FIS projects is having access to an OpenShift Server. You have the following basic alternatives:

Note

Fuse Integration Services requires CDK version 2.3, which is not yet generally available at the time of the FIS 2.0 Tech Preview release. CDK version 2.2 includes an incompatible version of OpenShift and should not be used. The CDK 2.3 release is expected shortly after the FIS 2.0 Tech Preview release — please follow the CDK Updates page to check on availability of CDK 2.3.

4.1.1.1. Install the Container Development Kit (CDK) on your local machine

In most cases, the most practical alternative for a developer is to install the Red Hat CDK on their local machine. Using the CDK, you can boot a virtual machine (VM) instance that runs an image of OpenShift on Red Hat Enterprise Linux (RHEL) 7. An installation of the CDK consists of the following key components:

  • A virtual machine (libvirt, VirtualBox, or Hyper-V)
  • Vagrant (script for configuring and booting the VM image)
  • Vagrant plug-ins (providing special features of the CDK)
  • VM image of OpenShift on RHEL 7

Installing the CDK is a fairly lengthy and complex procedure. For FIS 2.0 you need to install version 2.3 of the CDK. Detailed instructions for installing and using the CDK 2.3 are provided in the following guides:

If you opt to use the CDK, it is recommended that you read and thoroughly understand the content of the preceding guides before proceeding with the examples in this chapter.

Important

OpenShift has fairly strict requirements for the versions of the CDK components you use. Make sure that you follow the guidance given in the Installation Guide. If necessary, you might have to uninstall an existing component and re-install the correct version.

4.1.1.2. Get remote access to an existing OpenShift Server

Your IT department might already have set up an OpenShift cluster on some server machines. In this case, the following requirements must be satisfied for getting started with FIS 2.0:

  • The server machines must be running OpenShift Container Platform 3.3.
  • Ask the OpenShift administrator to install the FIS container base images and the FIS templates on the OpenShift servers.
  • Ask the OpenShift administrator to create a user account for you, having the usual developer permissions (enabling you to create, deploy, and run OpenShift projects).
  • Ask the administrator for the URL of the OpenShift Server (which you can use either to browse to the OpenShift console or connect to OpenShift using the oc command-line client) and the login credentials for your account.
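
For example, once you have the server URL and your credentials, you can log in from the command line as shown in the following sketch, where the URL and user name are placeholders to be replaced with the values provided by your administrator:

oc login -u <username> https://<openshift-server>:8443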

4.1.2. Java Version

On your developer machine, make sure you have installed a Java version that is supported by JBoss Fuse 6.3. For details of the supported Java versions, see Supported Configurations.
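
For example, you can check which Java version is on your path as follows:

$ java -version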

4.1.3. Install the Requisite Client-Side Tools

We recommend that you have the following tools installed on your developer machine:

Apache Maven 3.3.x
Required for local builds of OpenShift projects. Download the appropriate package from the Apache Maven download page. Make sure that you have version 3.3.x or later installed; otherwise, Maven might have problems resolving dependencies when you build your project.
Git
Required for the OpenShift S2I source workflow and generally recommended for source control of your FIS projects. Download the appropriate package from the Git Downloads page.
OpenShift client

If you are using the CDK, the oc client tool can conveniently be installed as follows:

$ vagrant service-manager install-cli openshift
Docker client

If you are using the CDK, the docker client tool can conveniently be installed as follows:

$ vagrant service-manager install-cli docker
Important

Make sure that you install versions of the oc tool and the docker tool that are compatible with the version of OpenShift running on the OpenShift Server. An advantage of using the Vagrant service-manager approach is that you automatically get the right version of the tools. For more details, see Red Hat CDK 2.3 Installation Guide.
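
As a quick sanity check (a suggestion, not a formal requirement), you can compare the client and server versions reported by each tool; the output varies with your installation:

$ oc version
$ docker version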

4.2. Prepare Your Development Environment

After installing the required software and tools, prepare your development environment as follows.

4.2.1. Configure Maven Repositories

Configure the Maven repositories, which hold the archetypes and artifacts you will need for building an FIS project on your local machine. Edit your Maven settings.xml file, which is usually located in ~/.m2/settings.xml (on Linux or Mac OS) or Documents and Settings\<USER_NAME>\.m2\settings.xml (on Windows). The following Maven repositories are required:

You must add the preceding repositories both to the dependency repositories section and to the plug-in repositories section of your settings.xml file.
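
The following sketch shows the general shape of such a settings.xml entry. It is illustrative only: the profile and repository ids are arbitrary names, and the URL shown is the Red Hat early-access repository that also appears in the archetype catalog URL later in this chapter; substitute the repositories listed above as appropriate. The same repository definition is duplicated under pluginRepositories so that Maven plug-ins can also be resolved:

<settings>
  <profiles>
    <profile>
      <!-- Illustrative profile name -->
      <id>fis-repositories</id>
      <repositories>
        <repository>
          <id>redhat-earlyaccess</id>
          <url>https://maven.repository.redhat.com/earlyaccess/all/</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>redhat-earlyaccess</id>
          <url>https://maven.repository.redhat.com/earlyaccess/all/</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>fis-repositories</activeProfile>
  </activeProfiles>
</settings>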

4.3. (CDK Only) Prepare the Virtual OpenShift Server

If you are using the CDK, you need to install the requisite FIS components into the OpenShift Server, as follows:

  1. Start the virtual OpenShift Server. Open a command prompt and change to the directory where your Vagrantfile is located (typically in the rhel-ose subdirectory of the downloaded container tools) before booting the OpenShift image:

    $ cd <CDK_UNZIPPED>/cdk/components/rhel/rhel-ose
    $ vagrant up
    Note

    After the virtual machine has booted, the vagrant script prompts you to register the box with vagrant-registration. The first time you run the vagrant script you must enter the credentials for your Red Hat subscription (the same username and password you would normally use to log on to the Customer Portal, https://access.redhat.com).

  2. Log in to the virtual OpenShift Server as an administrator, as follows:

    oc login -u admin -p admin
    Note

    The admin user is a standard account that is automatically created on the virtual OpenShift Server by the CDK.

  3. Install the FIS templates. Enter the following commands at a command prompt:

    Warning

    The following commands overwrite any existing FIS 1.0 image streams with the FIS 2.0 Tech Preview versions. To continue using the FIS 1.0 image streams on the same OpenShift instance, install the FIS 2.0 Tech Preview image streams into a different namespace (using the -n parameter of the oc replace command).

    BASEURL=https://raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.0.redhat-000026
    oc replace --force -n openshift -f ${BASEURL}/fis-image-streams.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/karaf2-camel-amq-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/karaf2-camel-log-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/karaf2-camel-rest-sql-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/karaf2-cxf-rest-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/springboot-camel-amq-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/springboot-camel-jdg-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/springboot-camel-rest-sql-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/springboot-camel-template.json
    oc replace --force -n openshift -f ${BASEURL}/quickstarts/springboot-camel-xml-template.json
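
To confirm that the image streams and templates were installed, you can list them in the target namespace. The following commands are a suggested check rather than part of the official procedure:

oc get is -n openshift
oc get templates -n openshift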

4.4. Create and Deploy a Project Using the S2I Source Workflow

In this section, you will use the OpenShift S2I source workflow to build and deploy an FIS project based on a template. The starting point for this demonstration is a quickstart project stored in a remote Git repository. Using the OpenShift console, you will download, build, and deploy this quickstart project in the OpenShift server.

  1. Navigate to the OpenShift console in your browser (https://10.1.2.2:8443, in the case of the CDK) and log in to the console with your credentials (for example, with username openshift-dev and password devel).
  2. On the Projects screen of the OpenShift console, click New Project.
  3. On the New Project screen, enter test in the Name field and click Create. The Select Image or Template screen now opens.
  4. On the Select Image or Template screen, scroll down to the Java templates and click on the s2i-karaf2-camel-log template.

    Select Image or Template screen showing the available Java templates
    Tip

    To see the complete list of Java templates, click See all.

  5. The s2i-karaf2-camel-log template form opens, as shown below. You can accept all of the default settings on this form. Scroll down to the bottom of the form and click Create.

    The s2i-karaf2-camel-log template form
    Note

    If the FIS 2.0 Tech Preview image streams have been installed to a namespace other than the default openshift namespace (see Section 4.3, “(CDK Only) Prepare the Virtual OpenShift Server”), change the Image Stream Namespace parameter to the namespace where the FIS 2.0 Tech Preview image streams have been installed.

    Note

    If you want to modify the application code (instead of just running the quickstart as is), you would need to fork the original quickstart Git repository and fill in the appropriate values in the Git repository URL and Git reference fields. For more details of this process, see Section 5.2, “S2I Source Workflow”.

  6. The Application created screen now opens. Click Continue to overview to go to the Overview tab of the OpenShift console (which shows an overview of the available services and pods in the current project). If you have not previously created any application builds in this project, this screen will be empty.
  7. In the navigation pane on the left-hand side, select Browse→Builds to open the Builds screen.
  8. Click the s2i-karaf2-camel-log build name to open the s2i-karaf2-camel-log build page, as shown below.

    The s2i-karaf2-camel-log build page
  9. Click View log to view the log for the latest build — if the build should fail for any reason, the build log can help you to diagnose the problem.
  10. If the build completes successfully, click Overview in the left-hand navigation pane to view the running pod for this application.
  11. Click in the centre of the pod icon (blue circle) to view the list of pods for s2i-karaf2-camel-log.

    List of pods associated with the s2i-karaf2-camel-log service
  12. Click on the pod Name (in this example, s2i-karaf2-camel-log-1-n1a5f) to view the details of the running pod.

    Detail view of the running pod for s2i-karaf2-camel-log
  13. Click on the Logs tab to view the application log and scroll down the log to find the Hello World messages generated by the Camel application.

    View of the application log showing Hello World messages
  14. Click Overview on the left-hand navigation bar to return to the overview of the services in the test namespace. To shut down the running pod, click the down arrow beside the pod icon. When a dialog prompts you with the question Scale down deployment s2i-karaf2-camel-log-1?, click Scale Down.
  15. (Optional) If you are using the CDK, you can shut down the virtual OpenShift Server completely by returning to the command prompt that you used to start the virtual machine (that is, the directory containing the Vagrantfile) and entering the following command:

    vagrant halt

4.5. Create and Deploy a Project Using the S2I Binary Workflow

In this section, you will use the OpenShift S2I binary workflow to create, build, and deploy an FIS project.

  1. Create a new FIS project using a Maven archetype. For this example, we use an archetype that creates a sample Spring Boot Camel project. Open a new command prompt and enter the following Maven command:

    mvn archetype:generate \
      -DarchetypeCatalog=https://maven.repository.redhat.com/earlyaccess/all/io/fabric8/archetypes/archetypes-catalog/2.2.180.redhat-000004/archetypes-catalog-2.2.180.redhat-000004-archetype-catalog.xml \
      -DarchetypeGroupId=org.jboss.fuse.fis.archetypes \
      -DarchetypeArtifactId=spring-boot-camel-archetype \
      -DarchetypeVersion=2.2.180.redhat-000004

    The archetype plug-in switches to interactive mode to prompt you for the remaining fields:

    Define value for property 'groupId': : org.example.fis
    Define value for property 'artifactId': : fis-spring-boot
    Define value for property 'version':  1.0-SNAPSHOT: :
    Define value for property 'package':  org.example.fis: :
    [INFO] Using property: spring-boot-version = 1.4.1.RELEASE
    Confirm properties configuration:
    groupId: org.example.fis
    artifactId: fis-spring-boot
    version: 1.0-SNAPSHOT
    package: org.example.fis
    spring-boot-version: 1.4.1.RELEASE
     Y: :

    When prompted, enter org.example.fis for the groupId value and fis-spring-boot for the artifactId value. Accept the defaults for the remaining fields.

  2. If the previous command exited with the BUILD SUCCESS status, you should now have a new FIS project under the fis-spring-boot subdirectory. You can inspect the Java application code in the fis-spring-boot/src/main/java/org/example/fis/Application.java file. The demonstration code defines a simple Camel route that continuously sends Hello World messages to the log; a minimal sketch of such a route is shown after this procedure.
  3. In preparation for building and deploying the FIS project, log in to the OpenShift Server as follows:

    oc login -u openshift-dev -p devel https://10.1.2.2:8443
    Note

    The openshift-dev user (with devel password) is a standard account that is automatically created on the virtual OpenShift Server by the CDK. If you are accessing a remote server, use the URL and credentials provided by your OpenShift administrator.

  4. Create a new project namespace called test (assuming it does not already exist), as follows:

    oc new-project test

    If the test project namespace already exists, you can switch to it using the following command:

    oc project test
  5. You are now ready to build and deploy the fis-spring-boot project. Assuming you are still logged into OpenShift, change to the directory of the fis-spring-boot project, and then build and deploy the project, as follows:

    cd fis-spring-boot
    mvn fabric8:deploy

    The first time you run this command, Maven has to download quite a lot of dependencies, which could take a few minutes. At the end of a successful build, you should see some output like the following:

    ...
    [INFO] OpenShift platform detected
    [INFO] Using project: test
    [INFO] Creating a Service from openshift.yml namespace test name fis-spring-boot
    [INFO] Created Service: target/fabric8/applyJson/test/service-fis-spring-boot.json
    [INFO] Updating ImageStream fis-spring-boot from openshift.yml
    [INFO] Creating a DeploymentConfig from openshift.yml namespace test name fis-spring-boot
    [INFO] Created DeploymentConfig: target/fabric8/applyJson/test/deploymentconfig-fis-spring-boot.json
    [INFO] F8: HINT: Use the command `oc get pods -w` to watch your pods start up
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 01:23 min
    [INFO] Finished at: 2016-11-10T17:46:05+01:00
    [INFO] Final Memory: 66M/666M
    [INFO] ------------------------------------------------------------------------
  6. Navigate to the OpenShift console in your browser (https://10.1.2.2:8443, in the case of the CDK) and log in to the console with your credentials (for example, with username openshift-dev and password devel).
  7. In the OpenShift console, scroll down to find the test project namespace. Click the test project and an overview of the fis-spring-boot service opens, as shown.

    OpenShift console test namespace overview showing fis-spring-boot service and associated pods
  8. Click in the centre of the pod icon (blue circle) to view the list of pods for fis-spring-boot.

    List of pods associated with the fis-spring-boot service
  9. Click on the pod Name (in this example, fis-spring-boot-1-1rieh) to view the details of the running pod.

    Detail view of the running pod for fis-spring-boot
  10. Click on the Logs tab to view the application log and scroll down the log to find the Hello World messages generated by the Camel application.

    View of the application log showing Hello World messages
  11. Click Overview on the left-hand navigation bar to return to the overview of the services in the test namespace. To shut down the running pod, click the down arrow beside the pod icon. When a dialog prompts you with the question Scale down deployment fis-spring-boot-1?, click Scale Down.
  12. (Optional) If you are using the CDK, you can shut down the virtual OpenShift Server completely by returning to the command prompt that you used to start the virtual machine (that is, the directory containing the Vagrantfile) and entering the following command:

    vagrant halt
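
As a point of reference for step 2 above, the Camel route generated by the spring-boot-camel-archetype is conceptually similar to the following minimal sketch. The class name and endpoint options here are illustrative, not the exact generated code; Camel Spring Boot automatically picks up any RouteBuilder that is registered as a Spring bean.

package org.example.fis;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

// Illustrative sketch only; the archetype-generated code may differ in detail.
@Component
public class HelloRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Fire a timer every five seconds and write a greeting to the application log
        from("timer://hello?period=5000")
            .setBody().constant("Hello World")
            .log("${body}");
    }
}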

4.6. Tips and Tricks

Here are a few tips that you might find useful when working with OpenShift.

4.6.1. (CDK Only) Using SSH to Log On to the Virtual OpenShift Server

You can log in directly to the RHEL 7 image on the virtual machine using SSH, which gives you a command prompt on the virtual RHEL 7 operating system. Change directory to the location of your Vagrantfile and use Vagrant to log in, as follows:

$ cd <CDK_UNZIPPED>/cdk/components/rhel/rhel-ose
$ vagrant ssh
[vagrant@rhel-cdk ~]$ pwd
/home/vagrant

You are not prompted for credentials, because the login authentication is based on SSH keys that Vagrant has automatically configured and installed. You are logged in as the vagrant user. If you need superuser permissions on the virtual machine, use sudo; the vagrant user has sudo permissions and is not required to enter a password when invoking sudo.

4.6.2. (CDK Only) Sharing Files with the Virtual Machine

Vagrant automatically mounts your home directory on the virtual machine, which is convenient for transferring data and sharing files. For example, if your username is myusername, after you log in to the virtual machine through SSH you will be able to see the contents of your home directory:

$ cd <CDK_UNZIPPED>/cdk/components/rhel/rhel-ose
$ vagrant ssh
[vagrant@rhel-cdk ~]$ ls /home
myusername  vagrant
[vagrant@rhel-cdk ~]$ cd /home/myusername

This feature is implemented by the CDK’s sshfs plug-in for Vagrant.

4.6.3. (CDK Only) Customizing the Docker Daemon Configuration

If you want to customize the configuration of the Docker daemon running on the virtual machine, log into the virtual machine and edit the Docker daemon configuration file, as follows:

  1. Log in to the virtual machine:

    $ cd <CDK_UNZIPPED>/cdk/components/rhel/rhel-ose
    $ vagrant ssh
    [vagrant@rhel-cdk ~]$
  2. Edit the Docker daemon configuration file:

    [vagrant@rhel-cdk ~]$ sudo vi /etc/sysconfig/docker
  3. Restart the Docker daemon:

    [vagrant@rhel-cdk ~]$ sudo systemctl restart docker.service
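
As an illustration of the kind of change you might make (an example only, not a required or recommended setting), the RHEL Docker configuration file exposes variables such as OPTIONS and INSECURE_REGISTRY. For instance, a service network such as 172.30.0.0/16 (assumed here to be the OpenShift service network; adjust to your environment) could be marked as an insecure registry like this:

# Excerpt from /etc/sysconfig/docker (illustrative only)
INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'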

Chapter 5. Build and Deploy Your Application

5.1. Deployment Workflows

This section gives an overview of the workflows available for building and deploying applications on FIS.

5.2. S2I Source Workflow

Describes the S2I source workflow.

5.3. S2I Binary Workflow

Describes the S2I binary workflow.