Development Guide
For Red Hat JBoss Developers
Abstract
Part I. Overview
Chapter 1. About This Guide
- Detailed Architecture of JBoss BRMS and JBoss BPM Suite.
- Detailed description of how to author, test, debug, and package simple and complex business rules and processes using an Integrated Development Environment (IDE).
- JBoss BRMS runtime environment.
- Domain specific languages (DSLs) and how to use them in a rule.
- Complex event processing.
- Overview: This section provides detailed information on JBoss BRMS and JBoss BPM Suite, their architecture, and key components. It also discusses the role of Maven in building and deploying projects.
- All About Rules: This section provides everything you need to know to author rules with JBoss Developer Studio. It describes the rule algorithms, rule structure, components, advanced conditions, constraints, commands, Domain Specific Languages, and Complex Event Processing. It provides details on how to use the various views, editors, and perspectives that JBoss Developer Studio offers.
- All About Processes: This section describes what comprises a business process and how to author and test business processes using JBoss Developer Studio.
- KIE: This section highlights the KIE API with a detailed description of how to create, build, deploy, and run KIE projects.
- Appendix: This section comprises important reference material such as key knowledge terms and examples.
1.1. Audience
- Authors of rules and processes who are responsible for authoring and testing business rules and processes using JBoss Developer Studio.
- Java application developers responsible for developing and integrating business rules and processes into Java and Java EE enterprise applications.
1.2. Prerequisites
- Basic Java/Java EE programming experience
- Knowledge of the Eclipse IDE, Maven and GIT
Chapter 2. JBoss BRMS And JBoss BPM Suite Architecture
2.1. JBoss Business Rules Management System
2.1.1. JBoss BRMS Key Components
- Drools Expert: Drools Expert is a pattern-matching based rule engine that runs on Java EE application servers, on the JBoss BRMS platform, or bundled with Java applications. It comprises an inference engine, a production memory, and a working memory. Rules are stored in the production memory, and the facts that the inference engine matches the rules against are stored in the working memory.
- Business Central: Business Central is a web interface intended for business analysts to create and maintain business rules and rule artifacts. It is designed to ease creation, testing, and packaging of rules for business users.
- Drools Flow: Drools Flow provides business process capabilities to the JBoss BRMS platform. This framework can be embedded into any Java application or can run standalone on a server. A business process describes stepwise tasks, using a flow chart, for the rule engine to execute.
- Drools Fusion: Drools Fusion provides event processing capabilities to the JBoss BRMS platform. Drools Fusion defines a set of goals to be achieved, such as:
- Support events as first class citizens.
- Support detection, correlation, aggregation and composition of events.
- Support processing streams of events.
- Support temporal constraints in order to model the temporal relationships between events.
- Drools Integrated Development Environment (IDE): We encourage you to use Red Hat JBoss Developer Studio (JBDS) with the JBoss BRMS plug-ins to develop and test business rules. JBoss Developer Studio builds upon Eclipse, an extensible, open source, Java-based IDE, providing platform and framework capabilities that make it ideal for JBoss BRMS rules development.
2.1.2. JBoss BRMS Features
- Centralized repository of business assets (JBoss BRMS artifacts)
- IDE tools to define and govern decision logic
- Building, deploying, and testing the decision logic
- Packages of business assets
- Categorization of business assets
- Integration with development tools
- Business logic and data separation
- Business logic open to reuse and changes
- Easy to maintain business logic
- Enables several stakeholders (business analysts, developers, administrators) to contribute to defining the business logic
2.2. JBoss Business Process Management Suite
2.2.1. JBoss BPM Suite Key Components
- JBoss BPM Central (Business Central): Business Central is a web-based application for creating, editing, building, managing, and monitoring JBoss BPM Suite business assets. It also allows execution of business processes and management of tasks created by those processes.
- Business Activity Monitoring Dashboards: The Business Activity Monitor (BAM) dashboard provides report generation capabilities. It allows you to use a predefined dashboard or create your own customized dashboards.
- Maven Artifact Repository: JBoss BPM Suite projects are built as Apache Maven projects, and the default location of the Maven repository is <working-directory>/repositories/kie. You can specify an alternate repository location by changing the org.guvnor.m2repo.dir property. Each project builds a JAR artifact file called a kJAR. You can store your project artifacts and dependent JARs in this repository.
- Execution Engine: The JBoss BPM Suite execution engine is responsible for executing business processes and managing the tasks that result from these processes. Business Central provides a user interface for executing processes and managing tasks.
Note
To execute your business processes, you can use the Business Central web application, which bundles the execution engine and provides a ready-to-use process execution environment. Alternatively, you can create your own execution server and embed the JBoss BPM Suite and JBoss BRMS libraries in your application in the standard Java EE way. For example, if you are developing a web application, include the JBoss BPM Suite/BRMS libraries in the WEB-INF/lib folder of your application.
- Business Central Repository: The business artifacts of a JBoss BPM Suite project, such as process models, rules, and forms, are stored in Git repositories managed through Business Central. You can also access these repositories outside of Business Central through the Git or SSH protocols.
2.2.2. JBoss BPM Suite Features
- Pluggable human task service based on WS-HumanTask for including tasks that need to be performed by human actors.
- Pluggable persistence and transactions (based on JPA / JTA).
- Web-based process designer to support the graphical creation and simulation of your business processes (drag and drop).
- Web-based data modeler and form modeler to support the creation of data models and process and task forms.
- Web-based, customizable dashboards and reporting.
- A web-based workbench called Business Central, supporting the complete BPM life cycle:
- Modeling and deployment: To author your processes, rules, data models, forms and other assets.
- Execution: To execute processes, tasks, rules and events on the core runtime engine.
- Runtime Management: To work on assigned tasks and manage process instances.
- Reporting: To keep track of the execution using Business Activity Monitoring capabilities.
- Eclipse-based developer tools to support the modeling, testing and debugging of processes.
- Remote API to process engine as a service (REST, JMS, Remote Java API).
- Integration with Maven, Spring, and OSGi.
2.3. Supported Platforms
- Red Hat JBoss Enterprise Application Platform 6.4
- Red Hat JBoss Web Server 2.1 (Tomcat 7) on JDK 1.7
- IBM WebSphere Application Server 8.5.5.0
- Oracle WebLogic Server 12.1.3 (12c)
2.4. Use Cases
2.4.1. Use Case: Business Decision Management in the Insurance Industry with Red Hat JBoss BRMS

Figure 2.1. JBoss BRMS Use Case: Insurance Industry Decision Making
2.4.2. Use Case: Process-based solutions in the loan industry

Figure 2.2. High-level loan application process flow

Figure 2.3. Loan Application Process Automation
Chapter 3. Maven Dependencies
- The build process is easy and a uniform build system is implemented across projects.
- All the required jar files for a project are made available at compile time.
- A proper project structure is set up.
- Dependencies and versions are well managed.
- No need for additional build processing, as Maven builds output into a number of predefined types, such as jar and war.
3.1. Maven Repositories
3.2. Using Maven Repository in Your Project
- By configuring the project's POM file (pom.xml).
- By modifying the Maven settings file (settings.xml).
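For example, a repository can be declared directly in the project's pom.xml as in the following sketch (the repository id and URL below are placeholders; substitute the repository you actually use):

<repositories>
  <repository>
    <id>jboss-ga-repository</id>
    <url>http://maven.repository.redhat.com/techpreview/all</url>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>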
3.3. Maven Configuration File
Every Maven project contains a pom.xml file that holds the configuration details for the project.
pom.xml is an XML file that contains information about the project (such as the project name, version, description, developers, mailing list, and license) and build details (such as dependencies, the location of the source, test, and target directories, plug-ins, and repositories).
Your project contains a pom.xml file by default. You can edit this file to add more dependencies and new repositories. Maven downloads all the JAR files and the dependent JAR files from the Maven repository when you compile and package your project.
The schema for the pom.xml file can be found at http://maven.apache.org/maven-v4_0_0.xsd.
3.4. Maven Settings File
The Maven settings file (settings.xml) is used to configure Maven execution. You can find this file in the following locations:
- In the Maven install directory at $M2_HOME/conf/settings.xml. These settings are called global settings.
- In the user's home directory at ${user.home}/.m2/settings.xml. These settings are called user settings.
- In the folder location specified by the system property kie.maven.settings.custom.
The following is an example settings.xml file:
<settings>
<profiles>
<profile>
<id>my-profile</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<repositories>
<repository>
<id>fusesource</id>
<url>http://repo.fusesource.com/nexus/content/groups/public/</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</repository>
...
</repositories>
</profile>
</profiles>
...
</settings>
3.5. Dependency Management
To manage the versions of your project dependencies, import the supplied BOM files into your project's pom.xml file. Adding the BOM files ensures that the correct versions of transitive dependencies from the provided Maven repositories are included in the project.
- org.jboss.bom.brms:jboss-brms-bpmsuite-bom:VERSION: This is the basic BOM without any Java EE 6 support.
- org.jboss.bom.brms:jboss-javaee-6.0-with-brms-bpmsuite:VERSION: This provides support for Java EE 6.
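A minimal sketch of importing one of these BOMs into the dependencyManagement section of pom.xml (replace VERSION with the BOM version that matches your product release):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.bom.brms</groupId>
      <artifactId>jboss-brms-bpmsuite-bom</artifactId>
      <version>VERSION</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>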
3.6. Integrated Maven Dependencies
These dependencies are declared in your project's pom.xml file and should be included as shown in the following example:
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.1.1-redhat-2</version>
<scope>compile</scope>
</dependency>
Note
3.7. Uploading Artifacts to Maven Repository
Your project may have dependencies on artifacts that are not available in any repository referenced by its pom.xml. In such cases, you can programmatically upload dependencies to JBoss BPM Suite by uploading artifacts to the embedded Maven repository through Business Central. JBoss BPM Suite uses a servlet for the Maven repository interactions. This servlet processes a GET request to download an artifact and a POST request to upload one. You can leverage the servlet's POST request to upload an artifact to the repository via REST. To do this, implement HTTP basic authentication and issue an HTTP POST request in the following format:
[protocol]://[hostname]:[port]/[context-root]/maven2/[groupId replacing '.' with '/']/[artifactId]/[version]/[artifactId]-[version].jar
For example, to upload the org.slf4j:slf4j-api:1.7.7 JAR, where groupId is org.slf4j, artifactId is slf4j-api, and version is 1.7.7, the URI must be:
http://localhost:8080/business-central/maven2/org/slf4j/slf4j-api/1.7.7/slf4j-api-1.7.7.jar
The following example shows a Java client that uploads a JAR file from the /tmp directory, as a user bpmsAdmin with the password abcd1234!, to an instance of JBoss BPM Suite running locally:
package com.rhc.example;
import java.io.File;
import java.io.IOException;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.AuthCache;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.entity.mime.HttpMultipartMode;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.entity.mime.content.FileBody;
import org.apache.http.impl.auth.BasicScheme;
import org.apache.http.impl.client.BasicAuthCache;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class UploadMavenArtifact {
private static final Logger LOG = LoggerFactory.getLogger(UploadMavenArtifact.class);
public static void main(String[] args) {
//Maven coordinates
String groupId = "com.rhc.example";
String artifactId = "bpms-upload-jar";
String version = "1.0.0-SNAPSHOT";
//File to upload
File file = new File("/tmp/"+artifactId+"-"+version+".jar");
//Server properties
String protocol = "http";
String hostname = "localhost";
Integer port = 8080;
String username = "bpmsAdmin";
String password = "abcd1234!";
//Create the HttpEntity (body of our POST)
FileBody fileBody = new FileBody(file);
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
builder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);
builder.addPart("upfile", fileBody);
HttpEntity entity = builder.build();
//Calculate the endpoint from the maven coordinates
String resource = "/business-central/maven2/" + groupId.replace('.', '/') + "/" + artifactId +"/" + version + "/" + artifactId + "-" + version + ".jar";
LOG.info("POST " + hostname + ":" + port + resource);
//Set up HttpClient to use Basic pre-emptive authentication with the provided credentials
HttpHost target = new HttpHost(hostname, port, protocol);
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(
new AuthScope(target.getHostName(), target.getPort()),
new UsernamePasswordCredentials(username,password));
CloseableHttpClient httpclient = HttpClients.custom()
.setDefaultCredentialsProvider(credsProvider).build();
HttpPost httpPost = new HttpPost(resource);
httpPost.setEntity(entity);
AuthCache authCache = new BasicAuthCache();
BasicScheme basicAuth = new BasicScheme();
authCache.put(target, basicAuth);
HttpClientContext localContext = HttpClientContext.create();
localContext.setAuthCache(authCache);
try {
//Perform the HTTP POST
CloseableHttpResponse response = httpclient.execute(target, httpPost, localContext);
LOG.info(response.toString());
//Now check your artifact repository!
} catch (ClientProtocolException e) {
LOG.error("Protocol Error", e);
throw new RuntimeException(e);
} catch (IOException e) {
LOG.error("IOException while getting response", e);
throw new RuntimeException(e);
}
}
}
An alternative Maven approach is to configure your project's pom.xml by adding the repository as shown below:
<distributionManagement>
<repository>
<id>guvnor-m2-repo</id>
<name>maven repo</name>
<url>http://localhost:8080/business-central/maven2/</url>
<layout>default</layout>
</repository>
</distributionManagement>
Once you specify the repository information in the pom.xml, add the corresponding configuration in settings.xml as shown below:
<server>
<id>guvnor-m2-repo</id>
<username>bpmsAdmin</username>
<password>abcd1234!</password>
<configuration>
<wagonProvider>httpclient</wagonProvider>
<httpConfiguration>
<all>
<usePreemptive>true</usePreemptive>
</all>
</httpConfiguration>
</configuration>
</server>
Now, when you run the mvn deploy command, the JAR file is uploaded.
3.8. Deploying Red Hat JBoss BPM Suite Artifacts to Red Hat JBoss Fuse
The deployable JBoss BPM Suite artifacts are OSGi bundles whose MANIFEST.MF files describe their dependencies, among other things. You can plug these JARs directly into an OSGi environment, such as Fuse.
Warning
When running in an OSGi environment, KIE-CI parses a kJAR's POM using the MinimalPomParser. The MinimalPomParser is a very simple POM parser implementation provided by Drools and is limited in what it can parse. It ignores some parts of the POM file, such as a kJAR's parent POM. This means that users must not rely on those POM features (such as dependencies declared in a parent POM of their kJARs) when using KIE-CI in an OSGi environment.
Separating assets and code
Chapter 4. Install and Setup JBoss Developer Studio
Warning
Ensure that the JBoss Developer Studio file encoding is set to UTF-8. You can do this by editing the $JBDS_HOME/studio/jbdevstudio.ini file and adding the following property: "-Dfile.encoding=UTF-8".
4.1. Installing the JBoss Developer Studio Plug-ins
Procedure 4.1. Install the JBoss BRMS and JBoss BPM Suite Plug-ins in JBoss Developer Studio 8
- Start JBoss Developer Studio.
- Select → .
- Click Add to enter the Add Repository menu.
- Provide a name for the software site in the Name field and add the following URL in the Location field: https://devstudio.jboss.com/updates/8.0/integration-stack/
- Click OK.
- Select JBoss Business Process and Rule Development from the available options, click Next, and then click Next again.
- Read and accept the license by selecting the appropriate radio button, and click Finish.
- Restart JBoss Developer Studio after the installation of the plug-ins has completed.
4.2. Configuring the JBoss BRMS/BPM Suite Server
Procedure 4.2. Configure the Server
- Open the Drools view by selecting → → and select Drools and click OK. To open the JBoss BPM Suite view, select → → and select jBPM and click OK.
- Add the server view by selecting → → and select → .
- Open the server menu by right clicking the Servers panel and select → .
- Define the server by selecting → and clicking Next.
- Set the home directory by clicking the Browse button. Navigate to and select the installation directory for JBoss EAP which has JBoss BRMS installed. For configuring JBoss BPM Suite server, select the installation directory which has JBoss BPM Suite installed.
- Provide a name for the server in the Name field, make sure that the configuration file is set, and click Finish.
4.3. Importing Projects from a Git Repository into JBoss Developer Studio
Procedure 4.3. Cloning a Remote Git Repository
- Start the Red Hat JBoss BRMS/BPM Suite server (whichever is applicable) by selecting the server in the Servers tab and clicking the start icon.
- Start the Secure Shell server, if it is not running already, by using the following command. This command applies to Linux and Mac only. On these platforms, if sshd has already been started, the command fails; in that case, you can safely ignore this step.
/sbin/service sshd start
- In JBoss Developer Studio, select → and navigate to the Git folder. Open the Git folder to select and click .
- Select the repository source as and click .
- Enter the details of the Git repository in the next window and click .

Figure 4.1. Git Repository Details
- Select the branch you wish to import in the following window and click .
- To define the local storage for this project, enter (or select) a non-empty directory, make any configuration changes and click .
- Import the project as a general project in the following window and click . Name the project and click .
Procedure 4.4. Importing a Local Git Repository
- Start the Red Hat JBoss BRMS/BPM Suite server (whichever is applicable) by selecting the server in the Servers tab and clicking the start icon.
- In JBoss Developer Studio, select → and navigate to the Git folder. Open the Git folder to select and click .
- Select the repository source as and click .

Figure 4.2. Git Repository Details
- Select the repository that is to be configured from the list of available repositories and click .
- In the dialog that opens, select the radio button from the and click . Name the project and click .

Figure 4.3. Wizard for Project Import
Part II. All About Rules
Chapter 5. Rule Algorithms
5.1. PHREAK Algorithm
- Three layers of contextual memory: Node, Segment and Rule memories.
- Rule, segment, and node based linking.
- Lazy (delayed) rule evaluation.
- Stack based evaluations with pause and resume.
- Isolated rule evaluation.
- Set oriented propagations.
5.2. Rule Evaluation With PHREAK Algorithm
5.3. Rete Algorithm
5.3.1. ReteOO
5.3.2. The Rete Root Node

Figure 5.1. ReteNode
5.3.3. The ObjectTypeNode
instanceof check.
5.3.4. AlphaNodes
5.3.5. Hashing
5.3.6. BetaNodes
5.3.7. Alpha Memory
5.3.8. Beta Memory
5.3.9. Lookups with BetaNodes
5.3.10. LeftInputNodeAdapters
5.3.11. Terminal Nodes
5.3.12. Node Sharing
rule "Likes cheddar"
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese == $cheddar )
then
    System.out.println( $person.getName() + " likes cheddar" );
end

rule "Does not like cheddar"
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese != $cheddar )
then
    System.out.println( $person.getName() + " does not like cheddar" );
end

The two rules share the same Cheese pattern, so the corresponding alpha node is shared, but each rule still ends in its own TerminalNode.

Figure 5.2. Node Sharing
5.4. Switching Between PHREAK and ReteOO
Switching Using System Properties
Set the drools.ruleEngine system property to one of the following values:
drools.ruleEngine=phreak
drools.ruleEngine=reteoo
To use ReteOO, the drools-reteoo module must also be available on the classpath; for Maven projects, add the following dependency:
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-reteoo</artifactId>
<version>${drools.version}</version>
</dependency>
Switching in KieBaseConfiguration
import org.kie.api.KieBase;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
...

KieServices kservices = KieServices.Factory.get();
KieBaseConfiguration kconfig = kservices.newKieBaseConfiguration();

// you can either specify phreak (default)
kconfig.setOption(RuleEngineOption.PHREAK);

// or legacy ReteOO
kconfig.setOption(RuleEngineOption.RETEOO);

// and then create a KieBase for the selected algorithm
// (getKieClasspathContainer() is just an example;
// kieBaseName is the name of a KieBase defined in kmodule.xml)
KieContainer container = kservices.getKieClasspathContainer();
KieBase kbase = container.newKieBase(kieBaseName, kconfig);
Note
Switching to ReteOO requires drools-reteoo-(version).jar to exist on the classpath. If it does not, the BRMS engine reverts back to PHREAK and issues a warning. This applies to switching with both KieBaseConfiguration and system properties.
Chapter 6. Getting Started with Rules and Facts
- BRMS parses all the .drl rule files into the knowledge base.
- Each fact is asserted into the working memory. As the facts are asserted, BRMS uses the PHREAK or ReteOO algorithm to infer how the facts relate to the rules. The working memory now contains a copy of the parsed rules and references to the facts.
- The fireAllRules() method is called. This triggers all the interactions between facts and rules: the rule engine evaluates all the rules against all the facts and concludes which rules should be fired against which facts.
- All the rule-fact combinations (where a particular rule matches against one or more sets of facts) are queued within a data construct called the agenda.
- Finally, activations are processed one by one from the agenda, calling the consequence of the rule on the facts that activated it. Note that the firing of an activation on the agenda can modify the contents of the agenda before the next activation is fired. The PHREAK or ReteOO algorithm handles such situations efficiently.
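The following is a minimal Java sketch that ties these steps together; it assumes a Person fact class and a kmodule.xml defining a default session on the classpath, both of which are created in the sections that follow:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class LifecycleSketch {
    public static void main(String[] args) {
        // Parse the .drl files on the classpath into the knowledge base
        KieServices ks = KieServices.Factory.get();
        KieContainer kContainer = ks.getKieClasspathContainer();
        KieSession kSession = kContainer.newKieSession();

        // Assert facts into the working memory
        Person p = new Person();
        p.setFirstName("Tom");
        p.setLastName("Summers");
        p.setWage(12);
        p.setHourlyRate(10);
        kSession.insert(p);

        // Match rules against facts, queue the resulting activations on
        // the agenda, and fire them one by one
        kSession.fireAllRules();

        // Release the session resources
        kSession.dispose();
    }
}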
6.1. Create Your First Rule
6.1.1. Create and Execute Your First Rule Using Plain Java
Procedure 6.1. Create and Execute your First Rule using plain Java
Create your fact model
Create a POJO on which your rule runs. For example, create a Person.java file in a directory called my-project. The Person class contains the getter and setter methods to set and retrieve the values of the first name, last name, hourly rate, and wage of a person:

public class Person {
    private String firstName;
    private String lastName;
    private Integer hourlyRate;
    private Integer wage;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public Integer getHourlyRate() { return hourlyRate; }
    public void setHourlyRate(Integer hourlyRate) { this.hourlyRate = hourlyRate; }

    public Integer getWage() { return wage; }
    public void setWage(Integer wage) { this.wage = wage; }
}

Create your rule
Create your rule file in .drl format under the my-project directory. Here is a simple rule file called Person.drl, which does a calculation on the wage and hourly rate values and displays a message based on the result:

dialect "java"

rule "Wage"
when
    Person( hourlyRate * wage > 100 )
    Person( name : firstName, surname : lastName )
then
    System.out.println( "Hello " + name + " " + surname + "!" );
    System.out.println( "You are rich!" );
end

Create a main class
Create your main class (say, DroolsTest.java) and save it in the same my-project directory as your POJO. This file loads the knowledge base and fires your rules. In the DroolsTest.java file:
- Add the following import statements to import the KIE services, container, and session:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

- Load the knowledge base and fire your rule from the main() method:

public class DroolsTest {
    public static final void main(String[] args) {
        try {
            // load up the knowledge base
            KieServices ks = KieServices.Factory.get();
            KieContainer kContainer = ks.getKieClasspathContainer();
            KieSession kSession = kContainer.newKieSession();

            // go !
            Person p = new Person();
            p.setWage(12);
            p.setFirstName("Tom");
            p.setLastName("Summers");
            p.setHourlyRate(10);

            kSession.insert(p);
            kSession.fireAllRules();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

The main() method passes the model, which contains the first name, last name, wage, and hourly rate, to the rule.
Download the BRMS Engine jar files
Download the BRMS engine JAR files and save them under my-project/BRMS-engine-jars/. These files are available from the Red Hat Customer Portal under the generic deployable version.

Create the kmodule.xml metadata file

Create a file called kmodule.xml under my-project/META-INF to create the default session. At a minimum, this file contains the following:

<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
</kmodule>
Build your example
Navigate to the my-project directory and execute the following command from the command line:

javac -classpath "./BRMS-engine-jars/*:." DroolsTest.java

This compiles and builds your Java files.

Run your example

If there were no compilation errors, you can now run DroolsTest to execute your rule:

java -classpath "./BRMS-engine-jars/*:." DroolsTest

The expected output is:

Hello Tom Summers!
You are rich!
6.1.2. Create and Execute Your First Rule Using Maven
Procedure 6.2. Create and Execute your First Rule using Maven
Create a basic Maven archetype
Navigate to a directory of your choice and execute the following command:

mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

This creates a directory called my-app with the following structure:

my-app
|-- pom.xml
`-- src
    |-- main
    |   `-- java
    |       `-- com
    |           `-- mycompany
    |               `-- app
    |                   `-- App.java
    `-- test
        `-- java
            `-- com
                `-- mycompany
                    `-- app
                        `-- AppTest.java

The my-app directory comprises:
- A src/main directory for storing your application's sources.
- A src/test directory for storing your test sources.
- A pom.xml file containing the Project Object Model (POM) for your project. At this stage, the pom.xml file contains the following:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>my-app</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
Create your Fact Model
Once you are done with the archetype, create a class based on which your rule runs. Create the POJO calledPerson.javafile undermy-app/src/main/java/com/mycompany/appfolder. This class contains the getter and setter methods to retrieve and set values of first name, last name, hourly rate, and wage of a person.package com.mycompany.app; public class Person { private String firstName; private String lastName; private Integer hourlyRate; private Integer wage; public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public Integer getHourlyRate() { return hourlyRate; } public void setHourlyRate(Integer hourlyRate) { this.hourlyRate = hourlyRate; } public Integer getWage(){ return wage; } public void setWage(Integer wage){ this.wage = wage; } }Create your rule
Create your rule file in.drlformat undermy-app/src/main/resources/rules.Here is the simple rule file calledPerson.drl, which imports thePersonclass:package com.mycompany.app; import com.mycompany.app.Person; dialect "java" rule "Wage" when Person(hourlyRate*wage > 100) Person(name : firstName, surname : lastName) then System.out.println( "Hello " + name + " " + surname + "!" ); System.out.println( "You are rich!" ); endAs before, this rule does a simple calculation on the wage and hourly rate values and displays a message based on the result.Create the
kmodule.xmlmetadata fileCreate an empty file calledkmodule.xmlundermy-app/src/main/resources/META-INFto create the default session. This file contains the following:<?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule"> </kmodule>
Set project dependencies in the
pom.xmlconfiguration fileAs Maven manages the classpath through this configuration file, you must declare in it the libraries your application requires. Edit themy-app/pom.xmlfile to set the JBoss BRMS dependencies and setup the GAV values for your application, as shown below:<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mycompany.app</groupId> <artifactId>my-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>jboss-ga-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> </repository> </repositories> <dependencies> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>LATEST</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>LATEST</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> </dependencies> </project>Test it!
After you add the dependencies in thepom.xmlfile, use thetestAppmethod of themy-app/src/test/java/com/mycompany/app/AppTest.java(which is created by default by Maven) to instantiate and test the rule.In theAppTest.javafile:- Add the following import statements to import the KIE services, container, and session:
import org.kie.api.KieServices; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession;
- Load the knowledge base and fire your rule from the
testApp()method:public void testApp() { // load up the knowledge base KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession(); // set up our Person fact model Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // insert him into the session kSession.insert(p); // and fire all rules on him kSession.fireAllRules(); // we can assert here, but the rule itself should output something since the person's wage is more than our baseline rule }ThetestApp()method passes the model to the rule, which contains the first name, last name, wage, and hourly rate.
Build your example
Navigate to the my-app directory and execute the following command from the command line:

mvn clean install

When you run this command for the first time, it may take a while as Maven downloads all the artifacts required for this project, such as the JBoss BRMS JAR files. The expected output is:

Hello Tom Summers!
You are rich!
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.194 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] ...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.393 s
...
[INFO] ------------------------------------------------------------------------
6.1.3. Create and Execute Your First Rule Using JBoss Developer Studio
Procedure 6.3. Create and Execute your First Rule using JBoss Developer Studio
Create a BRMS Project.
- Start JBoss Developer Studio and navigate to → → .This opens a New Project dialog box.
- In the New Project dialog box, select → and click .
- Type a name for your project and click . The New Project dialog box provides you the choice to add some default artifacts to your project, such as sample rules, decision tables, and Java classes for them. Select the first two check boxes and click .
- Select the configured BRMS runtime in the Drools Runtime dialog box. If you have not already configured your BRMS runtime, click Configure Workspace Settings... link and configure the BRMS runtime jars.
- Select Drools 6.0.x for Generate code compatible with: field and provide values for groupId, artifactId, and version. These values form your project's fully qualified artifact name. Let us provide the following values:
- groupId: com.mycompany.app
- artifactId: my-app
- version: 1.0.0
- Click Finish.This sets up a basic project structure, classpath and sample rules for you to get started with.
My-Project `-- src/main/java |-- com.sample | `-- DroolsTest.java | `-- src/main/rules | -- Sample.drl | `-- JRE System Library | `-- Drools Library | `-- src | `-- pom.xmlThis newly created project called My-Project comprises the following:- A rule file called
Sample.drlundersrc/main/rulesdirectory. - An example java file called
DroolsTest.javaundersrc/main/javain the com.sample package. You can use theDroolsTestclass to execute your rules in the BRMS engine. - The Drools Library directory. This acts as a custom classpath container that contains all the other jar files necessary for execution.
Create your fact model
The sample DroolsTest.java file contains a sample POJO called Message with getter and setter methods. You can edit this class or create another similar POJO. Let us remove this POJO and create a new POJO called Person, which sets and retrieves the values of the first name, last name, hourly rate, and wage of a person:

public static class Person {
    private String firstName;
    private String lastName;
    private Integer hourlyRate;
    private Integer wage;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public Integer getHourlyRate() { return hourlyRate; }
    public void setHourlyRate(Integer hourlyRate) { this.hourlyRate = hourlyRate; }

    public Integer getWage() { return wage; }
    public void setWage(Integer wage) { this.wage = wage; }
}

Update the main method
The sample DroolsTest.java file contains a main() method that loads up the knowledge base and fires the rules. Update this main() method to pass the Person object to the rule:

public static final void main(String[] args) {
    try {
        // load up the knowledge base
        KieServices ks = KieServices.Factory.get();
        KieContainer kContainer = ks.getKieClasspathContainer();
        KieSession kSession = kContainer.newKieSession("ksession-rules");

        // go !
        Person p = new Person();
        p.setWage(12);
        p.setFirstName("Tom");
        p.setLastName("Summers");
        p.setHourlyRate(10);

        kSession.insert(p);
        kSession.fireAllRules();
    } catch (Throwable t) {
        t.printStackTrace();
    }
}

Note
To load the knowledge base, you first get the KieServices instance and the classpath-based KieContainer. Then you build your KieSession with the KieContainer. Here, we are passing the session name ksession-rules, which matches the one defined in the kmodule.xml file.

Create your rule
The sample rule file Sample.drl contains a basic skeleton of a rule. You can edit this file or create a new one to write your own rule. In your rule file:
- Include the package name:
package com.sample
- Import facts into the rule:
import com.sample.DroolsTest.Person;
- Create the rule in "when", "then" format.
dialect "java" rule "Wage" when Person(hourlyRate*wage > 100) Person(name : firstName, surname : lastName) then System.out.println( "Hello" + " " + name + " " + surname + "!" ); System.out.println( "You are rich!" ); end
Test your rule
Right-click the DroolsTest.java file and select → .
The expected output is:

Hello Tom Summers!
You are rich!
6.1.4. Create and Execute Your First Rule Using Business Central
Ensure that you have successfully installed JBoss BPM Suite before you run this simple rule example using the Business Central interface.
Procedure 6.4. Create and Execute your First Rule using Business Central
Login to Business Central
- On the command line, move into the $SERVER_HOME/bin/ directory and execute the following command.
For a Unix environment:
./standalone.sh
For a Windows environment:
./standalone.bat
- Once your server is up and running, open the following URL in a web browser:
http://localhost:8080/business-central
This opens the Business Central login page.
- Log in to Business Central with the user credentials created during installation.
Create a repository structure and create a project under it
- On the main menu of Business Central, go to → .
- Click → , then click .
- Click → , then click .
- In the displayed Add New Organizational Unit dialog box, define the unit properties. For example:
- Name: EmployeeWage
- Owner: Employee
Click . - On the perspective menu, click → .
- In the displayed Create Repository dialog box, define the repository properties. For example:
- Repository Name: EmployeeRepo
- Organizational Unit: EmployeeWage
Click . - Go to → .
- In the Project Explorer, under the organizational unit drop-down box, select EmployeeWage, and in the repository drop-down box select EmployeeRepo.
- On the perspective menu, go to → .
- In the displayed Create new Project dialog box, provide a name (for example, MyProject) for your project properties and click .
- In the New Project dialog box, define the maven properties of the Project. For example:
- Group ID: org.bpms
- Artifact ID: MyProject
- Version ID: 1.0.0
Click .
Create a fact model
- On the perspective menu, go to → .
- In the displayed Create new Data Object dialog box, provide the values for object name and package. For example:
- Data Object: Person
- Package: org.bpms.myproject
Click . - In the displayed Create new field window of the newly created
Persondata object. Click to open New field dialogue. Add a variable name in the Id field, select data type for the variable in the Type field, and click until you have defined all the necessary variables. For example:- Id: firstNameType: String
- Id: lastNameType: String
- Id: hourlyRateType: Integer
- Id: wageType: Integer
Click for the last variable and then .
Create a rule
- On the perspective menu, click → .
- In the Create new dialog box, provide the name and package name of your rule file. For example:
- DRL file name: MyRule
- Package: org.bpms.myproject
Click . - In the displayed DRL editor with the
MyRule.drlfile, write your rule as shown below:package org.bpms.myproject; rule "MyRule" ruleflow-group "MyProjectGroup" when Person(hourlyRate*wage > 100) Person(name : firstName, surname : lastName) then System.out.println( "Hello" + " " + name + " " + surname + "!" ); System.out.println( "You are rich!" ); end - Click .
Create a Business Process and add a Business Rule Task
- On the main menu of Business Central, go to → .
- In the Create new Business Process dialog box, provide values for Business Process name and package. For example:
- Business Process: MyProcess
- Package: org.bpms.myproject
ClickThe Process Designer with the canvas of the created Process definition opens. - Click on the white canvas. Expand the Properties palette on the right-hand side of the canvas. When you click
in the Variable Definitions field, Editor for Variable Definitions opens:
Click .- Click Add Variable
- In the Name column, enter person_proc
- In the Defined Types column, select Person [org.bpms.myproject]
- Expand the Object Library palette with Process Elements on the left-hand side of the canvas.
- From the Object Library, navigate to Tasks and drag a Business Rule Task to the canvas. Next, navigate to End Events and drag the None end event to the canvas.
- Integrate the Business Rule task and the Start and End events into the process workflow. Click the Start event. Then, click and drag it to the Business Rule Task. This connects the two objects. Do the same to connect the Business Rule Task to the End event.
- Select the Business Rule Task and set the following properties in the Properties panel under Core Properties:
- Name: Rule_Task
- Ruleflow Group: when you click
in the Ruleflow Group field, an editor for Ruleflow Groups opens:
- In the Ruleflow Group Name column, select
MyProjectGroup - In the Rules column, select
MyRule.drl - Click
- Assignments: When you click
in the Assignments field, the Rule_Task Data I/O opens. Click under Data Inputs and Assignments and provide the data input elements. For example:
- Name: person_Task
- Data Type: Person [org.bpms.myproject]
- Source: person_proc
Click .
- Click Generate all Forms (
).
- Save the Process.
Build and deploy your rule
- Open the Project Editor and click . A green notification appears in the upper part of the screen informing you that the project has been built and deployed successfully to the Execution Server.
- Go to → . You can see your newly built process listed in the Process Definitions window.
- Click
button under Actions to start your Process.
A MyProcess dialog box opens. - In the MyProcess dialog box, provide the following values of the variables defined in your fact model and click :
- firstName: Tom
- hourlyRate: 12
- lastName: Summers
- wage: 10
As these values satisfy the rule condition, the expected output at the console is:

16:19:58,479 INFO  [org.jbpm.kie.services.impl.store.DeploymentSynchronizer] (http-/127.0.0.1:8080-1) Deployment unit org.bpms:MyProject:1.0 stored successfully
16:26:56,119 INFO  [stdout] (http-/127.0.0.1:8080-5) Hello Tom Summers!
16:26:56,119 INFO  [stdout] (http-/127.0.0.1:8080-5) You are rich!
6.2. Execution of Rules
6.2.1. Agenda
As facts are asserted and modified in the WorkingMemory, rules may become fully matched and eligible for execution. A single Working Memory Action can result in multiple eligible rules. When a rule is fully matched, an Activation is created, referencing the rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Activations using a Conflict Resolution strategy.
6.2.2. Agenda Processing
- Working Memory Actions. This is where most of the work takes place, either in the Consequence (the RHS itself) or the main Java application process. Once the Consequence has finished, or the main Java application process calls fireAllRules(), the engine switches to the Agenda Evaluation phase.
- Agenda Evaluation. This attempts to select a rule to fire. If no rule is found, it exits; otherwise, it fires the found rule, switching the phase back to Working Memory Actions.
6.2.3. Conflict Resolution
6.2.4. AgendaGroup
6.2.5. setFocus()
Each time setFocus() is called, it pushes the specified Agenda Group onto a stack. When the focus group is empty, it is popped from the stack and the focus group that is now on top evaluates. An Agenda Group can appear in multiple locations on the stack. The default Agenda Group is "MAIN"; all rules which do not specify an Agenda Group are placed in this group. It is also always the first group on the stack, given focus initially by default.
6.2.6. setFocus() Example
ksession.getAgenda().getAgendaGroup( "Group A" ).setFocus();
6.2.7. ActivationGroup
The clear() method can be called at any time, which cancels all of the activations before one has had a chance to fire.
6.2.8. ActivationGroup Example
ksession.getAgenda().getActivationGroup( "Group B" ).clear();
6.3. Inference
6.3.1. The Inference Engine
6.3.2. Inference Example
rule "Infer Adult" when $p : Person( age >= 18 ) then insert( new IsAdult( $p ) ) end
$p : Person() IsAdult( person == $p )
6.4. Truth Maintenance

Figure 6.1. Stated Assertion

Figure 6.2. Logical Assertion
Important
For Truth Maintenance to work, your fact classes must override the default equals and hashCode methods from java.lang.Object as per the Java standard. Two objects are equal if and only if their equals methods return true for each other and if their hashCode methods return the same values. For more information, refer to the Java API documentation.
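A minimal sketch of a fact class honoring this contract (the class and its fields are illustrative only):

public class Person {
    private String firstName;
    private String lastName;

    // equals and hashCode are based on the same fields, so two Person
    // facts with the same names are treated as the same logical fact
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person other = (Person) o;
        return java.util.Objects.equals(firstName, other.firstName)
            && java.util.Objects.equals(lastName, other.lastName);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(firstName, lastName);
    }
}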
6.4.1. Example Illustrating Truth Maintenance
rule "Issue Child Bus Pass" when $p : Person( age < 16 ) then insert(new ChildBusPass( $p ) ); end rule "Issue Adult Bus Pass" when $p : Person( age >= 16 ) then insert(new AdultBusPass( $p ) ); end
rule "Infer Child" when $p : Person( age < 16 ) then insertLogical( new IsChild( $p ) ) end rule "Infer Adult" when $p : Person( age >= 16 ) then insertLogical( new IsAdult( $p ) ) end
While a person is under 16 years old, the rules logically insert an IsChild fact. Once the person is 16 years or above, the IsChild fact is automatically retracted and the IsAdult fact is inserted.
The same logical insertion can be applied to the ChildBusPass and AdultBusPass facts, as the Truth Maintenance System supports chaining of logical insertions for a cascading set of retracts. Here is how the logical insertion is done:
rule "Issue Child Bus Pass"
when
$p : Person( )
IsChild( person == $p )
then
insertLogical(new ChildBusPass( $p ) );
end
rule "Issue Adult Bus Pass"
when
$p : Person( age >= 16 )
IsAdult( person == $p )
then
insertLogical(new AdultBusPass( $p ) );
end
When the person turns 16, the IsChild fact, as well as the person's ChildBusPass fact, is retracted. To this set of conditions, you can relate another rule, which states that a person must return the child pass after turning 16 years old. So when the Truth Maintenance System automatically retracts the ChildBusPass object, this rule triggers and sends a request to the person:
rule "Return ChildBusPass Request"
when
$p : Person( )
not( ChildBusPass( person == $p ) )
then
requestChildBusPass( $p );
end
6.5. Using Decision Tables in Spreadsheets
6.5.1. Decision Tables in Spreadsheets
Note
6.5.2. OpenOffice Example

Figure 6.3. OpenOffice Screenshot
Note
6.5.3. Rules and Spreadsheets
- Rules inserted into rows
- As each row is a rule, the same principles apply as with written code. As the rule engine processes the facts, any rules that match may fire.
- Agendas
- It is possible to clear the agenda when a rule fires and simulate a very simple decision table where only the first match effects an action.
- Multiple tables
- You can have multiple tables on one spreadsheet. This way, rules can be grouped where they share common templates, but are still all combined into one rule package.
6.5.4. The RuleTable Keyword
Important
6.5.5. The RuleSet Keyword
6.5.6. Rule Template Example
- Store your data in a database (or any other format)
- Conditionally generate rules based on the values in the data
- Use data for any part of your rules (such as condition operator, class name, and property name)
- Run different templates over the same data

Figure 6.4. Template Data

Figure 6.5. Rule Template
- Line 1: All rule templates start with template header.
- Lines 2-4: Following the header is the list of columns in the order they appear in the data. In this case we are calling the first column age, the second type and the third log.
- Line 5: An empty line signifies the end of the column definitions.
- Lines 6-9: Standard rule header text. This is standard rule DRL and will appear at the top of the generated DRL. Put the package statement and any imports and global and function definitions into this section.
- Line 10: The keyword template signals the start of a rule template. There can be more than one template in a template file, but each template must have a unique name.
- Lines 11-18: The rule template.
- Line 20: The keywords end template signify the end of the template.
Values from the data are interpolated into the template using tokens of the form @{token_name}. The built-in expression @{row.rowNumber} gives a unique number for each row of data and enables you to generate unique rule names. For each row of data, a rule is generated with the values in the data substituted for the tokens in the template. With the example data above, the following rule file is generated:
package org.drools.examples.templates;
global java.util.List list;
rule "Cheese fans_1"
when
Person(age == 42)
Cheese(type == "stilton")
then
list.add("Old man stilton");
end
rule "Cheese fans_2"
when
Person(age == 21)
Cheese(type == "cheddar")
then
list.add("Young man cheddar");
end

The following example shows how a spreadsheet-based decision table can be loaded using the KnowledgeBuilder API:

DecisionTableConfiguration dtableconfiguration =
KnowledgeBuilderFactory.newDecisionTableConfiguration();
dtableconfiguration.setInputType( DecisionTableInputType.XLS );
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newClassPathResource( getSpreadsheetName(),
getClass() ),
ResourceType.DTABLE,
dtableconfiguration );

6.5.7. Data-Defining Cells
Two types of rectangular areas define the data that is used for generating a DRL file. One, marked by a cell labelled RuleSet, defines all DRL items except rules. The other one may occur repeatedly and is to the right and below a cell whose contents begin with RuleTable. These areas represent the actual decision tables, each area resulting in a set of rules of similar structure.
The Rule Set area consists of pairs of cells in the rows below the RuleSet cell, the first cell of each pair containing a keyword designating the kind of value contained in the other one that follows in the same row.
6.5.8. Rule Table Columns
The four rows below the row containing the cell marked with RuleTable are earmarked as the header area, mostly used for the definition of code to construct the rules. Any additional row below these four header rows spawns another rule, with its data providing for variations in the code defined in the Rule Table header.
Note
6.5.9. Rule Set Entries
RuleSet is upheld as the one containing the keyword.
6.5.10. Entries in the Rule Set Area
Table 6.1. Entries in the Rule Set area
| Keyword | Value | Usage |
|---|---|---|
| RuleSet | The package name for the generated DRL file. Optional, the default is rule_table. | Must be the first entry. |
| Sequential | "true" or "false". If "true", then salience is used to ensure that rules fire from the top down. | Optional, at most once. If omitted, no firing order is imposed. |
| EscapeQuotes | "true" or "false". If "true", then quotation marks are escaped so that they appear literally in the DRL. | Optional, at most once. If omitted, quotation marks are escaped. |
| Import | A comma-separated list of Java classes to import. | Optional, may be used repeatedly. |
| Variables | Declarations of DRL globals, i.e., a type followed by a variable name. Multiple global definitions must be separated with a comma. | Optional, may be used repeatedly. |
| Functions | One or more function definitions, according to DRL syntax. | Optional, may be used repeatedly. |
| Queries | One or more query definitions, according to DRL syntax. | Optional, may be used repeatedly. |
| Declare | One or more declarative types, according to DRL syntax. | Optional, may be used repeatedly. |
6.5.11. Rule Attribute Entries in the Rule Set Area
Important
Table 6.2. Rule Attribute Entries in the Rule Set Area
| Keyword | Initial | Value |
|---|---|---|
| PRIORITY | P | An integer defining the "salience" value for the rule. Overridden by the "Sequential" flag. |
| DURATION | D | A long integer value defining the "duration" value for the rule. |
| TIMER | T | A timer definition. See "Timers" section. |
| CALENDARS | E | A calendars definition. See "Calendars" section. |
| NO-LOOP | U | A Boolean value. "true" inhibits looping of rules due to changes made by its consequence. |
| LOCK-ON-ACTIVE | L | A Boolean value. "true" inhibits additional activations of all rules with this flag set within the same ruleflow or agenda group. |
| AUTO-FOCUS | F | A Boolean value. "true" for a rule within an agenda group causes activations of the rule to automatically give the focus to the group. |
| ACTIVATION-GROUP | X | A string identifying an activation (or XOR) group. Only one rule within an activation group will fire, i.e., the first one to fire cancels any existing activations of other rules within the same group. |
| AGENDA-GROUP | G | A string identifying an agenda group, which has to be activated by giving it the "focus", which is one way of controlling the flow between groups of rules. |
| RULEFLOW-GROUP | R | A string identifying a rule-flow group. |
| DATE-EFFECTIVE | V | A string containing a date and time definition. A rule can only activate if the current date and time is after DATE-EFFECTIVE attribute. |
| DATE-EXPIRES | Z | A string containing a date and time definition. A rule cannot activate if the current date and time is after the DATE-EXPIRES attribute. |
6.5.12. The RuleTable Cell
6.5.13. Column Types
6.5.14. Column Headers in the Rule Table
Table 6.3. Column Headers in the Rule Table
| Keyword | Initial | Value | Usage |
|---|---|---|---|
| NAME | N | Provides the name for the rule generated from that row. The default is constructed from the text following the RuleTable tag and the row number. | At most one column |
| DESCRIPTION | I | A text, resulting in a comment within the generated rule. | At most one column |
| CONDITION | C | Code snippet and interpolated values for constructing a constraint within a pattern in a condition. | At least one per rule table |
| ACTION | A | Code snippet and interpolated values for constructing an action for the consequence of the rule. | At least one per rule table |
| METADATA | @ | Code snippet and interpolated values for constructing a metadata entry for the rule. | Optional, any number of columns |
6.5.15. Conditional Elements
- Text in the first cell below CONDITION develops into a pattern for the rule condition, with the snippet in the next line becoming a constraint. If the cell is merged with one or more neighbours, a single pattern with multiple constraints is formed: all constraints are combined into a parenthesized list and appended to the text in this cell. The cell may be left blank, which means that the code snippet in the next row must result in a valid conditional element on its own.To include a pattern without constraints, you can write the pattern in front of the text for another pattern.The pattern may be written with or without an empty pair of parentheses. A "from" clause may be appended to the pattern.If the pattern ends with "eval", code snippets are supposed to produce boolean expressions for inclusion into a pair of parentheses after "eval".
- Text in the second cell below CONDITION is processed in two steps.
- The code snippet in this cell is modified by interpolating values from cells farther down in the column. If you want to create a constraint consisting of a comparison using "==" with the value from the cells below, the field selector alone is sufficient. Any other comparison operator must be specified as the last item within the snippet, and the value from the cells below is appended. For all other constraint forms, you must mark the position for including the contents of a cell with the symbol
$param. Multiple insertions are possible by using the symbols$1,$2, etc., and a comma-separated list of values in the cells below.A text according to the patternforall(delimiter){snippet}is expanded by repeating the snippet once for each of the values of the comma-separated list of values in each of the cells below, inserting the value in place of the symbol$and by joining these expansions by the given delimiter. Note that the forall construct may be surrounded by other text. - If the cell in the preceding row is not empty, the completed code snippet is added to the conditional element from that cell. A pair of parentheses is provided automatically, as well as a separating comma if multiple constraints are added to a pattern in a merged cell.If the cell above is empty, the interpolated result is used as is.
- Text in the third cell below CONDITION is for documentation only. It should be used to indicate the column's purpose to a human reader.
- From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the conditional element or constraint for this rule.
6.5.16. Action Statements
- Text in the first cell below ACTION is optional. If present, it is interpreted as an object reference.
- Text in the second cell below ACTION is processed in two steps.
- The code snippet in this cell is modified by interpolating values from cells farther down in the column. For a singular insertion, mark the position for including the contents of a cell with the symbol $param. Multiple insertions are possible by using the symbols $1, $2, etc., and a comma-separated list of values in the cells below. A method call without interpolation can be achieved by a text without any marker symbols. In this case, use any non-blank entry in a row below to include the statement. The forall construct is available here, too.
- If the first cell is not empty, its text, followed by a period, the text in the second cell, and a terminating semicolon are strung together, resulting in a method call which is added as an action statement for the consequence. If the cell above is empty, the interpolated result is used as is.
- Text in the third cell below ACTION is for documentation only. It should be used to indicate the column's purpose to a human reader.
- From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the action statement for this rule.
Note
Using $1 instead of $param will fail if the replacement text contains a comma.
6.5.17. Metadata Statements
- Text in the first cell below METADATA is ignored.
- Text in the second cell below METADATA is subject to interpolation, as described above, using values from the cells in the rule rows. The metadata marker character @ is prefixed automatically, and should not be included in the text for this cell.
- Text in the third cell below METADATA is for documentation only. It should be used to indicate the column's purpose to a human reader.
- From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the metadata annotation for this rule.
6.5.18. Interpolating Cell Data Example
- If the template is Foo(bar == $param) and the cell is 42, then the result is Foo(bar == 42).
- If the template is Foo(bar < $1, baz == $2) and the cell contains 42,43, the result will be Foo(bar < 42, baz == 43).
- The template forall(&&){bar != $} with a cell containing 42,43 results in bar != 42 && bar != 43.
6.5.19. Tips for Working Within Cells
- Multiple package names within the same cell must be comma-separated.
- Pairs of type and variable names must be comma-separated.
- Functions must be written as they appear in a DRL file. This should appear in the same column as the "RuleSet" keyword. It can be above, between or below all the rule rows.
- You can use Import, Variables, Functions and Queries repeatedly instead of packing several definitions into a single cell.
- Trailing insertion markers can be omitted.
- You can provide the definition of a binding variable.
- Anything can be placed in the object type row. Apart from the definition of a binding variable, it could also be an additional pattern that is to be inserted literally.
- The cell below the ACTION header can be left blank. Using this style, anything can be placed in the consequence, not just a single method call. (The same technique is applicable within a CONDITION column.)
6.5.20. The SpreadsheetCompiler Class
The SpreadsheetCompiler class is the main class of the spreadsheet-based decision table API in the drools-decisiontables module. This class takes spreadsheets in various formats and generates rules in DRL.
SpreadsheetCompiler can be used to generate partial rule files and assemble them into a complete rule package after the fact. This allows the separation of technical and non-technical aspects of the rules if needed.
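As an illustration of this, the following is a minimal sketch of generating DRL from an XLS decision table with SpreadsheetCompiler; the spreadsheet path /rules/Pricing.xls is a hypothetical placeholder:
import java.io.InputStream;

import org.drools.decisiontable.InputType;
import org.drools.decisiontable.SpreadsheetCompiler;

public class DecisionTableToDrl {
    public static void main(String[] args) throws Exception {
        SpreadsheetCompiler compiler = new SpreadsheetCompiler();
        // load the decision table spreadsheet from the classpath
        try (InputStream xls = DecisionTableToDrl.class.getResourceAsStream("/rules/Pricing.xls")) {
            // generate the DRL text so it can be reviewed or assembled into a package
            String drl = compiler.compile(xls, InputType.XLS);
            System.out.println(drl);
        }
    }
}
The generated DRL can then be inspected before it is packaged and deployed like any hand-written rule file.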
6.5.21. Using Spreadsheet-Based Decision Tables
Procedure 6.5. Task
- Generate a sample spreadsheet that you can use as the base.
- If the JBoss BRMS plug-in is being used, use the wizard to generate a spreadsheet from a template.
- Use an XLS-compatible spreadsheet editor to modify the XLS file.
6.5.22. Lists
Spreadsheets can contain lists of values. These can be stored in other worksheets to provide valid lists of values for cells.
6.5.23. Revision Control
6.5.24. Tabular Data Sources
6.6. Logging
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.x</version>
</dependency>
Note
6.6.1. Configuring Logging Level
Add the following entry to the logback.xml file when you are using Logback:
<configuration>
  <logger name="org.drools" level="debug"/>
  ...
  ...
</configuration>
Add the following entry to the log4j.xml file when you are using Log4J:
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<category name="org.drools">
<priority value="debug" />
</category>
...
...
</log4j:configuration>
Chapter 7. Complex Event Processing
7.1. Introduction to Complex Event Processing
- On an algorithmic trading application: Take an action if the security price increases X% above the day's opening price.The price increases are denoted by events on a stock trade application.
- On a monitoring application: Take an action if the temperature in the server room increases X degrees in Y minutes.The sensor readings are denoted by events.
- Both business rules and event processing require seamless integration with the enterprise infrastructure and applications. This is particularly important with regard to life-cycle management, auditing, and security.
- Both business rules and event processing have functional requirements like pattern matching and non-functional requirements like response time limits and query/rule explanations.
Note
- They usually process large numbers of events, but only a small percentage of the events are of interest.
- The events are usually immutable, as they represent a record of change in state.
- The rules and queries run against events and must react to detected event patterns.
- There are usually strong temporal relationships between related events.
- Individual events are not important. The system is concerned with patterns of related events and the relationships between them.
- It is often necessary to perform composition and aggregation of events.
- Support events, with their proper semantics, as first class citizens.
- Allow detection, correlation, aggregation, and composition of events.
- Support processing streams of events.
- Support temporal constraints in order to model the temporal relationships between events.
- Support sliding windows of interesting events.
- Support a session-scoped unified clock.
- Support the required volumes of events for complex event processing use cases.
- Support reactive rules.
- Support adapters for event input into the engine (pipeline).
7.2. Events
- Events are immutable
- An event is a record of change which has occurred at some time in the past, and as such it cannot be changed.
Note
The rules engine does not enforce immutability on the Java objects representing events; this makes event data enrichment possible. The application should be able to populate un-populated event attributes, which can be used to enrich the event with inferred data; however, event attributes that have already been populated should not be changed.
- Events have strong temporal constraints
- Rules involving events usually require the correlation of multiple events that occur at different points in time relative to each other.
- Events have managed life-cycles
- Because events are immutable and have temporal constraints, they are usually only of interest for a specified period of time. This means the engine can automatically manage the life-cycle of events.
- Events can use sliding windows
- It is possible to define and use sliding windows with events since all events have timestamps associated with them. Therefore, sliding windows allow the creation of rules on aggregations of values over a time period.
7.2.1. Event Declaration
To declare a fact type as an event, assign the @role meta-data tag to the fact with the event parameter. The @role meta-data tag can accept two possible values:
- fact: Assigning the fact role declares the type is to be handled as a regular fact. Fact is the default role.
- event: Assigning the event role declares the type is to be handled as an event.
The following example declares that the StockTick fact type will be handled as an event:
Example 7.1. Declaring a Fact Type as an Event
import some.package.StockTick

declare StockTick
    @role( event )
end
If StockTick was a fact type declared in the DRL instead of in a pre-existing class, the code would be as follows:
Example 7.2. Declaring a Fact Type and Assigning it to an Event Role
declare StockTick
    @role( event )
    datetime : java.util.Date
    symbol : String
    price : double
end
7.2.2. Event Meta-Data
- @role
- @timestamp
- @duration
- @expires
Example 7.3. The VoiceCall Fact Class
/**
* A class that represents a voice call in
* a Telecom domain model
*/
public class VoiceCall {
private String originNumber;
private String destinationNumber;
private Date callDateTime;
private long callDuration; // in milliseconds
// constructors, getters, and setters
}
- @role
- The @role meta-data tag indicates whether a given fact type is either a regular fact or an event. It accepts either fact or event as a parameter. The default is fact.
@role( <fact|event> )
Example 7.4. Declaring VoiceCall as an Event Type
declare VoiceCall
    @role( event )
end
- @timestamp
- A timestamp is automatically assigned to every event. By default, the time is provided by the session clock and assigned to the event at insertion into the working memory. Events can have their own timestamp attribute, which can be included by telling the engine to use the attribute's timestamp instead of the session clock. To use the attribute's timestamp, use the attribute name as the parameter for the @timestamp tag.
@timestamp( <attributeName> )
Example 7.5. Declaring the VoiceCall Timestamp Attribute
declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
end
- @duration
- JBoss BRMS Complex Event Processing supports both point-in-time and interval-based events. A point-in-time event is represented as an interval-based event with a duration of zero time units. By default, every event has a duration of zero. To assign a different duration to an event, use the attribute name as the parameter for the @duration tag.
@duration( <attributeName> )
Example 7.6. Declaring the VoiceCall Duration Attribute
declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
    @duration( callDuration )
end
- @expires
- Events may be set to expire automatically after a specific duration in the working memory. By default, this happens when the event can no longer match and activate any of the current rules. You can also explicitly define when an event should expire. The @expires tag is only used when the engine is running in stream mode.
@expires( <timeOffset> )
The value of timeOffset is a temporal interval that sets the relative duration of the event:
[#d][#h][#m][#s][#[ms]]
All parameters are optional and the # parameter should be replaced by the appropriate value. To declare that the VoiceCall facts should expire one hour and thirty-five minutes after insertion into the working memory, use the following:
Example 7.7. Declaring the Expiration Offset for the VoiceCall Events
declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
    @duration( callDuration )
    @expires( 1h35m )
end
7.3. Clock Implementation in Complex Event Processing
7.3.1. Session Clock
- Rules testing: Testing always requires a controlled environment, and when the tests include rules with temporal constraints, it is necessary to control the input rules, facts, and the flow of time.
- Regular execution: A rules engine that reacts to events in real time needs a real-time clock.
- Special environments: Specific environments may have specific time control requirements. For instance, clustered environments may require clock synchronization or JEE environments may require you to use an application server-provided clock.
- Rules replay or simulation: In order to replay or simulate scenarios, it is necessary that the application controls the flow of time.
7.3.2. Available Clock Implementations
- Real-Time Clock
- The real-time clock is the default implementation based on the system clock. The real-time clock uses the system clock to determine the current time for timestamps. To explicitly configure the engine to use the real-time clock, set the session configuration parameter to realtime:
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;

KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption( ClockTypeOption.get("realtime") );
- Pseudo-Clock
- The pseudo-clock is useful for testing temporal rules since it can be controlled by the application. To explicitly configure the engine to use the pseudo-clock, set the session configuration parameter to pseudo:
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;

KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption( ClockTypeOption.get("pseudo") );
This example shows how to control the pseudo-clock:
import java.util.concurrent.TimeUnit;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;
import org.kie.api.runtime.rule.FactHandle;
import org.kie.api.time.SessionPseudoClock;

KieSessionConfiguration conf = KieServices.Factory.get().newKieSessionConfiguration();
conf.setOption( ClockTypeOption.get( "pseudo" ) );
KieSession session = kbase.newKieSession( conf, null );

SessionPseudoClock clock = session.getSessionClock();

// then, while inserting facts, advance the clock as necessary:
FactHandle handle1 = session.insert( tick1 );
clock.advanceTime( 10, TimeUnit.SECONDS );
FactHandle handle2 = session.insert( tick2 );
clock.advanceTime( 30, TimeUnit.SECONDS );
FactHandle handle3 = session.insert( tick3 );
7.4. Event Processing Modes
7.4.1. Cloud Mode
- No need for clock synchronization since there is no notion of time.
- No requirement on ordering events since the engine looks at the events as an unordered cloud against which the engine tries to match rules.
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.conf.EventProcessingOption;

KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration();
config.setOption( EventProcessingOption.CLOUD );
drools.eventProcessingMode = cloud
7.4.2. Stream Mode
- Events in each stream must be ordered chronologically.
- A session clock must be present to synchronize event streams.
Note
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.conf.EventProcessingOption;

KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration();
config.setOption( EventProcessingOption.STREAM );
drools.eventProcessingMode = stream
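Assuming the drools.eventProcessingMode property is read when the knowledge base configuration is created, the same selection can be made programmatically through a system property, for example:
// A minimal sketch: select the event processing mode through the
// drools.eventProcessingMode system property before the KieBase is built.
System.setProperty( "drools.eventProcessingMode", "stream" );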
7.5. Event Streams
- Events in the stream are ordered by timestamp. The timestamps may have different semantics for different streams, but they are always ordered internally.
- There is usually a high volume of events in the stream.
- Atomic events contained in the streams are rarely useful by themselves.
- Streams are either homogeneous (they contain a single type of event) or heterogeneous (they contain events of different types).
7.5.1. Declaring and Using Entry Points
Example 7.8. Example ATM Rule
rule "authorize withdraw"
when
WithdrawRequest( $ai : accountId, $am : amount ) from entry-point "ATM Stream"
CheckingAccount( accountId == $ai, balance > $am )
then
// authorize withdraw
end
This rule matches WithdrawRequest events coming from the "ATM Stream" and joins the event from the stream (WithdrawRequest) with a fact from the main working memory (CheckingAccount).
Example 7.9. Using Multiple Streams
rule "apply fee on withdraws on branches"
when
WithdrawRequest( $ai : accountId, processed == true ) from entry-point "Branch Stream"
CheckingAccount( accountId == $ai )
then
// apply a $2 fee on the account
end
This rule matches events of the same type (WithdrawRequest) as the example ATM rule but from a different stream. Events inserted into the "ATM Stream" will never match the pattern on the second rule, which is tied to the "Branch Stream;" accordingly, events inserted into the "Branch Stream" will never match the pattern on the example ATM rule, which is tied to the "ATM Stream".
Example 7.10. Inserting Facts into an Entry Point
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.EntryPoint;

// create your rulebase and your session as usual
KieSession session = ...

// get a reference to the entry point
EntryPoint atmStream = session.getEntryPoint( "ATM Stream" );

// and start inserting your facts into the entry point
atmStream.insert( aWithdrawRequest );
7.5.2. Negative Pattern in Stream Mode
Example 7.11. A Rule with a Negative Pattern
rule "Sound the alarm"
when
$f : FireDetected( )
not( SprinklerActivated( ) )
then
// sound the alarm
end
Example 7.12. A Rule with a Negative Pattern, Temporal Constraints, and an Explicit Duration Parameter.
rule "Sound the alarm"
duration( 10s )
when
$f : FireDetected( )
not( SprinklerActivated( this after[0s,10s] $f ) )
then
// sound the alarm
end
Example 7.13. A Rule with a Negative Pattern with Temporal Constraints
rule "Sound the alarm"
when
$f : FireDetected( )
not( SprinklerActivated( this after[0s,10s] $f ) )
then
// sound the alarm
end
Example 7.14. Excluding Bound Events in Negative Patterns
rule "Sound the alarm"
when
$h: Heartbeat( ) from entry-point "MonitoringStream"
not( Heartbeat( this != $h, this after[0s,10s] $h ) from entry-point "MonitoringStream" )
then
// Sound the alarm
end
7.6. Temporal Operations
7.6.1. Temporal Reasoning
Note
7.6.2. Temporal Operations
- After
- Before
- Coincides
- During
- Finishes
- Finishes By
- Includes
- Meets
- Met By
- Overlaps
- Overlapped By
- Starts
- Started By
7.6.3. After
The after operator correlates two events and matches when the temporal distance (the time between the two events) from the current event to the event being correlated falls into the distance range declared for the operator.
$eventA : EventA( this after[ 3m30s, 4m ] $eventB )
The pattern matches if the temporal distance between the time when $eventB finished and the time when $eventA started is between the lower limit of three minutes and thirty seconds and the upper limit of four minutes.
3m30s <= $eventA.startTimestamp - $eventB.endTimeStamp <= 4m
The after operator accepts one or two optional parameters:
- If two values are defined, the interval starts on the first value (3 minutes and 30 seconds in the example) and ends on the second value (4 minutes in the example).
- If only one value is defined, the interval starts on the provided value and runs indefinitely with no end time.
- If no value is defined, the interval starts at one millisecond and runs indefinitely with no end time.
The after operator also accepts negative temporal distances.
$eventA : EventA( this after[ -3m30s, -2m ] $eventB )
$eventA : EventA( this after[ -3m30s, -2m ] $eventB )
$eventA : EventA( this after[ -2m, -3m30s ] $eventB )
7.6.4. Before
The before operator correlates two events and matches when the temporal distance (time between the two events) from the event being correlated to the current event falls within the distance range declared for the operator.
$eventA : EventA( this before[ 3m30s, 4m ] $eventB )
The pattern matches if the temporal distance between the time when $eventA finished and the time when $eventB started is between the lower limit of three minutes and thirty seconds and the upper limit of four minutes.
3m30s <= $eventB.startTimestamp - $eventA.endTimeStamp <= 4m
The before operator accepts one or two optional parameters:
- If two values are defined, the interval starts on the first value (3 minutes and 30 seconds in the example) and ends on the second value (4 minutes in the example).
- If only one value is defined, the interval starts on the provided value and runs indefinitely with no end time.
- If no value is defined, the interval starts at one millisecond and runs indefinitely with no end time.
The before operator also accepts negative temporal distances.
$eventA : EventA( this before[ -3m30s, -2m ] $eventB )
$eventA : EventA( this before[ -3m30s, -2m ] $eventB )
$eventA : EventA( this before[ -2m, -3m30s ] $eventB )
7.6.5. Coincides
The coincides operator correlates two events and matches when both events happen at the same time.
$eventA : EventA( this coincides $eventB )
The pattern matches if the start timestamps of both $eventA and $eventB are identical and the end timestamps of both $eventA and $eventB are also identical.
The coincides operator accepts optional thresholds for the distance between the events' start times and the events' end times, so the events do not have to start at exactly the same time or end at exactly the same time, but they need to be within the provided thresholds.
The following rules apply to the parameters of the coincides operator:
- If only one parameter is given, it is used to set the threshold for both the start and end times of both events.
- If two parameters are given, the first is used as a threshold for the start time and the second one is used as a threshold for the end time.
$eventA : EventA( this coincides[15s, 10s] $eventB )
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 15s && abs( $eventA.endTimestamp - $eventB.endTimestamp ) <= 10s
Warning
The coincides operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.6.6. During
The during operator correlates two events and matches when the current event happens during the event being correlated.
$eventA : EventA( this during $eventB )
The pattern matches if $eventA starts after $eventB starts and ends before $eventB ends.
$eventB.startTimestamp < $eventA.startTimestamp <= $eventA.endTimestamp < $eventB.endTimestamp
The during operator accepts one, two, or four optional parameters.
The following rules apply to the parameters of the during operator:
- If one value is defined, this value will represent the maximum distance between the start times of the two events and the maximum distance between the end times of the two events.
- If two values are defined, these values represent a threshold that the current event's start time and end time must occur between in relation to the correlated event's start and end times.If the values 5s and 10s are provided, the current event must start between 5 and 10 seconds after the correlated event, and similarly the current event must end between 5 and 10 seconds before the correlated event.
- If four values are defined, the first and second values will be used as the minimum and maximum distances between the starting times of the events, and the third and fourth values will be used as the minimum and maximum distances between the end times of the two events.
7.6.7. Finishes
The finishes operator correlates two events and matches when the current event's start timestamp post-dates the correlated event's start timestamp and both events end simultaneously.
$eventA : EventA( this finishes $eventB )
The pattern matches if $eventA starts after $eventB starts and ends at the same time as $eventB ends.
$eventB.startTimestamp < $eventA.startTimestamp && $eventA.endTimestamp == $eventB.endTimestamp
The finishes operator accepts one optional parameter. If defined, the optional parameter sets the maximum time allowed between the end times of the two events.
$eventA : EventA( this finishes[ 5s ] $eventB )
$eventB.startTimestamp < $eventA.startTimestamp && abs( $eventA.endTimestamp - $eventB.endTimestamp ) <= 5s
Warning
The finishes operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.6.8. Finishes By
The finishedby operator correlates two events and matches when the current event's start time predates the correlated event's start time but both events end simultaneously. finishedby is the symmetrical opposite of the finishes operator.
$eventA : EventA( this finishedby $eventB )
The pattern matches if $eventA starts before $eventB starts and ends at the same time as $eventB ends.
$eventA.startTimestamp < $eventB.startTimestamp && $eventA.endTimestamp == $eventB.endTimestamp
The finishedby operator accepts one optional parameter. If defined, the optional parameter sets the maximum time allowed between the end times of the two events.
$eventA : EventA( this finishedby[ 5s ] $eventB )
$eventA.startTimestamp < $eventB.startTimestamp && abs( $eventA.endTimestamp - $eventB.endTimestamp ) <= 5s
Warning
The finishedby operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.6.9. Includes
The includes operator examines two events and matches when the event being correlated happens during the current event. It is the symmetrical opposite of the during operator.
$eventA : EventA( this includes $eventB )
The pattern matches if $eventB starts after $eventA starts and ends before $eventA ends.
$eventA.startTimestamp < $eventB.startTimestamp <= $eventB.endTimestamp < $eventA.endTimestamp
The includes operator accepts one, two, or four optional parameters:
- If one value is defined, this value will represent the maximum distance between the start times of the two events and the maximum distance between the end times of the two events.
- If two values are defined, these values represent a threshold that the current event's start time and end time must occur between in relation to the correlated event's start and end times.If the values 5s and 10s are provided, the current event must start between 5 and 10 seconds after the correlated event, and similarly the current event must end between 5 and 10 seconds before the correlated event.
- If four values are defined, the first and second values will be used as the minimum and maximum distances between the starting times of the events, and the third and fourth values will be used as the minimum and maximum distances between the end times of the two events.
7.6.10. Meets
The meets operator correlates two events and matches when the current event ends at the same time as the correlated event starts.
$eventA : EventA( this meets $eventB )
The pattern matches if $eventA ends at the same time as $eventB starts.
abs( $eventB.startTimestamp - $eventA.endTimestamp ) == 0
The meets operator accepts one optional parameter. If defined, it determines the maximum time allowed between the end time of the current event and the start time of the correlated event.
$eventA : EventA( this meets[ 5s ] $eventB )
abs( $eventB.startTimestamp - $eventA.endTimestamp) <= 5s
Warning
The meets operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.6.11. Met By
The metby operator correlates two events and matches when the current event starts at the same time as the correlated event ends.
$eventA : EventA( this metby $eventB )
The pattern matches if $eventA starts at the same time as $eventB ends.
abs( $eventA.startTimestamp - $eventB.endTimestamp ) == 0
The metby operator accepts one optional parameter. If defined, it sets the maximum distance between the end time of the correlated event and the start time of the current event.
$eventA : EventA( this metby[ 5s ] $eventB )
abs( $eventA.startTimestamp - $eventB.endTimestamp) <= 5s
Warning
The metby operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.6.12. Overlaps
The overlaps operator correlates two events and matches when the current event starts before the correlated event starts and ends after the correlated event starts, but it ends before the correlated event ends.
$eventA : EventA( this overlaps $eventB )
$eventA.startTimestamp < $eventB.startTimestamp < $eventA.endTimestamp < $eventB.endTimestamp
The overlaps operator accepts one or two optional parameters:
- If one parameter is defined, it will define the maximum distance between the start time of the correlated event and the end time of the current event.
- If two values are defined, the first value will be the minimum distance, and the second value will be the maximum distance between the start time of the correlated event and the end time of the current event.
7.6.13. Overlapped By
The overlappedby operator correlates two events and matches when the correlated event starts before the current event, and the correlated event ends after the current event starts but before the current event ends.
$eventA : EventA( this overlappedby $eventB )
$eventB.startTimestamp < $eventA.startTimestamp < $eventB.endTimestamp < $eventA.endTimestamp
The overlappedby operator accepts one or two optional parameters:
- If one parameter is defined, it sets the maximum distance between the start time of the correlated event and the end time of the current event.
- If two values are defined, the first value will be the minimum distance, and the second value will be the maximum distance between the start time of the correlated event and the end time of the current event.
7.6.14. Starts
The starts operator correlates two events and matches when they start at the same time, but the current event ends before the correlated event ends.
$eventA : EventA( this starts $eventB )
The pattern matches if $eventA and $eventB start at the same time, and $eventA ends before $eventB ends.
$eventA.startTimestamp == $eventB.startTimestamp && $eventA.endTimestamp < $eventB.endTimestamp
The starts operator accepts one optional parameter. If defined, it determines the maximum distance between the start times of events in order for the operator to still match:
$eventA : EventA( this starts[ 5s ] $eventB )
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 5s && $eventA.endTimestamp < $eventB.endTimestamp
Warning
The starts operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.6.15. Started By
The startedby operator correlates two events. It matches when both events start at the same time and the correlating event ends before the current event.
$eventA : EventA( this startedby $eventB )
The pattern matches if $eventA and $eventB start at the same time, and $eventB ends before $eventA ends.
$eventA.startTimestamp == $eventB.startTimestamp && $eventA.endTimestamp > $eventB.endTimestamp
The startedby operator accepts one optional parameter. If defined, it sets the maximum distance between the start times of the two events in order for the operator to still match:
$eventA : EventA( this startedby[ 5s ] $eventB )
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 5s && $eventA.endTimestamp > $eventB.endTimestamp
Warning
The startedby operator does not accept negative intervals, and the rules engine will throw an exception if an attempt is made to use negative distance intervals.
7.7. Sliding Windows
7.7.1. Sliding Time Windows
StockTick() over window:time( 2m )
JBoss BRMS uses the over keyword to associate windows with patterns.
Example 7.15. Average Value over Time
rule "Sound the alarm in case temperature rises above threshold"
when
TemperatureThreshold( $max : max )
Number( doubleValue > $max ) from accumulate(
SensorReading( $temp : temperature ) over window:time( 10m ),
average( $temp ) )
then
// sound the alarm
end
The engine automatically discards any SensorReading more than ten minutes old and keeps re-calculating the average.
7.7.2. Sliding Length Windows
StockTick( company == "RHT" ) over window:length( 10 )
Example 7.16. Average Value over Length
rule "Sound the alarm in case temperature rises above threshold"
when
TemperatureThreshold( $max : max )
Number( doubleValue > $max ) from accumulate(
SensorReading( $temp : temperature ) over window:length( 100 ),
average( $temp ) )
then
// sound the alarm
end
Note
Note
7.8. Memory Management for Events
- Explicitly
- Event expiration can be explicitly set with the @expires tag.
- Implicitly
- The rules engine can analyze the temporal constraints in rules to determine the window of interest for events.
7.8.1. Explicit Expiration
Explicit expiration is set with a declare statement and the metadata @expires tag.
Example 7.17. Declaring Explicit Expiration
declare StockTick
@expires( 30m )
end
The example above sets an expiration offset of 30 minutes for StockTick events. The engine removes any StockTick events from the session automatically after the defined expiration time if no rules still need the events.
7.8.2. Inferred Expiration
Example 7.18. A Rule with Temporal Constraints
rule "correlate orders"
when
$bo : BuyOrder( $id : id )
$ae : AckOrder( id == $id, this after[0,10s] $bo )
then
// do something
end
When a BuyOrder event occurs, the engine needs to store the event for up to ten seconds to wait for the matching AckOrder event, making the implicit expiration offset for BuyOrder events ten seconds. An AckOrder event can only match an existing BuyOrder event, making its implicit expiration offset zero seconds.
Chapter 8. Working With Rules
8.1. What's in a Rule File
8.1.1. A rule file
8.1.2. The structure of a rule file
Example 8.1. Rules file
package package-name
imports
globals
functions
queries
rules
8.2. How Rules Operate on Facts
8.2.1. Rule files Accessing the Working Memory
- update(object, handle): This method is used to tell the engine that an object has changed and rules may need to be reconsidered.
- update(object): In this method, the KieSession looks up the fact handle, via an identity check, for the passed object. However, if property change listeners are provided to the JavaBeans that are inserted into the engine, it is possible to avoid the need to call the update() method when the object changes.
- insert(new <object>()): This method places a new object into the working memory.
- retract(handle): This method removes an object from working memory. It is mapped to the delete method in a KieSession.
- insertLogical(new <object>()): This method is similar to insert, but the object is automatically retracted from the working memory when there are no more facts to support the truth of the currently firing rule.
- halt(): This method terminates rule execution immediately. This is required for returning control to the point where the current session was put to work with the fireUntilHalt() method.
- getKieRuntime(): The full KIE API is exposed through a predefined variable, kcontext, of type RuleContext. Its method getKieRuntime() delivers an object of type KieRuntime, which in turn provides access to a wealth of methods, many of which are useful for coding rule logic. The call kcontext.getKieRuntime().halt() terminates rule execution immediately.
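To illustrate the relationship between halt() and fireUntilHalt() described above, the following is a minimal Java sketch that runs a session on a worker thread and stops it from the application; the surrounding session setup is assumed:
import org.kie.api.runtime.KieSession;

public class FireUntilHaltExample {
    public static void run(KieSession session) throws InterruptedException {
        // fireUntilHalt() blocks until halt() is called, so run it on a worker thread
        Thread worker = new Thread(session::fireUntilHalt);
        worker.start();

        // ... insert facts from the application thread; matching rules fire as they arrive ...

        // stop rule evaluation; a rule consequence calling halt() or
        // kcontext.getKieRuntime().halt() has the same effect
        session.halt();
        worker.join();
    }
}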
8.3. Using Rule Keywords
8.3.1. Hard Keywords
Hard keywords are reserved; you cannot use them when naming domain objects, properties, methods, or functions used in the rule text. The hard keywords are true, false, and null.
8.3.2. Soft Keywords
8.3.3. List of Soft Keywords

Figure 8.1. Rule Attributes
Table 8.1. Soft Keywords
| Name | Default Value | Type | Description |
|---|---|---|---|
| no-loop | false | Boolean | When a rule's consequence modifies a fact, it may cause the rule to activate again, causing an infinite loop. Setting 'no-loop' to "true" will skip the creation of another activation for the rule with the current set of facts. |
| lock-on-active | false | Boolean | Whenever a 'ruleflow-group' becomes active or an 'agenda-group' receives the focus, any rule within that group that has 'lock-on-active' set to "true" will not be activated any more. Regardless of the origin of the update, the activation of a matching rule is discarded. This is a stronger version of 'no-loop' because the change is not only caused by the rule itself. It is ideal for calculation rules where you have a number of rules that modify a fact, and you do not want any rule re-matching and firing again. Only when the 'ruleflow-group' is no longer active or the 'agenda-group' loses the focus do those rules with 'lock-on-active' set to "true" become eligible again for their activations to be placed onto the agenda. |
| salience | 0 | integer | Each rule has an integer salience attribute which defaults to zero and can be negative or positive. Salience is a form of priority where rules with higher salience values are given higher priority when ordered in the activation queue. BRMS also supports dynamic salience where you can use an expression involving bound variables, for example: rule "Fire in rank order 1,2,.." salience( -$rank ) when Element( $rank : rank,... ) then ... end |
| ruleflow-group | N/A | String | Ruleflow is a BRMS feature that lets you exercise control over the firing of rules. Rules that are assembled by the same 'ruleflow-group' identifier fire only when their group is active. This attribute has been merged with 'agenda-group' and the behaviours are basically the same. |
| agenda-group | MAIN | String | Agenda groups allow the user to partition the agenda, which provides more execution control. Only rules in the agenda group that have acquired the focus are allowed to fire. This attribute has been merged with 'ruleflow-group' and the behaviours are basically the same. |
| auto-focus | false | Boolean | When a rule is activated where the 'auto-focus' value is "true" and the rule's agenda group does not have focus yet, it is automatically given focus, allowing the rule to potentially fire. |
| activation-group | N/A | String | Rules that belong to the same 'activation-group', identified by this attribute's String value, will fire exclusively. More precisely, the first rule in an 'activation-group' to fire will cancel all pending activations of all rules in the group, that is, stop them from firing. |
| dialect | specified by package | String | Java and MVEL are the possible values of the 'dialect' attribute. This attribute specifies the language to be used for any code expressions in the LHS or the RHS code block. While the 'dialect' can be specified at the package level, this attribute allows the package definition to be overridden for a rule. |
| date-effective | N/A | String, date and time definition | A rule can only activate if the current date and time is after the 'date-effective' attribute. For example: rule "Start Exercising" date-effective "4-Sep-2014" when $m : org.drools.compiler.Message() then $m.setFired(true); end |
| date-expires | N/A | String, date and time definition | A rule cannot activate if the current date and time is after the 'date-expires' attribute. For example: rule "Run 4km" date-effective "4-Sep-2014" date-expires "9-Sep-2014" when $m : org.drools.compiler.Message() then $m.setFired(true); end |
| duration | no default | long | If a rule is still "true", the 'duration' attribute will dictate that the rule will fire after the specified duration. |
Note
8.4. Adding Comments to a Rule File
8.4.1. Single Line Comment Example
rule "Testing Comments"
when
// this is a single line comment
eval( true ) // this is a comment in the same line of a pattern
then
// this is a comment inside a semantic code block
end
8.4.2. Multi-Line Comment Example
rule "Test Multi-line Comments"
when
/* this is a multi-line comment
in the left hand side of a rule */
eval( true )
then
/* and this is a multi-line comment
in the right hand side of a rule */
end
8.5. Error Messages in Rules
8.5.1. Error Message Format

Figure 8.2. Error Message Format Example
8.5.2. Error Messages Description
Table 8.2. Error Messages
| Error Message | Description | Example |
|---|---|---|
| [ERR 101] Line 4:4 no viable alternative at input 'exits' in rule one | Indicates when the parser came to a decision point but couldn't identify an alternative. | 1: rule one 2: when 3: exists Foo() 4: exits Bar() 5: then 6: end |
| [ERR 101] Line 3:2 no viable alternative at input 'WHEN' | This message means the parser has encountered the token WHEN (a hard keyword) which is in the wrong place, since the rule name is missing. | 1: package org.drools; 2: rule 3: when 4: Object() 5: then 6: System.out.println("A RHS"); 7: end |
| [ERR 101] Line 0:-1 no viable alternative at input '<eof>' in rule simple_rule in pattern [name] | Indicates an open quote, apostrophe or parentheses. | 1: rule simple_rule 2: when 3: Student( name == "Andy ) 4: then 5: end |
| [ERR 102] Line 0:-1 mismatched input '<eof>' expecting ')' in rule simple_rule in pattern Bar | Indicates that the parser was looking for a particular symbol that it did not find at the current input position. | 1: rule simple_rule 2: when 3: foo3 : Bar( |
| [ERR 102] Line 0:-1 mismatched input '<eof>' expecting ')' in rule simple_rule in pattern [name] | This error is the result of an incomplete rule statement. Usually when you get a 0:-1 position, it means that the parser reached the end of the source. To fix this problem, it is necessary to complete the rule statement. | 1: package org.drools; 2: 3: rule "Avoid NPE on wrong syntax" 4: when 5: not( Cheese( ( type == "stilton", price == 10 ) || ( type == "brie", price == 15 ) ) from $cheeseList ) 6: then 7: System.out.println("OK"); 8: end |
| [ERR 103] Line 7:0 rule 'rule_key' failed predicate: {(validateIdentifierKey(DroolsSoftKeywords.RULE))}? in rule | A validating semantic predicate evaluated to false. Usually these semantic predicates are used to identify soft keywords. | 1: package nesting; 2: dialect "mvel" 3: 4: import org.drools.Person 5: import org.drools.Address 6: 7: fdsfdsfds 8: 9: rule "test something" 10: when 11: p: Person( name=="Michael" ) 12: then 13: p.name = "other"; 14: System.out.println(p.name); 15: end |
| [ERR 104] Line 3:4 trailing semi-colon not allowed in rule simple_rule | This error is associated with the eval clause, where its expression may not be terminated with a semicolon. This problem is simple to fix: just remove the semi-colon. | 1: rule simple_rule 2: when 3: eval(abc();) 4: then 5: end |
| [ERR 105] Line 2:2 required (...)+ loop did not match anything at input 'aa' in template test_error | The recognizer came to a subrule in the grammar that must match an alternative at least once, but the subrule did not match anything. To fix this problem it is necessary to remove the numeric value as it is neither a valid data type which might begin a new template slot nor a possible start for any other rule file construct. | 1: template test_error 2: aa s 11; 3: end |
8.6. Packaging
8.6.1. Import Statements
Import statements work like import statements in Java. You need to specify the fully qualified paths and type names for any objects you want to use in the rules. JBoss BRMS automatically imports classes from the Java package with the same name as the rule package and from the package java.lang.
8.6.2. Using Globals
- Declare the global variable in the rules file and use it in rules. Example:
global java.util.List myGlobalList; rule "Using a global" when eval( true ) then myGlobalList.add( "Hello World" ); end - Set the global value on the working memory. It is best practice to set all global values before asserting any fact to the working memory. Example:
List list = new ArrayList(); WorkingMemory wm = rulebase.newStatefulSession(); wm.setGlobal( "myGlobalList", list );
8.6.3. The From Element
8.6.4. Using Globals with an e-Mail Service
Procedure 8.1. Task
- Open the integration code that is calling the rule engine.
- Obtain your emailService object and then set it in the working memory.
- In the DRL, declare that you have a global of type emailService and give it the name "email".
- In your rule consequences, you can use things like email.sendSMS(number, message), as shown in the sketch below.
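A minimal sketch of these steps follows; EmailService and its package are hypothetical application classes, and the knowledge base is assumed to have been built already:
// Java side: make the service available to rule consequences as the global "email".
import org.kie.api.runtime.KieSession;

EmailService emailService = new EmailService();   // hypothetical application service
KieSession ksession = kbase.newKieSession();      // kbase built elsewhere
ksession.setGlobal( "email", emailService );      // name must match the DRL global declaration

// DRL side (declared once in the rule file):
//   global com.example.EmailService email;
// Rule consequences can then call, for example, email.sendSMS( number, message );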
Warning
Globals are not designed to share data between rules and they should never be used for that purpose. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory.
Important
Do not set or change a global value from inside the rules. We recommend that you always set the value from your application using the working memory interface.
8.7. Functions in a Rule
Functions are most useful for invoking actions on the consequence (then) part of a rule, especially if that particular action is used repeatedly.
function String hello(String name) {
return "Hello "+name+"!";
}
Note
Note that the function keyword is used, even though it is not technically part of Java. Parameters to the function are defined as for a method. You do not have to have parameters if they are not needed. The return type is defined just like in a regular method.
8.7.1. Function Declaration with Static Method Example
Another option is to use a static method in a helper class, for example, Foo.hello(). JBoss BRMS supports the use of function imports, so the following is all you would need to enter:
import function my.package.Foo.hello
8.7.2. Calling a Function Declaration Example
rule "using a static function"
when
eval( true )
then
System.out.println( hello( "Bob" ) );
end
8.7.3. Type Declarations
Table 8.3. Type Declaration Roles
| Role | Description |
|---|---|
| Declaring new types |
JBoss BRMS uses plain Java objects as facts out of the box. However, if you wish to define the model directly to the rules engine, you can do so by declaring a new type. You can also declare a new type when there is a domain model already built and you want to complement this model with additional entities that are used mainly during the reasoning process.
|
| Declaring metadata |
Facts may have meta information associated to them. Examples of meta information include any kind of data that is not represented by the fact attributes and is consistent among all instances of that fact type. This meta information may be queried at runtime by the engine and used in the reasoning process.
|
8.7.4. Declaring New Types
To declare a new type, the keyword declare is used, followed by the list of fields and the keyword end. A new fact must have a list of fields, otherwise the engine will look for an existing fact class in the classpath and raise an error if not found.
8.7.5. Declaring a New Fact Type Example
In this example, a new fact type called Address is declared. This fact type will have three attributes: number, streetName and city. Each attribute has a type that can be any valid Java type, including any other class created by the user or other fact types previously declared:
declare Address
    number : int
    streetName : String
    city : String
end
8.7.6. Declaring a New Fact Type Additional Example
This is a more complete Person example. dateOfBirth is of the type java.util.Date (from the Java API) and address is of the fact type Address.
declare Person
    name : String
    dateOfBirth : java.util.Date
    address : Address
end
8.7.7. Using Import Example
This example uses the import feature to avoid the need to use fully qualified class names:
import java.util.Date

declare Person
    name : String
    dateOfBirth : Date
    address : Address
end
8.7.8. Generated Java Classes
8.7.9. Generated Java Class Example
For example, the following Java class is generated for the previously declared Person fact type:
public class Person implements Serializable {
private String name;
private java.util.Date dateOfBirth;
private Address address;
// empty constructor
public Person() {...}
// constructor with all fields
public Person( String name, Date dateOfBirth, Address address ) {...}
// if keys are defined, constructor with keys
public Person( ...keys... ) {...}
// getters and setters
// equals/hashCode
// toString
}
8.7.10. Using the Declared Types in Rules Example
rule "Using a declared Type" when $p : Person( name == "Bob" ) then // Insert Mark, who is Bob's manager. Person mark = new Person(); mark.setName("Mark"); insert( mark ); end
8.7.11. Declaring Metadata
@metadata_key( metadata_value )
8.7.12. Working with Metadata Attributes
8.7.13. Declaring a Metadata Attribute with Fact Types Example
In the following example, there are two metadata items declared at the type level (@author and @dateOfCreation) and two more defined for the name attribute (@key and @maxLength). The @key metadata has no required value, and so the parentheses and the value were omitted:
import java.util.Date

declare Person
    @author( Bob )
    @dateOfCreation( 01-Feb-2009 )
    name : String @key @maxLength( 30 )
    dateOfBirth : Date
    address : Address
end
8.7.14. The @position Attribute
The @position attribute can be used to declare the position of a field, overriding the default declared order. This is used for positional constraints in patterns.
8.7.15. @position Example
declare Cheese
name : String @position(1)
shop : String @position(2)
price : int @position(0)
end
8.7.16. Predefined Class Level Annotations
Table 8.4. Predefined Class Level Annotations
| Annotation | Description |
|---|---|
| @role( <fact | event> ) |
This attribute can be used to assign roles to facts and events.
|
| @typesafe( <boolean> ) |
By default, all type declarations are compiled with type safety enabled.
@typesafe( false ) provides a means to override this behavior by permitting a fall-back, to type unsafe evaluation where all constraints are generated as MVEL constraints and executed dynamically. This is useful when dealing with collections that do not have any generics or mixed type collections.
|
| @timestamp( <attribute name> ) |
Creates a timestamp.
|
| @duration( <attribute name> ) |
Sets a duration for the implementation of an attribute.
|
| @expires( <time interval> ) |
Allows you to define when the attribute should expire.
|
| @propertyChangeSupport |
Facts that implement support for property changes as defined in the JavaBean specification can be annotated so that the engine registers itself to listen for changes on fact properties.
|
| @propertyReactive | Makes the type property reactive. |
8.7.17. @key Attribute Functions
- The attribute is used as a key identifier for the type, and thus the generated class implements the equals() and hashCode() methods, taking the attribute into account when comparing instances of this type.
- JBoss BRMS generates a constructor using all the key attributes as parameters.
8.7.18. @key Declaration Example
The declaration below generates equals() and hashCode() methods that check the firstName and lastName attributes to determine if two instances of Person are equal to each other. It does not check the age attribute. It also generates a constructor taking firstName and lastName as parameters:
declare Person
firstName : String @key
lastName : String @key
age : int
end
8.7.19. Creating an Instance with the Key Constructor Example
Person person = new Person( "John", "Doe" );
8.7.20. Positional Arguments
Patterns support positional arguments on type declarations, which are defined using the @position attribute.
8.7.21. Positional Argument Example
declare Cheese
name : String
shop : String
price : int
end
declare Cheese
name : String @position(1)
shop : String @position(2)
price : int @position(0)
end
8.7.22. The @Position Annotation
8.7.23. Example Patterns
Cheese( "stilton", "Cheese Shop", p; ) Cheese( "stilton", "Cheese Shop"; p : price ) Cheese( "stilton"; shop == "Cheese Shop", p : price ) Cheese( name == "stilton"; shop == "Cheese Shop", p : price )
8.8. Backward-Chaining
8.8.1. Backward-Chaining Systems
8.8.2. Cloning Transitive Closures

Figure 8.3. Reasoning Graph
Procedure 8.2. Configure Transitive Closures
- First, create some Java rules to develop reasoning for transitive items. It inserts each of the locations.
- Next, create the Location class; it has the item and where it is located.
- Type the rules for the House example as depicted below:
ksession.insert( new Location("office", "house") ); ksession.insert( new Location("kitchen", "house") ); ksession.insert( new Location("knife", "kitchen") ); ksession.insert( new Location("cheese", "kitchen") ); ksession.insert( new Location("desk", "office") ); ksession.insert( new Location("chair", "office") ); ksession.insert( new Location("computer", "desk") ); ksession.insert( new Location("drawer", "desk") ); - A transitive design is created in which the item is in its designated location such as a "desk" located in an "office."

Figure 8.4. Transitive Reasoning Graph of a House.
Note
"key" item in a "drawer" location. This will become evident in a later topic.
8.8.3. Defining a Query
Procedure 8.3. Define a Query
- Create a query to look at the data inserted into the rules engine:
query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end
Notice how the query is recursive and is calling "isContainedIn."
- Create a rule to print out every string inserted into the system to see how things are implemented. The rule should resemble the following format:
rule "go" salience 10 when $s : String( ) then System.out.println( $s ); end - Using Step 2 as a model, create a rule that calls upon the Step 1 query "isContainedIn."
rule "go1" when String( this == "go1" ) isContainedIn("office", "house"; ) then System.out.println( "office is in the house" ); endThe "go1" rule will fire when the first string is inserted into the engine. That is, it asks if the item "office" is in the location "house." Therefore, the Step 1 query is evoked by the previous rule when the "go1" String is inserted. - Create the "go1," insert it into the engine, and call the fireAllRules.
ksession.insert( "go1" ); ksession.fireAllRules(); --- go1 office is in the houseThe --- line indicates the separation of the output of the engine from the firing of the "go" rule and the "go1" rule.- "go1" is inserted
- Salience ensures it goes first
- The rule matches the query
8.8.4. Transitive Closure Example
Procedure 8.4. Create a Transitive Closure
- Create a Transitive Closure by implementing the following rule:
rule "go2" when String( this == "go2" ) isContainedIn("drawer", "house"; ) then System.out.println( "Drawer in the House" ); end - Recall from the Cloning Transitive Closure's topic, there was no instance of "drawer" in "house." "drawer" was located in "desk."

Figure 8.5. Transitive Reasoning Graph of a Drawer.
- Use the previous query for this recursive information.
query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end - Create the "go2," insert it into the engine, and call the fireAllRules.
ksession.insert( "go2" ); ksession.fireAllRules(); --- go2 Drawer in the HouseWhen the rule is fired, it correctly tells you "go2" has been inserted and that the "drawer" is in the "house." - Check how the engine determined this outcome.
- The query has to recurse down several levels to determine this.
- Instead of using Location( x, y; ), the query uses the value of (z, y; ) since "drawer" is not in "house."
- The z is currently unbound, which means it has no value and will return everything that is in the argument.
- y is currently bound to "house," so z will return "office" and "kitchen."
- Information is gathered from "office" and checks recursively if the "drawer" is in the "office." The following query line is being called for these parameters:
isContainedIn (x ,z; )
There is no instance of "drawer" in "office;" therefore, it does not match. Withzbeing unbound, it will return data that is within the "office," and it will gather thatz == desk.isContainedIn(x==drawer, z==desk)isContainedInrecurses three times. On the final recurse, an instance triggers of "drawer" in the "desk."Location(x==drawer, y==desk)This matches on the first location and recurses back up, so we know that "drawer" is in the "desk," the "desk" is in the "office," and the "office" is in the "house;" therefore, the "drawer" is in the "house" and returnstrue.
8.8.5. Reactive Transitive Queries
Procedure 8.5. Create a Reactive Transitive Query
- Create a Reactive Transitive Query by implementing the following rule:
rule "go3" when String( this == "go3" ) isContainedIn("key", "office"; ) then System.out.println( "Key in the Office" ); endReactive Transitive Queries can ask a question even if the answer can not be satisfied. Later, if it is satisfied, it will return an answer.Note
Recall from the Cloning Transitive Closures example that there was no "key" item in the system. - Use the same query for this reactive information.
query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end - Create the "go3," insert it into the engine, and call the fireAllRules.
ksession.insert( "go3" ); ksession.fireAllRules(); --- go3- "go3" is inserted
- fireAllRules(); is called
The first rule that matches any String returns "go3" but nothing else is returned because there is no answer; however, while "go3" is inserted in the system, it will continuously wait until it is satisfied. - Insert a new location of "key" in the "drawer":
ksession.insert( new Location("key", "drawer") ); ksession.fireAllRules(); --- Key in the OfficeThis new location satisfies the transitive closure because it is monitoring the entire graph. In addition, this process now has four recursive levels in which it goes through to match and fire the rule.
8.8.6. Queries with Unbound Arguments
Procedure 8.6. Create an Unbound Argument's Query
- Create a Query with Unbound Arguments by implementing the following rule:
rule "go4" when String( this == "go4" ) isContainedIn(thing, "office"; ) then System.out.println( "thing" + thing + "is in the Office" ); endThis rule is asking for everything in the "office," and it will tell everything in all the rows below. The unbound argument (out variablething) in this example will return every possible value; accordingly, it is very similar to thezvalue used in the Reactive Transitive Query example. - Use the query for the unbound arguments.
query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end - Create the "go4," insert it into the engine, and call the fireAllRules.
ksession.insert( "go4" ); ksession.fireAllRules(); --- go4 thing Key is in the Office thing Computer is in the Office thing Drawer is in the Office thing Desk is in the Office thing Chair is in the OfficeWhen "go4" is inserted, it returns all the previous information that is transitively below "Office."
8.8.7. Multiple Unbound Arguments
Procedure 8.7. Creating Multiple Unbound Arguments
- Create a query with Multiple Unbound Arguments by implementing the following rule:
rule "go5" when String( this == "go5" ) isContainedIn(thing, location; ) then System.out.println( "thing" + thing + "is in" + location ); endBoththingandlocationare unbound out variables, and without bound arguments, everything is called upon. - Use the query for multiple unbound arguments.
query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end - Create the "go5," insert it into the engine, and call the fireAllRules.
ksession.insert( "go5" ); ksession.fireAllRules(); --- go5 thing Knife is in House thing Cheese is in House thing Key is in House thing Computer is in House thing Drawer is in House thing Desk is in House thing Chair is in House thing Key is in Office thing Computer is in Office thing Drawer is in Office thing Key is in Desk thing Office is in House thing Computer is in Desk thing Knife is in Kitchen thing Cheese is in Kitchen thing Kitchen is in House thing Key is in Drawer thing Drawer is in Desk thing Desk is in Office thing Chair is in OfficeWhen "go5" is called, it returns everything within everything.
8.9. Type Declaration
8.9.1. Declaring Metadata for Existing Types
8.9.2. Declaring Metadata for Existing Types Example
import org.drools.examples.Person declare Person @author( Bob ) @dateOfCreation( 01-Feb-2009 ) end
8.9.3. Declaring Metadata Using a Fully Qualified Class Name Example
declare org.drools.examples.Person @author( Bob ) @dateOfCreation( 01-Feb-2009 ) end
8.9.4. Parametrized Constructors for Declared Types Example
declare Person
firstName : String @key
lastName : String @key
age : int
end
Person() // parameterless constructor Person( String firstName, String lastName ) Person( String firstName, String lastName, int age )
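As an illustrative sketch (the rule itself is hypothetical), a rule consequence can call one of these generated constructors directly:

rule "register a default person"
when
    not Person()
then
    // the two-argument constructor generated from the @key fields
    insert( new Person( "John", "Doe" ) );
end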
8.9.5. Non-Typesafe Classes
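As a hedged sketch of this topic (the LegacyRecord type is purely illustrative and the annotation is assumed, not taken from the surrounding text), a declaration is typically marked non-typesafe with the @typesafe annotation, which relaxes compile-time type checking of constraints against that type:

declare LegacyRecord
    @typesafe( false )
    payload : java.util.Map
end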
8.9.6. Accessing Declared Types from the Application Code
8.9.7. Declaring a Type
package org.drools.examples import java.util.Date declare Person name : String dateOfBirth : Date address : Address end
8.9.8. Handling Declared Fact Types Through the API Example
// get a reference to a knowledge base with a declared type:
KieBase kbase = ...
// get the declared FactType
FactType personType = kbase.getFactType( "org.drools.examples",
"Person" );
// handle the type as necessary:
// create instances:
Object bob = personType.newInstance();
// set attributes values
personType.set( bob,
"name",
"Bob" );
personType.set( bob,
"age",
42 );
// insert fact into a session
KieSession ksession = ...
ksession.insert( bob );
ksession.fireAllRules();
// read attributes
String name = personType.get( bob, "name" );
int age = personType.get( bob, "age" );
8.9.9. Type Declaration Extends
8.9.10. Type Declaration Extends Example
The following example shows type declarations using the extends keyword:
import org.people.Person
declare Person
end
declare Student extends Person
school : String
end
declare LongTermStudent extends Student
years : int
course : String
end
8.9.11. Traits
A type is declared as a trait when the @format(trait) annotation is added to its declaration in DRL.
8.9.12. Traits Example
declare GoldenCustomer
@format(trait)
// fields will map to getters/setters
code : String
balance : long
discount : int
maxExpense : long
end
when
$c : Customer()
then
GoldenCustomer gc = don( $c, GoldenCustomer.class );
end
8.9.13. Core Objects and Traits
8.9.14. @Traitable Example
declare Customer
@Traitable
code : String
balance : long
end
8.9.15. Writing Rules with Traits
8.9.16. Rules with Traits Example
when
$o: OrderItem( $p : price, $code : custCode )
$c: GoldenCustomer( code == $code, $a : balance, $d: discount )
then
$c.setBalance( $a - $p*$d );
end
8.9.17. Hidden Fields
8.9.18. The Two-Part Proxy
8.9.19. Wrappers
8.9.20. Wrapper Example
when
$sc : GoldenCustomer( $c : code, // hard getter
$maxExpense : maxExpense > 1000 // soft getter
)
then
$sc.setDiscount( ... ); // soft setter
end
8.9.21. Wrapper with isA Annotation Example
$sc : GoldenCustomer( $maxExpense : maxExpense > 1000,
this isA "SeniorCustomer"
)
8.9.22. Removing Traits
- Logical don
- Results in a logical insertion of the proxy resulting from the traiting operation.
then don( $x, // core object Customer.class, // trait class true // optional flag for logical insertion )
- The shed keyword
- The shed keyword causes the retraction of the proxy corresponding to the given argument type.
then Thing t = shed( $x, GoldenCustomer.class )
This operation returns another proxy implementing the org.drools.factmodel.traits.Thing interface, where the getFields() and getCore() methods are defined. Internally, all declared traits are generated to extend this interface (in addition to any others specified). This allows the wrapper with the soft fields, which would otherwise be lost, to be preserved.
8.9.23. Rule Syntax Example
rule "<name>"
<attribute>*
when
<conditional element>*
then
<action>*
end
8.10. Rule Attributes
Table 8.5. Rule Attributes
| Attribute Name | Default Value | Type | Description |
|---|---|---|---|
| no-loop | false | Boolean | When a rule's consequence modifies a fact it may cause the rule to activate again, causing an infinite loop. Setting no-loop to true will skip the creation of another Activation for the rule with the current set of facts. |
| ruleflow-group | N/A | String | Ruleflow is a Drools feature that lets you exercise control over the firing of rules. Rules that are assembled by the same ruleflow-group identifier fire only when their group is active. |
| lock-on-active | false | Boolean | Whenever a ruleflow-group becomes active or an agenda-group receives the focus, any rule within that group that has lock-on-active set to true will not be activated any more; irrespective of the origin of the update, the activation of a matching rule is discarded. This is a stronger version of no-loop, because the change could now be caused not only by the rule itself. It is ideal for calculation rules where you have a number of rules that modify a fact and you do not want any rule re-matching and firing again. Only when the ruleflow-group is no longer active or the agenda-group loses the focus do those rules with lock-on-active set to true become eligible again for their activations to be placed onto the agenda. |
| salience | 0 | Integer | Each rule has an integer salience attribute which defaults to zero and can be negative or positive. Salience is a form of priority where rules with higher salience values are given higher priority when ordered in the Activation queue. |
| agenda-group | MAIN | String | Agenda groups allow the user to partition the Agenda, providing more execution control. Only rules in the agenda group that has acquired the focus are allowed to fire. |
| auto-focus | false | Boolean | When a rule is activated where the auto-focus value is true and the rule's agenda group does not have focus yet, then it is given focus, allowing the rule to potentially fire. |
| activation-group | N/A | String | Rules that belong to the same activation-group, identified by this attribute's string value, will only fire exclusively. In other words, the first rule in an activation-group to fire will cancel the other rules' activations, that is, stop them from firing. |
| dialect | As specified by the package | String | The dialect specifies the language to be used for any code expressions in the LHS or the RHS code block. Currently two dialects are available, Java and MVEL. While the dialect can be specified at the package level, this attribute allows the package definition to be overridden for a rule. |
| date-effective | N/A | String, containing a date and time definition | A rule can only activate if the current date and time is after the date-effective attribute. |
| date-expires | N/A | String, containing a date and time definition | A rule cannot activate if the current date and time is after the date-expires attribute. |
| duration | no default value | long | The duration dictates that the rule will fire after a specified duration, if it is still true. |
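The following rule is an illustrative sketch only (the Policy fact type, the group name, and the date are hypothetical) showing how several of these attributes can be combined on a single rule:

rule "recalculate premium"
    no-loop true
    ruleflow-group "pricing"
    lock-on-active true
    salience 10
    date-effective "01-Jan-2015"
    dialect "java"
when
    $p : Policy( status == "ACTIVE" )
then
    // raise the premium by 5%; no-loop and lock-on-active prevent re-firing
    modify( $p ) { setPremium( $p.getPremium() * 1.05 ) }
end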
8.10.1. Rule Attribute Example
rule "my rule"
salience 42
agenda-group "number-1"
when ...
8.10.2. Timer Attribute Example
The timer attribute looks like:
timer ( int: <initial delay> <repeat interval>? )
timer ( int: 30s )
timer ( int: 30s 5m )
timer ( cron: <cron expression> )
timer ( cron: * 0/15 * * * ? )
8.10.3. Timers
- Interval
- Interval (indicated by "int:") timers follow the semantics of
java.util.Timer objects, with an initial delay and an optional repeat interval.
- Cron
- Cron (indicated by "cron:") timers follow standard Unix cron expressions.
8.10.4. Cron Timer Example
rule "Send SMS every 15 minutes"
timer (cron:* 0/15 * * * ?)
when
$a : Alarm( on == true )
then
channels[ "sms" ].insert( new Sms( $a.mobileNumber, "The alarm is still on" );
end
8.10.5. Calendars
8.10.6. Quartz Calendar Example
Calendar weekDayCal = QuartzHelper.quartzCalendarAdapter(org.quartz.Calendar quartzCal)
8.10.7. Registering a Calendar
Procedure 8.8. Task
- Start a StatefulKnowledgeSession.
- Use the following code to register the calendar:
ksession.getCalendars().set( "weekday", weekDayCal );
- If you wish to utilize the calendar and a timer together, use the following code:
rule "weekdays are high priority" calendars "weekday" timer (int:0 1h) when Alarm() then send( "priority high - we have an alarm” ); end rule "weekend are low priority" calendars "weekend" timer (int:0 4h) when Alarm() then send( "priority low - we have an alarm” ); end
8.10.8. Left Hand Side
8.10.9. Conditional Elements
The default conditional element is and. It is implicit when you have multiple patterns in the LHS of a rule that are not connected in any way.
8.10.10. Rule Without a Conditional Element Example
rule "no CEs"
when
// empty
then
... // actions (executed once)
end
// The above rule is internally rewritten as:
rule "eval(true)"
when
eval( true )
then
... // actions (executed once)
end
8.11. Patterns
8.11.1. Pattern Example
rule "2 unconnected patterns"
when
Pattern1()
Pattern2()
then
... // actions
end
// The above rule is internally rewritten as:
rule "2 and connected patterns"
when
Pattern1()
and Pattern2()
then
... // actions
end
Note
An and cannot have a leading declaration binding. This is because a declaration can only reference a single fact at a time, and when the and is satisfied it matches both facts.
8.11.2. Pattern Matching
8.11.3. Pattern Binding
To refer to the matched object, bind the pattern to a variable such as $p.
8.11.4. Pattern Binding with Variable Example
rule ...
when
$p : Person()
then
System.out.println( "Person " + $p );
end
Note
The dollar symbol prefix ($) is a convention and is not mandatory.
8.11.5. Constraints
A constraint is an expression that evaluates to true or false. For example, you can have a constraint that states that five is smaller than six.
8.12. Elements and Variables
8.12.1. Property Access on Java Beans (POJOs)
A property is accessed through a getter method such as getMyProperty() (or isMyProperty() for a primitive boolean) that takes no arguments and returns something.
JBoss BRMS uses the standard JDK Introspector class to do this mapping, so it follows the standard Java bean specification.
Warning
8.12.2. POJO Example
Person( age == 50 ) // this is the same as: Person( getAge() == 50 )
- The age property
- The age property is written as age in DRL instead of the getter getAge().
- Property accessors
- You can use property access (age) instead of explicit getters (getAge()) because of performance enhancements through field indexing.
8.12.3. Working with POJOs
Procedure 8.9. Task
- Observe the example below:
public int getAge() { Date now = DateUtil.now(); // Do NOT do this return DateUtil.differenceInYears(now, birthday); }
- To solve this, insert a fact that wraps the current date into working memory and update that fact between fireAllRules calls as needed, as shown in the sketch below.
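A minimal sketch of this approach, assuming a hypothetical Order fact with a dueDate field; the Clock fact is declared in DRL, inserted once by the application, and updated between fireAllRules calls:

declare Clock
    now : java.util.Date
end

rule "order is overdue"
when
    Clock( $now : now )
    $o : Order( dueDate < $now )
then
    // the current date comes from the Clock fact, not from a getter with side effects
    System.out.println( "Order " + $o + " is overdue" );
end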
8.12.4. POJO Fallbacks
8.12.5. Fallback Example
Person( age == 50 ) // If Person.getAge() does not exist, this falls back to: Person( age() == 50 )
Person( address.houseNumber == 50 ) // this is the same as: Person( getAddress().getHouseNumber() == 50 )
Warning
If the houseNumber changes, any Person with that Address must be marked as updated.
8.12.6. Java Expressions
Table 8.6. Java Expressions
| Capability | Example |
|---|---|
| You can use any Java expression that returns a boolean as a constraint inside the parentheses of a pattern. Java expressions can be mixed with other expression enhancements, such as property access. | Person( age == 50 ) |
| You can change the evaluation priority by using parentheses, as in any logic or mathematical expression. | Person( age > 100 && ( age % 10 == 0 ) ) |
| You can reuse Java methods. | Person( Math.round( weight / ( height * height ) ) < 25.0 ) |
| Type coercion is always attempted if the field and the value are of different types; exceptions will be thrown if a bad coercion is attempted. | Person( age == "10" ) // "10" is coerced to 10 |
Warning
Person( System.currentTimeMillis() % 1000 == 0 ) // Do NOT do this
Important
In constraints, the operators == and != do not follow normal Java identity semantics.
The == operator has null-safe equals() semantics:
// Similar to: java.util.Objects.equals(person.getFirstName(), "John") // so (because "John" is not null) similar to: // "John".equals(person.getFirstName()) Person( firstName == "John" )
The != operator has null-safe !equals() semantics:
// Similar to: !java.util.Objects.equals(person.getFirstName(), "John") Person( firstName != "John" )
8.12.7. Comma-Separated Operators
The comma character (,) is used to separate constraint groups. It has implicit and connective semantics.
8.12.8. Comma-Separated Operator Example
// Person is at least 50 and weighs at least 80 kg Person( age > 50, weight > 80 )
// Person is at least 50, weighs at least 80 kg and is taller than 2 meter. Person( age > 50, weight > 80, height > 2 )
Note
The comma (,) operator cannot be embedded in a composite constraint expression, such as parentheses.
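For example (a small sketch reusing the Person fields from this chapter), the comma must be replaced by && as soon as the constraints are grouped:

// Not allowed: a comma inside a composite (parenthesized) expression
// Person( ( age > 50, weight > 80 ) || height > 2 )

// Allowed: use && inside the group instead
Person( ( age > 50 && weight > 80 ) || height > 2 )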
8.12.9. Binding Variables
8.12.10. Binding Variable Examples
// 2 persons of the same age Person( $firstAge : age ) // binding Person( age == $firstAge ) // constraint expression
Note
// Not recommended Person( $age : age * 2 < 100 )
// Recommended (separates bindings and constraint expressions) Person( age * 2 < 100, $age : age )
8.12.11. Unification
8.12.12. Unification Example
Person( $age := age ) Person( $age := age)
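A slightly fuller sketch of unification in context (the rule itself is illustrative): the first pattern binds $age with :=, and the second pattern unifies against that value, so the rule matches two distinct people of the same age.

rule "two people of the same age"
when
    $p1 : Person( $age := age )
    $p2 : Person( this != $p1, $age := age )
then
    System.out.println( $p1 + " and " + $p2 + " are both " + $age );
end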
8.12.13. Options and Operators in Red Hat JBoss BRMS
Table 8.7. Options and Operators in Red Hat JBoss BRMS
| Option | Description | Example |
|---|---|---|
|
Date literal
|
The date format
dd-mmm-yyyy is supported by default. You can customize this by providing an alternative date format mask as the System property named drools.dateformat. If more control is required, use a restriction.
|
Cheese( bestBefore < "27-Oct-2009" ) |
| List and Map access |
You can directly access a
List value by index.
|
// Same as childList(0).getAge() == 18 Person( childList[0].age == 18 ) |
| Value key |
You can directly access a
Map value by key.
|
// Same as credentialMap.get("jsmith").isValid()
Person( credentialMap["jsmith"].valid )
|
|
Abbreviated combined relation condition
|
This allows you to place more than one restriction on a field using the restriction connectives
&& or ||. Grouping via parentheses is permitted, resulting in a recursive syntax pattern.
|
// Simple abbreviated combined relation condition using a single && Person( age > 30 && < 40 ) // Complex abbreviated combined relation using groupings
Person( age ( (> 30 && < 40) ||
(> 20 && < 25) ) )
// Mixing abbreviated combined relation with constraint connectives Person( age > 30 && < 40 || location == "london" ) |
| Operators |
Operators can be used on properties with natural ordering. For example, for Date fields,
< means before, for String fields, it means alphabetically lower.
|
Person( firstName < $otherFirstName ) Person( birthDate < $otherBirthDate ) |
|
Operator matches
|
Matches a field against any valid Java
regular expression. Typically that regexp is a string literal, but variables that resolve to a valid regexp are also allowed. It only applies on String properties. Using matches against a null value always evaluates to false.
|
Cheese( type matches "(Buffalo)?\\S*Mozarella" ) |
|
Operator not matches
|
The operator returns true if the String does not match the regular expression. The same rules apply as for the
matches operator. It only applies on String properties.
|
Cheese( type not matches "(Buffulo)?\\S*Mozarella" ) |
|
The operator contains
|
The operator contains is used to check whether a field that is a Collection or array contains the specified value. It only applies to Collection properties.
|
CheeseCounter( cheeses contains "stilton" ) // contains with a String literal CheeseCounter( cheeses contains $var ) // contains with a variable |
|
The operator not contains
|
The operator
not contains is used to check whether a field that is a Collection or array does not contain the specified value. It only applies on Collection properties.
|
CheeseCounter( cheeses not contains "cheddar" ) // not contains with a String literal CheeseCounter( cheeses not contains $var ) // not contains with a variable |
|
The operator memberOf
|
The operator
memberOf is used to check whether a field is a member of a collection or array; that collection must be a variable.
|
CheeseCounter( cheese memberOf $matureCheeses ) |
|
The operator not memberOf
|
The operator
not memberOf is used to check whether a field is not a member of a collection or array. That collection must be a variable.
|
CheeseCounter( cheese not memberOf $matureCheeses ) |
|
The operator soundslike
|
This operator is similar to
matches, but it checks whether a word has almost the same sound (using English pronunciation) as the given value.
|
// match cheese "fubar" or "foobar" Cheese( name soundslike 'foobar' ) |
|
The operator str
|
The operator
str is used to check whether a field that is a String starts with or ends with a certain value. It can also be used to check the length of the String.
|
Message( routingValue str[startsWith] "R1" ) Message( routingValue str[endsWith] "R2" ) Message( routingValue str[length] 17 ) |
|
Compound Value Restriction
|
Compound value restriction is used where there is more than one possible value to match. Currently only the
in and not in evaluators support this. The second operand of this operator must be a comma-separated list of values, enclosed in parentheses. Values may be given as variables, literals, return values or qualified identifiers. Both evaluators are actually syntactic sugar, internally rewritten as a list of multiple restrictions using the operators != and ==.
|
Person( $cheese : favouriteCheese ) Cheese( type in ( "stilton", "cheddar", $cheese ) ) |
|
Inline Eval Operator (deprecated)
|
An inline eval constraint can use any valid dialect expression as long as it evaluates to a primitive boolean. The expression must be constant over time. Any previously bound variable, from the current or previous pattern, can be used; autovivification is also used to auto-create field binding variables. When an identifier is found that is not a current variable, the builder looks to see if the identifier is a field on the current object type; if it is, the field binding is auto-created as a variable of the same name. This is called autovivification of field variables inside of inline evals.
|
Person( girlAge : age, sex = "F" ) Person( eval( age == girlAge + 2 ), sex = 'M' ) // eval() is actually obsolete in this example |
8.12.14. Operator Precedence
Table 8.8. Operator precedence
| Operator type | Operators | Notes |
|---|---|---|
| (nested) property access | . | Not normal Java semantics |
| List/Map access | [ ] | Not normal Java semantics |
| constraint binding | : | Not normal Java semantics |
| multiplicative | * / % | |
| additive | + - | |
| shift | << >> >>> | |
| relational | < > <= >= instanceof | |
| equality | == != | Does not use normal Java (not) same semantics: uses (not) equals semantics instead. |
| non-short circuiting AND | & | |
| non-short circuiting exclusive OR | ^ | |
| non-short circuiting inclusive OR | | | |
| logical AND | && | |
| logical OR | || | |
| ternary | ? : | |
| Comma separated AND | , | Not normal Java semantics |
8.12.15. Fine Grained Property Change Listeners
Note
8.12.16. Fine Grained Property Change Listener Example
- DRL example
declare Person @propertyReactive firstName : String lastName : String end
- Java class example
@PropertyReactive public static class Person { private String firstName; private String lastName; }
8.12.17. Working with Fine Grained Property Change Listeners
8.12.18. Using Patterns with @watch
Watched properties can be explicitly excluded by prefixing them with !, and the pattern can be made to listen for all or none of the properties of the type used in the pattern with the wildcards * and !* respectively.
8.12.19. @watch Example
// listens for changes on both firstName (inferred) and lastName
Person( firstName == $expectedFirstName ) @watch( lastName )
// listens for all the properties of the Person bean
Person( firstName == $expectedFirstName ) @watch( * )
// listens for changes on lastName and explicitly exclude firstName
Person( firstName == $expectedFirstName ) @watch( lastName, !firstName )
// listens for changes on all the properties except the age one
Person( firstName == $expectedFirstName ) @watch( *, !age )
Note
8.12.20. Using @PropertySpecificOption
Property reactivity can also be configured globally through the PropertySpecificOption of the KnowledgeBuilderConfiguration. This option can have one of the following three values:
- DISABLED => the feature is turned off and all the other related annotations are just ignored
- ALLOWED => this is the default behavior: types are not property reactive unless they are annotated with @PropertyReactive
- ALWAYS => all types are property reactive by default
8.12.21. Basic Conditional Elements
Table 8.9. Basic Conditional Elements
| Name | Description | Example | Additional options |
|---|---|---|---|
|
and
|
The Conditional Element
and is used to group other Conditional Elements into a logical conjunction. JBoss BRMS supports both prefix and and infix and. It supports explicit grouping with parentheses. You can also use traditional infix and prefix and.
|
//infixAnd Cheese( cheeseType : type ) and Person( favouriteCheese == cheeseType ) //infixAnd with grouping
( Cheese( cheeseType : type ) and
( Person( favouriteCheese == cheeseType ) or
Person( favouriteCheese == cheeseType ) )
|
Prefix
and is also supported:
(and Cheese( cheeseType : type )
Person( favouriteCheese == cheeseType ) )
The root element of the LHS is an implicit prefix
and and doesn't need to be specified:
when
Cheese( cheeseType : type )
Person( favouriteCheese == cheeseType )
then
...
|
|
or
|
This is a shortcut for generating two or more similar rules. JBoss BRMS supports both prefix
or and infix or. You can use traditional infix, prefix and explicit grouping parentheses.
|
//infixOr Cheese( cheeseType : type ) or Person( favouriteCheese == cheeseType ) //infixOr with grouping
( Cheese( cheeseType : type ) or
( Person( favouriteCheese == cheeseType ) and
Person( favouriteCheese == cheeseType ) )
(or Person( sex == "f", age > 60 )
Person( sex == "m", age > 65 )
|
Allows for optional pattern binding. Each pattern must be bound separately, using eponymous variables:
pensioner : ( Person( sex == "f", age > 60 ) or Person( sex == "m", age > 65 ) ) (or pensioner : Person( sex == "f", age > 60 )
pensioner : Person( sex == "m", age > 65 ) )
|
|
not
|
This checks to ensure an object specified as absent is not included in the Working Memory. It may be followed by parentheses around the condition elements it applies to. (In a single pattern you can omit the parentheses.)
|
// Brackets are optional:
not Bus(color == "red")
// Brackets are optional:
not ( Bus(color == "red", number == 42) )
// "not" with nested infix
| |
| exists |
This checks the working memory to see if a specified item exists. The keyword
exists must be followed by parentheses around the CEs that it applies to. (In a single pattern you can omit the parentheses.)
|
exists Bus(color == "red")
// brackets are optional:
exists ( Bus(color == "red", number == 42) )
// "exists" with nested infix
| |
Note
or is different from the connective || for constraints and restrictions in field constraints. The engine cannot interpret the Conditional Element or. Instead, a rule with or is rewritten as a number of subrules. This process ultimately results in a rule that has a single or as the root node and one subrule for each of its CEs. Each subrule can activate and fire like any normal rule; there is no special behavior or interaction between these subrules.
8.12.22. The Conditional Element Forall
Forall can be nested inside other CEs. For instance, forall can be used inside a not CE. Only single patterns have optional parentheses, so with a nested forall parentheses must be used.
8.12.23. Forall Examples
- Evaluating to true
rule "All English buses are red" when forall( $bus : Bus( type == 'english') Bus( this == $bus, color = 'red' ) ) then // all English buses are red end- Single pattern forall
rule "All Buses are Red" when forall( Bus( color == 'red' ) ) then // all Bus facts are red end- Multi-pattern forall
rule "all employees have health and dental care programs" when forall( $emp : Employee() HealthCare( employee == $emp ) DentalCare( employee == $emp ) ) then // all employees have health and dental care end- Nested forall
rule "not all employees have health and dental care" when not ( forall( $emp : Employee() HealthCare( employee == $emp ) DentalCare( employee == $emp ) ) ) then // not all employees have health and dental care end
8.12.24. The Conditional Element From
from enables users to specify an arbitrary source for data to be matched by LHS patterns. This allows the engine to reason over data not in the Working Memory. The data source could be a sub-field on a bound variable or the results of a method call. It is a powerful construction that allows out of the box integration with other application components and frameworks. One common example is the integration with data retrieved on-demand from databases using hibernate named queries.
Important
Using from with the lock-on-active rule attribute can result in rules not being fired.
- Avoid the use of
from when you can assert all facts into working memory or use nested object references in your constraint expressions (shown below).
- Place the variable assigned in the modify block as the last sentence in your condition (LHS).
- Avoid the use of
lock-on-active when you can explicitly manage how rules within the same rule-flow group place activations on one another.
8.12.25. From Examples
- Reasoning and binding on patterns
rule "validate zipcode" when Person( $personAddress : address ) Address( zipcode == "23920W") from $personAddress then // zip code is ok end- Using a graph notation
rule "validate zipcode" when $p : Person( ) $a : Address( zipcode == "23920W") from $p.address then // zip code is ok end- Iterating over all objects
rule "apply 10% discount to all items over US$ 100,00 in an order" when $order : Order() $item : OrderItem( value > 100 ) from $order.items then // apply discount to $item end- Use with lock-on-active
rule "Assign people in North Carolina (NC) to sales region 1" ruleflow-group "test" lock-on-active true when $p : Person(address.state == "NC" ) then modify ($p) {} // Assign person to sales region 1 in a modify block end rule "Apply a discount to people in the city of Raleigh" ruleflow-group "test" lock-on-active true when $p : Person(address.city == "Raleigh" ) then modify ($p) {} //Apply discount to person in a modify block end
8.12.26. The Conditional Element Collect
collect allows rules to reason over a collection of objects obtained from the given source or from the working memory. In First Order Logic terms this is the cardinality quantifier.
The result pattern of collect can be any concrete class that implements the java.util.Collection interface and provides a default no-arg public constructor. You can use Java collections like ArrayList, LinkedList and HashSet, or your own class, as long as it implements the java.util.Collection interface and provides a default no-arg public constructor.
Variables bound before the collect CE are in the scope of both source and result patterns and therefore you can use them to constrain both your source and result patterns. Any binding made inside collect is not available for use outside of it.
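The following rule is an illustrative sketch only (the Bus fact type is hypothetical): it collects all red buses currently in the Working Memory into an ArrayList and fires once at least ten are present.

rule "at least ten red buses"
when
    $redBuses : java.util.ArrayList( size >= 10 )
                from collect( Bus( color == "red" ) )
then
    System.out.println( "Found " + $redBuses.size() + " red buses" );
end

Note that $redBuses, bound in the result pattern, is usable in the consequence, while bindings made inside the collect source pattern are not.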
8.12.27. The Conditional Element Accumulate
accumulate is a more flexible and powerful form of collect, in the sense that it can be used to do what collect does and also achieve results that the CE collect is not capable of doing. It allows a rule to iterate over a collection of objects, executing custom actions for each of the elements. At the end it returns a result object.
8.12.28. Syntax for the Conditional Element Accumulate
- Top level accumulate syntax
accumulate( <source pattern>; <functions> [;<constraints>] )
- Syntax example
rule "Raise alarm" when $s : Sensor() accumulate( Reading( sensor == $s, $temp : temperature ); $min : min( $temp ), $max : max( $temp ), $avg : average( $temp ); $min < 20, $avg > 70 ) then // raise the alarm endIn the above example, min, max and average are Accumulate Functions and will calculate the minimum, maximum and average temperature values over all the readings for each sensor.
8.12.29. Functions of the Conditional Element Accumulate
- average
- min
- max
- count
- sum
- collectList
- collectSet
rule "Average profit"
when
$order : Order()
accumulate( OrderItem( order == $order, $cost : cost, $price : price );
$avgProfit : average( 1 - $cost / $price ) )
then
// average profit for $order is $avgProfit
end
8.12.30. The Conditional Element accumulate and Pluggability
To plug in a new Accumulate Function, implement the org.drools.runtime.rule.TypedAccumulateFunction interface and add a line to the configuration file or set a system property to let the engine know about the new function.
8.12.31. The Conditional Element accumulate and Pluggability Example
The following code shows an implementation of the average function:
/**
* An implementation of an accumulator capable of calculating average values
*/
public class AverageAccumulateFunction implements org.drools.runtime.rule.TypedAccumulateFunction {
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
}
public void writeExternal(ObjectOutput out) throws IOException {
}
public static class AverageData implements Externalizable {
public int count = 0;
public double total = 0;
public AverageData() {}
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
count = in.readInt();
total = in.readDouble();
}
public void writeExternal(ObjectOutput out) throws IOException {
out.writeInt(count);
out.writeDouble(total);
}
}
/* (non-Javadoc)
* @see org.drools.base.accumulators.AccumulateFunction#createContext()
*/
public Serializable createContext() {
return new AverageData();
}
/* (non-Javadoc)
* @see org.drools.base.accumulators.AccumulateFunction#init(java.lang.Object)
*/
public void init(Serializable context) throws Exception {
AverageData data = (AverageData) context;
data.count = 0;
data.total = 0;
}
/* (non-Javadoc)
* @see org.drools.base.accumulators.AccumulateFunction#accumulate(java.lang.Object, java.lang.Object)
*/
public void accumulate(Serializable context,
Object value) {
AverageData data = (AverageData) context;
data.count++;
data.total += ((Number) value).doubleValue();
}
/* (non-Javadoc)
* @see org.drools.base.accumulators.AccumulateFunction#reverse(java.lang.Object, java.lang.Object)
*/
public void reverse(Serializable context,
Object value) throws Exception {
AverageData data = (AverageData) context;
data.count--;
data.total -= ((Number) value).doubleValue();
}
/* (non-Javadoc)
* @see org.drools.base.accumulators.AccumulateFunction#getResult(java.lang.Object)
*/
public Object getResult(Serializable context) throws Exception {
AverageData data = (AverageData) context;
return new Double( data.count == 0 ? 0 : data.total / data.count );
}
/* (non-Javadoc)
* @see org.drools.base.accumulators.AccumulateFunction#supportsReverse()
*/
public boolean supportsReverse() {
return true;
}
/**
* {@inheritDoc}
*/
public Class< ? > getResultType() {
return Number.class;
}
}
8.12.32. Code for the Conditional Element Accumulate's Functions
- Code for plugging in functions (to be entered into the config file)
jbossrules.accumulate.function.average = org.jbossrules.base.accumulators.AverageAccumulateFunction
- Alternate Syntax: single function with return type
rule "Apply 10% discount to orders over US$ 100,00" when $order : Order() $total : Number( doubleValue > 100 ) from accumulate( OrderItem( order == $order, $value : value ), sum( $value ) ) then # apply discount to $order end
8.12.33. Accumulate with Inline Custom Code
Warning
The general syntax of the accumulate CE with inline custom code is:
<result pattern> from accumulate( <source pattern>, init( <init code> ), action( <action code> ), reverse( <reverse code> ), result( <result expression> ) )
- <source pattern>: the source pattern is a regular pattern that the engine will try to match against each of the source objects.
- <init code>: this is a semantic block of code in the selected dialect that will be executed once for each tuple, before iterating over the source objects.
- <action code>: this is a semantic block of code in the selected dialect that will be executed for each of the source objects.
- <reverse code>: this is an optional semantic block of code in the selected dialect that if present will be executed for each source object that no longer matches the source pattern. The objective of this code block is to undo any calculation done in the <action code> block, so that the engine can do decremental calculation when a source object is modified or retracted, hugely improving performance of these operations.
- <result expression>: this is a semantic expression in the selected dialect that is executed after all source objects are iterated.
- <result pattern>: this is a regular pattern that the engine tries to match against the object returned from the <result expression>. If it matches, the
accumulate conditional element evaluates to true and the engine proceeds with the evaluation of the next CE in the rule. If it does not match, the accumulate CE evaluates to false and the engine stops evaluating CEs for that rule.
8.12.34. Accumulate with Inline Custom Code Examples
- Inline custom code
rule "Apply 10% discount to orders over US$ 100,00" when $order : Order() $total : Number( doubleValue > 100 ) from accumulate( OrderItem( order == $order, $value : value ), init( double total = 0; ), action( total += $value; ), reverse( total -= $value; ), result( total ) ) then # apply discount to $order endIn the above example, for eachOrderin the Working Memory, the engine will execute the init code initializing the total variable to zero. Then it will iterate over allOrderItemobjects for that order, executing the action for each one (in the example, it will sum the value of all items into the total variable). After iterating over allOrderItemobjects, it will return the value corresponding to the result expression (in the above example, the value of variabletotal). Finally, the engine will try to match the result with theNumberpattern, and if the double value is greater than 100, the rule will fire.- Instantiating and populating a custom object
rule "Accumulate using custom objects" when $person : Person( $likes : likes ) $cheesery : Cheesery( totalAmount > 100 ) from accumulate( $cheese : Cheese( type == $likes ), init( Cheesery cheesery = new Cheesery(); ), action( cheesery.addCheese( $cheese ); ), reverse( cheesery.removeCheese( $cheese ); ), result( cheesery ) ); then // do something end
8.12.35. Conditional Element Eval
eval is essentially a catch-all which allows any semantic code (that returns a primitive boolean) to be executed. This code can refer to variables that were bound in the LHS of the rule, and functions in the rule package. Overuse of eval reduces the declarativeness of your rules and can result in a poorly performing engine. While eval can be used anywhere in the patterns, the best practice is to add it as the last conditional element in the LHS of a rule.
8.12.36. Conditional Element Eval Examples
The following examples show how eval looks in use:
p1 : Parameter() p2 : Parameter() eval( p1.getList().containsKey( p2.getItem() ) )
p1 : Parameter() p2 : Parameter() // call function isValid in the LHS eval( isValid( p1, p2 ) )
8.12.37. The Right Hand Side
Note
8.12.38. RHS Convenience Methods
Table 8.10. RHS Convenience Methods
| Name | Description |
|---|---|
| update(object, handle); | Tells the engine that an object has changed (one that has been bound to something on the LHS) and rules may need to be reconsidered. |
| update(object); | Using update(), the Knowledge Helper will look up the facthandle via an identity check for the passed object. (If you provide Property Change Listeners to your Java beans that you are inserting into the engine, you can avoid the need to call update() when the object changes.) After a fact's field values have changed you must call update before changing another fact, or you will cause problems with the indexing within the rule engine. The modify keyword avoids this problem. |
| insert(new object()); | Places a new object of your creation into the Working Memory. |
| insertLogical(new object()); | Similar to insert, but the object will be automatically retracted when there are no more facts to support the truth of the currently firing rule. |
| retract(handle); | Removes an object from Working Memory. |
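The following consequence is an illustrative sketch (the Invoice, Reminder, and EscalationTask types are hypothetical) showing several of these calls used together:

rule "escalate unpaid invoice"
when
    $i : Invoice( paid == false, overdueDays > 30 )
    $r : Reminder( invoice == $i )
then
    insert( new EscalationTask( $i ) );        // place a new fact into Working Memory
    retract( $r );                             // the old reminder is no longer needed
    modify( $i ) { setStatus( "ESCALATED" ) }  // structured update of an existing fact
end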
8.12.39. Convenience Methods using the Drools Variable
- The call
drools.halt() terminates rule execution immediately. This is required for returning control to the point whence the current session was put to work with fireUntilHalt().
- Methods insert(Object o), update(Object o) and retract(Object o) can be called on drools as well, but due to their frequent use they can be called without the object reference.
- drools.getWorkingMemory() returns the WorkingMemory object.
- drools.setFocus( String s ) sets the focus to the specified agenda group.
- drools.getRule().getName(), called from a rule's RHS, returns the name of the rule.
- drools.getTuple() returns the Tuple that matches the currently executing rule, and drools.getActivation() delivers the corresponding Activation. (These calls are useful for logging and debugging purposes.) A combined sketch follows this list.
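The following consequence is a small sketch (the Alarm fact and the agenda group name are illustrative) that combines some of these calls:

rule "log and stop on critical alarm"
when
    $a : Alarm( severity == "CRITICAL" )
then
    System.out.println( "Fired rule: " + drools.getRule().getName() );
    drools.setFocus( "emergency" );  // hand control to the "emergency" agenda group
    drools.halt();                   // terminate rule execution immediately
end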
8.12.40. Convenience Methods using the Kcontext Variable
- The call
kcontext.getKieRuntime().halt() terminates rule execution immediately.
- The accessor getAgenda() returns a reference to the session's Agenda, which in turn provides access to the various rule groups: activation groups, agenda groups, and rule flow groups. A fairly common paradigm is the activation of some agenda group, which could be done with the lengthy call:
// give focus to the agenda group CleanUp kcontext.getKieRuntime().getAgenda().getAgendaGroup( "CleanUp" ).setFocus();
(You can achieve the same using drools.setFocus( "CleanUp" ).)
- To run a query, you call getQueryResults(String query), whereupon you may process the results.
- A set of methods dealing with event management lets you add and remove event listeners for the Working Memory and the Agenda.
- Method getKieBase() returns the KieBase object, the backbone of all the Knowledge in your system, and the originator of the current session.
- You can manage globals with setGlobal(...), getGlobal(...) and getGlobals().
- Method getEnvironment() returns the runtime's Environment. A short sketch using the kcontext variable follows this list.
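A short sketch of kcontext in a consequence (the BatchEnd fact and the CleanUp agenda group are illustrative only):

rule "clean up at end of batch"
when
    BatchEnd()
then
    System.out.println( "Running " + kcontext.getRule().getName() );
    kcontext.getKieRuntime().getAgenda().getAgendaGroup( "CleanUp" ).setFocus();
    kcontext.getKieRuntime().halt();  // as described above, stops rule execution
end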
8.12.41. The Modify Statement
Table 8.11. The Modify Statement
| Name | Description | Syntax | Example |
|---|---|---|---|
| modify | This provides a structured approach to fact updates. It combines the update operation with a number of setter calls to change the object's fields. | The parenthesized <fact-expression> must yield a fact object reference. The expression list in the block should consist of setter calls for the given object, to be written without the usual object reference, which is automatically prepended by the compiler. | rule "modify stilton" when $stilton : Cheese(type == "stilton") then modify( $stilton ){ setPrice( 20 ), setAge( "overripe" ) } end |
8.12.42. Query Examples
Note
ksession.getQueryResults("name"), where "name" is the query's name. This returns a list of query results, which allow you to retrieve the objects that matched the query.
- Query for people over the age of 30
query "people over the age of 30" person : Person( age > 30 ) end- Query for people over the age of X, and who live in Y
query "people over the age of x" (int x, String y) person : Person( age > x, location == y ) end
8.12.43. QueryResults Example
QueryResults results = ksession.getQueryResults( "people over the age of 30" );
System.out.println( "we have " + results.size() + " people over the age of 30" );
System.out.println( "These people are are over 30:" );
for ( QueryResultsRow row : results ) {
Person person = ( Person ) row.get( "person" );
System.out.println( person.getName() + "\n" );
}
8.12.44. Queries Calling Other Queries
Note
8.12.45. Queries Calling Other Queries Example
- Query calling another query
declare Location thing : String location : String end query isContainedIn( String x, String y ) Location(x, y;) or ( Location(z, y;) and ?isContainedIn(x, z;) ) end
- Using live queries to reactively receive changes over time from query results
query isContainedIn( String x, String y ) Location(x, y;) or ( Location(z, y;) and isContainedIn(x, z;) ) end rule look when Person( $l : likes ) isContainedIn( $l, 'office'; ) then insertLogical( $l + ' is in the office' ); end
8.12.46. Unification for Derivation Queries
To leave an argument unbound, pass the constant org.drools.runtime.rule.Variable.v. (You must use v and not an alternative instance of Variable.) These are referred to as out arguments.
Note
8.13. Searching the Working Memory using Query
8.13.1. Queries
To retrieve the value of a bound variable from a query result row, use the get method with the binding variable's name as its argument. If the binding refers to a fact object, its FactHandle can be retrieved by calling getFactHandle, again with the variable's name as the parameter. Illustrated below is a Query example:
QueryResults results =
ksession.getQueryResults( "my query", new Object[] { "string" } );
for ( QueryResultsRow row : results ) {
System.out.println( row.get( "varName" ) );
}
8.13.2. Live Queries
The dispose method terminates the query and discontinues this reactive scenario.
8.13.3. ViewChangedEventListener Implementation Example
final List updated = new ArrayList();
final List removed = new ArrayList();
final List added = new ArrayList();
ViewChangedEventListener listener = new ViewChangedEventListener() {
public void rowUpdated(Row row) {
updated.add( row.get( "$price" ) );
}
public void rowRemoved(Row row) {
removed.add( row.get( "$price" ) );
}
public void rowAdded(Row row) {
added.add( row.get( "$price" ) );
}
};
// Open the LiveQuery
LiveQuery query = ksession.openLiveQuery( "cars",
new Object[] { "sedan", "hatchback" },
listener );
...
...
query.dispose(); // calling dispose to terminate the live query
Note
8.14. Domain Specific Languages (DSLs)
8.14.1. The DSL Editor
Note
8.14.2. Using DSLs
8.14.3. DSL Example
Table 8.12. DSL Example
| Example | Description |
|---|---|
| [when]Something is {colour}=Something(colour=="{colour}") | [when] indicates the scope of the expression (that is, whether it is valid for the LHS or the RHS of a rule). The part after the bracketed keyword is the expression that you use in the rule. The part to the right of the equal sign ("=") is the mapping of the expression into the rule language. The form of this string depends on its destination, RHS or LHS. If it is for the LHS, then it ought to be a term according to the regular LHS syntax; if it is for the RHS then it might be a Java statement. |
8.14.4. How the DSL Parser Works
- The DSL extracts the string values appearing where the expression contains variable names in brackets.
- The values obtained from these captures are interpolated wherever that name occurs on the right hand side of the mapping.
- The interpolated string replaces whatever was matched by the entire expression in the line of the DSL rule file.
Note
8.14.5. The DSL Compiler
8.14.6. DSL Syntax Examples
Table 8.13. DSL Syntax Examples
| Name | Description | Example |
|---|---|---|
| Quotes | Use quotes for textual data that a rule editor may want to enter. You can also enclose the capture with words to ensure that the text is correctly matched. |
[when]something is "{color}"=Something(color=="{color}")
[when]another {state} thing=OtherThing(state=="{state}")
|
| Braces | In a DSL mapping, the braces "{" and "}" should only be used to enclose a variable definition or reference, resulting in a capture. If they should occur literally, either in the expression or within the replacement text on the right hand side, they must be escaped with a preceding backslash ("\"). |
[then]do something= if (foo) \{ doSomething(); \}
|
| Mapping with correct syntax example | n/a |
# This is a comment to be ignored.
[when]There is a person with name of "{name}"=Person(name=="{name}")
[when]Person is at least {age} years old and lives in "{location}"=
Person(age >= {age}, location=="{location}")
[then]Log "{message}"=System.out.println("{message}");
[when]And = and
|
| Expanded DSL example | n/a |
There is a person with name of "Kitty"
==> Person(name="Kitty")
Person is at least 42 years old and lives in "Atlanta"
==> Person(age >= 42, location="Atlanta")
Log "boo"
==> System.out.println("boo");
There is a person with name of "Bob" and Person is at least 30 years old and lives in "Utah"
==> Person(name="Bob") and Person(age >= 30, location="Utah")
|
Note
8.14.7. Chaining DSL Expressions
8.14.8. Adding Constraints to Facts
Table 8.14. Adding Constraints to Facts
| Name | Description | Example |
|---|---|---|
| Expressing LHS conditions |
The DSL facility allows you to add constraints to a pattern by a simple convention: if your DSL expression starts with a hyphen (minus character, "-") it is assumed to be a field constraint and, consequently, is added to the last pattern line preceding it.
In the example, the class Cheese has these fields: type, price, age and country. You can express some LHS condition in normal DRL.
|
Cheese(age < 5, price == 20, type=="stilton", country=="ch") |
| DSL definitions |
The DSL definitions given in this example result in three DSL phrases which may be used to create any combination of constraint involving these fields.
|
[when]There is a Cheese with=Cheese()
[when]- age is less than {age}=age<{age}
[when]- type is '{type}'=type=='{type}'
[when]- country equal to '{country}'=country=='{country}'
|
| "-" |
The parser will pick up a line beginning with "-" and add it as a constraint to the preceding pattern, inserting a comma when it is required.
| There is a Cheese with
- age is less than 42
- type is 'stilton'
Cheese(age<42, type=='stilton') |
| Defining DSL phrases |
Defining DSL phrases for various operators and even a generic expression that handles any field constraint reduces the amount of DSL entries.
|
[when][]is less than or equal to=<=
[when][]is less than=<
[when][]is greater than or equal to=>=
[when][]is greater than=>
[when][]is equal to===
[when][]equals===
[when][]There is a Cheese with=Cheese()
[when][]- {field:\w*} {operator} {value:\d*}={field} {operator} {value} |
| DSL definition rule | n/a |
There is a Cheese with - age is less than 42 - rating is greater than 50 - type equals 'stilton'
In this specific case, a phrase such as "is less than" is replaced by
<, and then the line matches the last DSL entry. This removes the hyphen, but the final result is still added as a constraint to the preceding pattern. After processing all of the lines, the resulting DRL text is:
Cheese(age<42, rating > 50, type=='stilton') |
Note
8.14.9. Tips for Developing DSLs
- Write representative samples of the rules your application requires and test them as you develop.
- Rules, both in DRL and in DSLR, refer to entities according to the data model representing the application data that should be subject to the reasoning process defined in rules.
- Writing rules is easier if most of the data model's types are facts.
- Mark variable parts as parameters. This provides reliable leads for useful DSL entries.
- You may postpone implementation decisions concerning conditions and actions during this first design phase by leaving certain conditional elements and actions in their DRL form by prefixing a line with a greater sign (">"). (This is also handy for inserting debugging statements.)
- New rules can be written by reusing the existing DSL definitions, or by adding a parameter to an existing condition or consequence entry.
- Keep the number of DSL entries small. Using parameters lets you apply the same DSL sentence for similar rule patterns or constraints.
8.14.10. DSL and DSLR Reference
- A line starting with "#" or "//" (with or without preceding white space) is treated as a comment. A comment line starting with "#/" is scanned for words requesting a debug option, see below.
- Any line starting with an opening bracket ("[") is assumed to be the first line of a DSL entry definition.
- Any other line is appended to the preceding DSL entry definition, with the line end replaced by a space.
8.14.11. The Make Up of a DSL Entry
- A scope definition, written as one of the keywords "when" or "condition", "then" or "consequence", "*" and "keyword", enclosed in brackets ("[" and "]"). This indicates whether the DSL entry is valid for the condition or the consequence of a rule, or both. A scope indication of "keyword" means that the entry has global significance, that is, it is recognized anywhere in a DSLR file.
- A type definition, written as a Java class name, enclosed in brackets. This part is optional unless the next part begins with an opening bracket. An empty pair of brackets is valid, too.
- A DSL expression consists of a (Java) regular expression, with any number of embedded variable definitions, terminated by an equal sign ("="). A variable definition is enclosed in braces ("{" and "}"). It consists of a variable name and two optional attachments, separated by colons (":"). If there is one attachment, it is a regular expression for matching text that is to be assigned to the variable. If there are two attachments, the first one is a hint for the GUI editor and the second one the regular expression.Note that all characters that are "magic" in regular expressions must be escaped with a preceding backslash ("\") if they should occur literally within the expression.
- The remaining part of the line after the delimiting equal sign is the replacement text for any DSLR text matching the regular expression. It may contain variable references, i.e., a variable name enclosed in braces. Optionally, the variable name may be followed by an exclamation mark ("!") and a transformation function, see below.Note that braces ("{" and "}") must be escaped with a preceding backslash ("\") if they should occur literally within the replacement string.
8.14.12. Debug Options for DSL Expansion
Table 8.15. Debug Options for DSL Expansion
| Word | Description |
|---|---|
| result | Prints the resulting DRL text, with line numbers. |
| steps | Prints each expansion step of condition and consequence lines. |
| keyword | Dumps the internal representation of all DSL entries with scope "keyword". |
| when | Dumps the internal representation of all DSL entries with scope "when" or "*". |
| then | Dumps the internal representation of all DSL entries with scope "then" or "*". |
| usage | Displays a usage statistic of all DSL entries. |
8.14.13. DSL Definition Example
# Comment: DSL examples
#/ debug: display result and usage
# keyword definition: replaces "regula" by "rule"
[keyword][]regula=rule
# conditional element: "T" or "t", "a" or "an", convert matched word
[when][][Tt]here is an? {entity:\w+}=
${entity!lc}: {entity!ucfirst} ()
# consequence statement: convert matched word, literal braces
[then][]update {entity:\w+}=modify( ${entity!lc} )\{ \}
8.14.14. Transformation of a DSLR File
- The text is read into memory.
- Each of the "keyword" entries is applied to the entire text. The regular expression from the keyword definition is modified by replacing white space sequences with a pattern matching any number of white space characters, and by replacing variable definitions with a capture made from the regular expression provided with the definition, or with the default (".*?"). Then, the DSLR text is searched exhaustively for occurrences of strings matching the modified regular expression. Substrings of a matching string corresponding to variable captures are extracted and replace variable references in the corresponding replacement text, and this text replaces the matching string in the DSLR text.
- Sections of the DSLR text between "when" and "then", and "then" and "end", respectively, are located and processed in a uniform manner, line by line, as described below. For a line, each DSL entry pertaining to the line's section is taken in turn, in the order it appears in the DSL file. Its regular expression part is modified: white space is replaced by a pattern matching any number of white space characters; variable definitions with a regular expression are replaced by a capture with this regular expression, its default being ".*?". If the resulting regular expression matches all or part of the line, the matched part is replaced by the suitably modified replacement text. Modification of the replacement text is done by replacing variable references with the text corresponding to the regular expression capture. This text may be modified according to the string transformation function given in the variable reference; see below for details. If there is a variable reference naming a variable that is not defined in the same entry, the expander substitutes a value bound to a variable of that name, provided it was defined in one of the preceding lines of the current rule.
- If a DSLR line in a condition is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a pattern CE, that is, a type name followed by a pair of parentheses. If this pair is empty, the expanded line (which should contain a valid constraint) is simply inserted, otherwise a comma (",") is inserted beforehand. If a DSLR line in a consequence is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a "modify" statement, ending in a pair of braces ("{" and "}"). If this pair is empty, the expanded line (which should contain a valid method call) is simply inserted, otherwise a comma (",") is inserted beforehand.
Note
8.14.15. String Transformation Functions
Table 8.16. String Transformation Functions
| Name | Description |
|---|---|
| uc | Converts all letters to upper case. |
| lc | Converts all letters to lower case. |
| ucfirst | Converts the first letter to upper case, and all other letters to lower case. |
| num | Extracts all digits and "-" from the string. If the last two digits in the original string are preceded by "." or ",", a decimal period is inserted in the corresponding position. |
| a?b/c | Compares the string with string a, and if they are equal, replaces it with b, otherwise with c. But c can be another triplet a, b, c, so that the entire structure is, in fact, a translation table. |
8.14.16. Stringing DSL Transformation Functions
Table 8.17. Stringing DSL Transformation Functions
| Name | Description | Example |
|---|---|---|
| .dsl |
A file containing a DSL definition is customarily given the extension
.dsl. It is passed to the Knowledge Builder with ResourceType.DSL. For a file using DSL definition, the extension .dslr should be used. The Knowledge Builder expects ResourceType.DSLR. The IDE, however, relies on file extensions to correctly recognize and work with your rules file.
|
# definitions for conditions
[when][]There is an? {entity}=${entity!lc}: {entity!ucfirst}()
[when][]- with an? {attr} greater than {amount}={attr} <= {amount!num}
[when][]- with a {what} {attr}={attr} {what!positive?>0/negative?<0/zero?==0/ERROR}
|
| DSL passing |
The DSL must be passed to the Knowledge Builder ahead of any rules file using the DSL.
For parsing and expanding a DSLR file the DSL configuration is read and supplied to the parser. Thus, the parser can "recognize" the DSL expressions and transform them into native rule language expressions.
|
KnowledgeBuilder kBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource dsl = ResourceFactory.newClassPathResource( dslPath, getClass() ); kBuilder.add( dsl, ResourceType.DSL ); Resource dslr = ResourceFactory.newClassPathResource( dslrPath, getClass() ); kBuilder.add( dslr, ResourceType.DSLR ); |
Chapter 9. Using JBoss Developer Studio to Create and Test Rules
- Simple wizards for rule and project creation
- Content assistance for generating the basic rule structure. For example, if you open a .drl file in the JBoss Developer Studio editor, type ru, and press Ctrl+Space, the template rule structure is created.
- Error highlighting
- IntelliSense code completion
- Outline view to display an outline of your structured rule project
- Debug perspective for Rules/Process debugging
- Rete tree view to display Rete network
- Editor for modifying business process diagram
- Support for unit testing via JUnit and TestNG
9.1. JBoss Developer Studio Drools Perspective
- Drools: allows you to work with JBoss BRMS specific resources
- Business Central Repository Exploring
- jBPM: allows you to work with JBoss BPM Suite resources
9.2. JBoss BRMS Runtimes
9.2.1. Defining a JBoss BRMS Runtime
Procedure 9.1. Task
- Extract the runtime jar files located in the
jboss-brms-engine.zip archive of the JBoss BRMS Generic Deployable zip archive (not the EAP 6 deployable zip archive) available from the Red Hat Customer Portal.
- From the JBoss Developer Studio menu, go to → . The Preferences dialog opens, displaying all your preferences.
- Navigate to → .
- To define a new Drools runtime, click the add button. The Drools Runtime dialog opens.
- In the Drools Runtime dialog, you have the following options to provide the name for your runtime and its location on your file system:
- Use the default JAR files included in the Drools Eclipse plug-in to create a new Drools runtime automatically:
- Click the button.
- Browse and select the folder on your file system where you would like this runtime to be created. The plug-in automatically copies all required dependencies to the specified folder.
- Use one specific release of the Drools project:
- Create a folder on your file system and copy all the necessary Drools libraries and dependencies into it.
- Provide a name for your runtime in the Drools Runtime dialog in the Name field and browse to the location of this folder containing all the required JARs in the Path field.
- Click . The runtime appears in your table of installed Drools runtimes.
- Click the checkbox in front of the newly created runtime to make it the default Drools runtime. This default Drools runtime will be used as the runtime of all your Drools projects that do not have a project-specific runtime selected.
- Restart JBoss Developer Studio if you have changed the default runtime to ensure that all the projects that are using the default runtime update their classpath accordingly.
9.2.2. Selecting a Runtime for Your JBoss BRMS Project
Procedure 9.2. Task
- Create a new Drools project and, in the final step of the New Drools Project wizard, uncheck the Use default Drools runtime checkbox.
- Click the Configure workspace settings ... link. The workspace preferences showing the currently installed Drools runtimes opens.
- Click to add new runtimes.
9.2.3. Changing the Runtime of Your JBoss BRMS Project
Procedure 9.3. Task
- In the Drools perspective, right-click the project and select Properties. The project properties dialog opens.
- Navigate and select the Drools category.
- Check the Enable project specific settings checkbox and select the appropriate runtime from the drop-down box. If you click the Configure workspace settings ... link, the workspace preferences showing the currently installed Drools runtimes opens. You can add new runtimes there if required. If you uncheck the Enable project specific settings checkbox, the project uses the default runtime as defined in your global preferences.
- Click .
9.2.4. Configuring the JBoss BRMS Server
Procedure 9.4. Configure the Server
- Open the Drools view by selecting → → and select Drools and click OK.
- Add the server view by selecting → → and select → .
- Open the server menu by right clicking the Servers panel and select → .
- Define the server by selecting → and clicking Next.
- Set the home directory by clicking the Browse button. Navigate to and select the installation directory for JBoss EAP which has JBoss BRMS installed.
- Provide a name for the server in the Name field, ensure that the configuration file is set, and click Finish.
9.3. Exploring a JBoss BRMS Application
- Facts, which are a set of Java class files (POJOs)
- Rules that operate on the facts
- Drools library (jar files) for executing the rules
- src/main/java that stores the class files (facts).
- src/main/resources/rules that stores the .drl files (rules).
- src/main/resources/process that stores the .bpmn files (processes).
- src/main/resources/Drools Library that holds the generated .jar files required for rule execution.
9.4. Creating a JBoss BRMS Project
Procedure 9.5. Task
- Go to → → . A New Project wizard opens.
- Navigate to → . A New Drools Project wizard opens.
- On the New Drools Project wizard, click .
- Enter a name for your Drools project and click .
- Check the required checkboxes with default artifacts you need in your project, and click . The Drools Runtime wizard opens.
- Select a Drools runtime. If you have not set up a Drools runtime, click the Configure Workspace Settings... link. If you click this link, the workspace preferences showing the currently installed Drools runtimes opens. Add new runtimes there and click .
- Select the Drools project version from the Select code compatible with: option.
- Provide values for the following:
- groupid: The id of the project's group or the root of your project's Java package name.
- artifactid: The id of the artifact (project).
- version: The version of the artifact under the specified group.
- Click .
- A sample rule file Sample.drl in the src/main/resources/rules folder.
- A sample process file Sample.bpmn in the src/main/resources/process folder.
- An example Java file DroolsTest.java in the src/main/java folder to execute the rules in the Drools engine in the com.sample package (a sketch of this kind of class follows this list).
- All the JAR files necessary for execution in the src/main/resources/DroolsLibrary folder.
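Such an example class typically loads the classpath KIE container, opens a session, inserts a fact, and fires the rules. The following is a minimal sketch of that pattern; the session name ksession-rules and the Message fact class are illustrative assumptions, not necessarily what the wizard generates verbatim.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class DroolsTest {
    public static void main(String[] args) {
        // load the KIE container from the project classpath
        KieServices ks = KieServices.Factory.get();
        KieContainer kContainer = ks.getKieClasspathContainer();
        // create a session defined in kmodule.xml (session name is an assumption)
        KieSession kSession = kContainer.newKieSession("ksession-rules");
        // insert a sample fact and fire the rules
        Message message = new Message();          // hypothetical fact class
        message.setMessage("Hello World");
        kSession.insert(message);
        kSession.fireAllRules();
        kSession.dispose();
    }
}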
9.5. Using Textual Rule Editor
Rules are stored in text files with the .drl (or .rule) extension. Usually these files contain related rules, but it is also possible to have rules in individual files, grouped by being in the same package namespace. These DRL files are plain text files. Even if your rule group is using a domain specific language (DSL), the rules are still stored as plain text. This allows easy management of rules and versions.
- Content assistance: The pop-up content assistance helps you quickly create rule attributes such as functions, import statements, and package declarations. You can invoke pop-up content assistance by pressing Ctrl+Space.
- Code folding: Code folding allows you to hide and show sections of a file using the icons with minus and plus on the left vertical line of the editor.
- Synchronization with outline view: The text editor is in sync with the structure of the rules in the outline view as soon as you save your rules. The outline view provides a quick way of navigating around rules by name, even in a file containing hundreds of rules. The items are sorted alphabetically by default.
9.6. Red Hat JBoss BRMS Views
- Working Memory View
- Shows all elements in the Red Hat JBoss BRMS working memory.
- Agenda View
- Shows all elements on the agenda. For each rule on the agenda, the rule name and bound variables are shown.
- Global Data View
- Shows all global data currently defined in the Red Hat JBoss BRMS working memory.
- Audit View
- Can be used to display audit logs containing events that were logged during the execution of a rules engine, in tree form.
- Rete View
- This shows you the current Rete network for your DRL file. You display it by clicking the "Rete Tree" tab at the bottom of the DRL Editor window. With the Rete network visualization open, you can use drag-and-drop on individual nodes to arrange an optimal network overview. You may also select multiple nodes by dragging a rectangle over them so the entire group can be moved around.
Note
The Rete view works only in projects where the rule builder is set in the project's properties. For other projects, you can use a workaround: set up a JBoss BRMS project next to your current project and transfer the libraries and the DRLs you want to inspect with the Rete view. Click on the right tab at the bottom of the DRL Editor, then click "Generate Rete View".
9.7. Debugging Rules
- Drools breakpoints are only enabled if you debug your application as a Drools Application. To do this you should perform one of two actions:
- Select the main class of your application. Right-click on it and select → .
- Alternatively, select → to open a new dialog window for creating, managing, and running debug configurations. Select the Drools Application item in the left tree and click the button (leftmost icon in the toolbar above the tree). This creates a new configuration with a number of the properties already filled in based on the main class you selected in the beginning. All properties shown here are the same as for any standard Java program.
Note
Remember to change the name of your debug configuration to something meaningful.
- Click the button at the bottom to start debugging your application.
- After enabling debugging, the application starts executing and halts if any breakpoint is encountered. This can be a Drools rule breakpoint or any other standard Java breakpoint. Whenever a Drools rule breakpoint is encountered, the corresponding .drl file is opened and the active line is highlighted. The Variables view also contains all rule parameters and their values. You can then use the default Java debug actions to decide what to do next (resume, terminate, step over, and so on). The debug views can also be used to determine the contents of the working memory and agenda at that time (the currently executing working memory is automatically shown).
9.7.1. Creating Breakpoints
- To create breakpoints in the Package Explorer view or Navigator view of the JBoss BRMS perspective, double-click the selected .drl file to open it in the editor.
- You can add and remove rule breakpoints in the .drl files in two ways:
- Double-click the rule in the Rule editor at the line where you want to add a breakpoint. A breakpoint can be removed by double-clicking the rule once more.
Note
Rule breakpoints can only be created in the consequence of a rule. Double-clicking on a line where no breakpoint is allowed does nothing.
- Right-click the ruler and select the action in the context menu. Choosing this action adds a breakpoint at the selected line or removes it if there is one already.
- The Debug perspective contains a Breakpoints view which can be used to see all defined breakpoints, get their properties, enable/disable and remove them. You can switch to it by clicking → → → .
Part III. All About Processes
Chapter 10. Getting Started with Processes
10.1. The JBoss BPM Suite Engine
- Solid, stable core engine for executing your process instances.
- Native support for the latest BPMN 2.0 specification for modeling and executing business processes.
- Strong focus on performance and scalability.
- Light-weight. You can deploy it on almost any device that supports a simple Java Runtime Environment. It does not require any web container at all.
- Pluggable persistence with a default JPA implementation (Optional).
- Pluggable transaction support with a default JTA implementation.
- Implemented as a generic process engine, so it can be extended to support new node types or other process languages.
- Listeners to be notified of various events.
- Ability to migrate running process instances to a new version of their process definition.
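Because the engine is light-weight and needs only a Java runtime, it can be embedded directly in a plain Java application. The following is a minimal sketch, assuming a project whose kmodule.xml defines a default KIE session and a BPMN2 process with the illustrative ID com.sample.hello:
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
...
// load the KIE container from the application classpath and open the default session
KieContainer kContainer = KieServices.Factory.get().getKieClasspathContainer();
KieSession ksession = kContainer.newKieSession();
// start a process instance by its BPMN2 process id (illustrative id)
ksession.startProcess("com.sample.hello");
ksession.dispose();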
10.2. Integrating BPM Suite Engine With Other Services
- The human task serviceThe human task service helps manage human tasks when human actors need to participate in the process. It is fully pluggable and the default implementation is based on the WS-HumanTask specification and manages the life cycle of the tasks, task lists, task forms, and some more advanced features like escalation, delegation, and rule-based assignments.
- The history logThe history log stores all information about the execution of all the processes in the engine. This is necessary if you need access to historic information as runtime persistence only stores the current state of all active process instances. The history log can be used to store all current and historic states of active and completed process instances. It can be used to query for any information related to the execution of process instances, for monitoring, and analysis.
Chapter 11. Working with Processes
11.1. BPMN 2.0 Notation
11.1.1. Business Process Model and Notation (BPMN) 2.0 Specification
Table 11.1. BPMN 2.0 Supported Elements and Attributes
| Element | Supported attributes | Supported elements | Extension attributes | Extension elements |
|---|---|---|---|---|
| definitions | rootElement BPMNDiagram | |||
| process | processType isExecutable name id | property laneSet flowElement | packageName adHoc version | import global |
| sequenceFlow | sourceRef targetRef isImmediate name id | conditionExpression | priority | |
| interface | name id | operation | ||
| operation | name id | inMessageRef | ||
| laneSet | lane | |||
| lane | name id | flowNodeRef | ||
| import* | name | |||
| global* | identifier type | |||
| Events | ||||
| startEvent | name id | dataOutput dataOutputAssociation outputSet eventDefinition | x y width height | |
| endEvent | name id | dataInput dataInputAssociation inputSet eventDefinition | x y width height | |
| intermediateCatchEvent | name id | dataOutput dataOutputAssociation outputSet eventDefinition | x y width height | |
| intermediateThrowEvent | name id | dataInput dataInputAssociation inputSet eventDefinition | x y width height | |
| boundaryEvent | cancelActivity attachedToRef name id | eventDefinition | x y width height | |
| terminateEventDefinition | ||||
| compensateEventDefinition | activityRef | documentation extensionElements | ||
| conditionalEventDefinition | condition | |||
| errorEventDefinition | errorRef | |||
| error | errorCode id | |||
| escalationEventDefinition | escalationRef | |||
| escalation | escalationCode id | |||
| messageEventDefinition | messageRef | |||
| message | itemRef id | |||
| signalEventDefinition | signalRef | |||
| timerEventDefinition | timeCycle timeDuration | |||
| Activities | ||||
| task | name id | ioSpecification dataInputAssociation dataOutputAssociation | taskName x y width height | |
| scriptTask | scriptFormat name id | script | x y width height | |
| script | text[mixed content] | |||
| userTask | name id | ioSpecification dataInputAssociation dataOutputAssociation resourceRole | x y width height | onEntry-script onExit-script |
| potentialOwner | resourceAssignmentExpression | |||
| resourceAssignmentExpression | expression | |||
| businessRuleTask | name id | x y width height ruleFlowGroup | onEntry-script onExit-script | |
| manualTask | name id | x y width height | onEntry-script onExit-script | |
| sendTask | messageRef name id | ioSpecification dataInputAssociation | x y width height | onEntry-script onExit-script |
| receiveTask | messageRef name id | ioSpecification dataOutputAssociation | x y width height | onEntry-script onExit-script |
| serviceTask | operationRef name id | ioSpecification dataInputAssociation dataOutputAssociation | x y width height | onEntry-script onExit-script |
| subProcess | name id | flowElement property loopCharacteristics | x y width height | |
| adHocSubProcess | cancelRemainingInstances name id | completionCondition flowElement property | x y width height | |
| callActivity | calledElement name id | ioSpecification dataInputAssociation dataOutputAssociation | x y width height waitForCompletion independent | onEntry-script onExit-script |
| multiInstanceLoopCharacteristics | loopDataInputRef inputDataItem | |||
| onEntry-script* | scriptFormat | script | ||
| onExit-script* | scriptFormat | script | ||
| Gateways | ||||
| parallelGateway | gatewayDirection name id | x y width height | ||
| eventBasedGateway | gatewayDirection name id | x y width height | ||
| exclusiveGateway | default gatewayDirection name id | x y width height | ||
| inclusiveGateway | default gatewayDirection name id | x y width height | ||
| Data | ||||
| property | itemSubjectRef id | |||
| dataObject | itemSubjectRef id | |||
| itemDefinition | structureRef id | |||
| ioSpecification | dataInput dataOutput inputSet outputSet | |||
| dataInput | name id | |||
| dataInputAssociation | sourceRef targetRef assignment | |||
| dataOutput | name id | |||
| dataOutputAssociation | sourceRef targetRef assignment | |||
| inputSet | dataInputRefs | |||
| outputSet | dataOutputRefs | |||
| assignment | from to | |||
| formalExpression | language | text[mixed content] | ||
| BPMNDI | ||||
| BPMNDiagram | BPMNPlane | |||
| BPMNPlane | bpmnElement | BPMNEdge BPMNShape | ||
| BPMNShape | bpmnElement | Bounds | ||
| BPMNEdge | bpmnElement | waypoint | ||
| Bounds | x y width height | |||
| waypoint | x y | |||
11.1.2. BPMN 2.0 Process Example
<?xml version="1.0" encoding="UTF-8"?>
<definitions id="Definition"
targetNamespace="http://www.example.org/MinimalExample"
typeLanguage="http://www.java.com/javaTypes"
expressionLanguage="http://www.mvel.org/2.0"
xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xs:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
xmlns:tns="http://www.jboss.org/drools">
<process processType="Private" isExecutable="true" id="com.sample.HelloWorld" name="Hello World" >
<!-- nodes -->
<startEvent id="_1" name="StartProcess" />
<scriptTask id="_2" name="Hello" >
<script>System.out.println("Hello World");</script>
</scriptTask>
<endEvent id="_3" name="EndProcess" >
<terminateEventDefinition/>
</endEvent>
<!-- connections -->
<sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
<sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />
</process>
<bpmndi:BPMNDiagram>
<bpmndi:BPMNPlane bpmnElement="Minimal" >
<bpmndi:BPMNShape bpmnElement="_1" >
<dc:Bounds x="15" y="91" width="48" height="48" />
</bpmndi:BPMNShape>
<bpmndi:BPMNShape bpmnElement="_2" >
<dc:Bounds x="95" y="88" width="83" height="48" />
</bpmndi:BPMNShape>
<bpmndi:BPMNShape bpmnElement="_3" >
<dc:Bounds x="258" y="86" width="48" height="48" />
</bpmndi:BPMNShape>
<bpmndi:BPMNEdge bpmnElement="_1-_2" >
<di:waypoint x="39" y="115" />
<di:waypoint x="75" y="46" />
<di:waypoint x="136" y="112" />
</bpmndi:BPMNEdge>
<bpmndi:BPMNEdge bpmnElement="_2-_3" >
<di:waypoint x="136" y="112" />
<di:waypoint x="240" y="240" />
<di:waypoint x="282" y="110" />
</bpmndi:BPMNEdge>
</bpmndi:BPMNPlane>
</bpmndi:BPMNDiagram>
</definitions>
11.1.3. Supported Elements and Attributes in BPMN 2.0 Specification
- Flow objects
- Events
- Start Event (None, Conditional, Signal, Message, Timer)
- End Event (None, Terminate, Error, Escalation, Signal, Message, Compensation)
- Intermediate Catch Event (Signal, Timer, Conditional, Message)
- Intermediate Throw Event (None, Signal, Escalation, Message, Compensation)
- Non-interrupting Boundary Event (Escalation, Signal, Timer, Conditional, Message)
- Interrupting Boundary Event (Escalation, Error, Signal, Timer, Conditional, Message, Compensation)
- Activities
- Script Task
- Task
- Service Task
- User Task
- Business Rule Task
- Manual Task
- Send Task
- Receive Task
- Reusable Sub-Process (Call Activity)
- Embedded Sub-Process
- Event Sub-Process
- Ad-Hoc Sub-Process
- Data-Object
- Gateways
- Diverging
- Exclusive
- Inclusive
- Parallel
- Event-Based
- Converging
- Exclusive
- Inclusive
- Parallel
- Lanes
- Data
- Java type language
- Process properties
- Embedded Sub-Process properties
- Activity properties
- Connecting objects
- Sequence flow
11.1.4. Loading and Executing a BPMN2 Process Into Repository
import org.kie.api.KieServices;
import org.kie.api.builder.KieRepository;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.KieBuilder;
import org.kie.api.runtime.KieContainer;
import org.kie.api.KieBase;
import org.kie.internal.io.ResourceFactory;
...
KieServices kServices = KieServices.Factory.get();
KieRepository kRepository = kServices.getRepository();
KieFileSystem kFileSystem = kServices.newKieFileSystem();
kFileSystem.write(ResourceFactory.newClassPathResource("MyProcess.bpmn"));
KieBuilder kBuilder = kServices.newKieBuilder(kFileSystem);
kBuilder.buildAll();
KieContainer kContainer = kServices.newKieContainer(kRepository.getDefaultReleaseId());
KieBase kBase = kContainer.getKieBase();
11.2. What Comprises a Business Process
- The header part that comprises global elements such as the name of the process, imports, and variables.
- The nodes section that contains all the different nodes that are part of the process.
- The connections section that links these nodes to each other to create a flow chart.

Figure 11.1. A Business Process
- Using Business Central or JBoss Developer Studio with the BPMN2 modeler
- As an XML file, according to the XML process format as defined in the XML Schema Definition in the BPMN 2.0 specification.
- By directly creating a process using the Process API.
Note
11.2.1. Process Nodes
Event elements represent a particular event that occurs or can occur during process runtime.
Activities represent relatively atomic pieces of work that need to be performed as part of the Process execution.
Gateways represent forking or merging of workflows during Process execution.
11.2.2. Process Properties
- ID: The unique ID of the process
- Name: The display name of the process
- Version: The version number of the process
- Package: The package (namespace) the process is defined in
- Variables (optional): Variables to store data during the execution of your process
- Swimlanes: Swimlanes used in the process for assigning human tasks
11.2.3. Defining Processes Using XML
- The "process" elementThis is the top part of the process XML that contains the definition of the different nodes and their properties. The process XML consist of exactly one <process> element. This element contains parameters related to the process (its type, name, id and package name), and consists of three subsections: a header section (where process-level information like variables, globals, imports and lanes can be defined), a nodes section that defines each of the nodes in the process, and a connections section that contains the connections between all the nodes in the process.
- The "BPMNDiagram" elementThis is the lower part of the process XML that contains all graphical information, like the location of the nodes. In the nodes section, there is a specific element for each node, defining the various parameters and, possibly, sub-elements for that node type.
<?xml version="1.0" encoding="UTF-8"?>
<definitions id="Definition"
targetNamespace="http://www.jboss.org/drools"
typeLanguage="http://www.java.com/javaTypes"
expressionLanguage="http://www.mvel.org/2.0"
xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"Rule Task
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
xmlns:g="http://www.jboss.org/drools/flow/gpd"
xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
xmlns:tns="http://www.jboss.org/drools">
<process processType="Private" isExecutable="true" id="com.sample.hello" name="Hello Process" >
<!-- nodes -->
<startEvent id="_1" name="Start" />
<scriptTask id="_2" name="Hello" >
<script>System.out.println("Hello World");</script>
</scriptTask>
<endEvent id="_3" name="End" >
<terminateEventDefinition/>
</endEvent>
<!-- connections -->
<sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
<sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />
</process>
<bpmndi:BPMNDiagram>
<bpmndi:BPMNPlane bpmnElement="com.sample.hello" >
<bpmndi:BPMNShape bpmnElement="_1" >
<dc:Bounds x="16" y="16" width="48" height="48" />
</bpmndi:BPMNShape>
<bpmndi:BPMNShape bpmnElement="_2" >
<dc:Bounds x="96" y="16" width="80" height="48" />
</bpmndi:BPMNShape>
<bpmndi:BPMNShape bpmnElement="_3" >
<dc:Bounds x="208" y="16" width="48" height="48" />
</bpmndi:BPMNShape>
<bpmndi:BPMNEdge bpmnElement="_1-_2" >
<di:waypoint x="40" y="40" />
<di:waypoint x="136" y="40" />
</bpmndi:BPMNEdge>
<bpmndi:BPMNEdge bpmnElement="_2-_3" >
<di:waypoint x="136" y="40" />
<di:waypoint x="232" y="40" />
</bpmndi:BPMNEdge>
</bpmndi:BPMNPlane>
</bpmndi:BPMNDiagram>
</definitions>
11.3. Activities
- Task: Use this activity type in your business process to implement a single task which cannot be further broken into subtasks.
- Subprocess: Use this activity type in your business process when you have a group of tasks to be processed in sequential order to achieve a single result.
11.3.1. Tasks
Table 11.2. Types of Tasks in the Object Library
| Task | Icon | Description |
|---|---|---|
| User | | Use the User task activity type in your business process when you require a human actor to execute your task. |
| Send | | Use the Send task to send a message. |
| Receive | | Use the Receive task in your process when your process is relying on a specific message to continue. |
| Manual | | Use the Manual task when you require a task to be executed by a human actor that need not be managed by your process. |
| Service | | Use the Service task in your business process for tasks that use a service (such as a web service) that must execute outside the process engine. |
| Business Rule | | Use the Business Rule task when you want a set of rules to be executed as a task in your business process flow. |
| Script | | Use the Script task in your business process when you want a script to be executed within the task. |
| None | | A None task type is an abstract, undefined task type. |
11.3.2. Subprocesses
Table 11.3. Types of Subprocesses in the Object Library
| Subprocess | Icon | Description |
|---|---|---|
| Reusable | | Use the Reusable subprocess to invoke another process from the parent process. |
| Multiple Instances | | Use the Multiple Instances subprocess when you want to execute the contained subprocess elements multiple times. |
| Embedded | | Use the Embedded subprocess if you want a decomposable activity inside your process flow that encapsulates a part of your main process. |
| Ad-Hoc | | Use an Ad-Hoc subprocess when you want to execute activities inside your process for which the execution order is irrelevant. An Ad-Hoc subprocess is a group of activities that have no required sequence relationships. |
| Event | | Use the Event subprocess in your process flow when you want to handle events that occur within the boundary of a subprocess. |
11.4. Data
- Process-level variables can be set when starting a process by providing a map of parameters to the invocation of the startProcess method. These parameters will be set as variables on the process scope.
- Script actions can access variables directly simply by using the name of the variable as a local parameter in their script. For example, if the process defines a variable of type "org.jbpm.Person" in the process, a script in the process could access this directly:
// call method on the process variable "person"
person.setAge(10);
Changing the value of a variable in a script can be done through the knowledge context:kcontext.setVariable(variableName, value);
- Service tasks (and reusable sub-processes) can pass the value of process variables to the outside world (or another process instance) by mapping the variable to an outgoing parameter. For example, the parameter mapping of a service task could define that the value of the process variable x should be mapped to a task parameter y just before the service is invoked. You can also inject the value of the process variable into a hard-coded parameter String using #{expression}. For example, the description of a human task could be defined as the following:
You need to contact person #{person.getName()}
where person is a process variable. This expression is replaced with the actual name of the person when the service needs to be invoked. Similarly, results of a service (or reusable sub-process) can also be copied back to a variable using result mapping.
- Various other nodes can also access data. Event nodes, for example, can store the data associated with the event in a variable. Check the properties of the different node types for more information.
Globals can be set from the application using ksession.setGlobal(name, value), or from inside process scripts using kcontext.getKieRuntime().setGlobal(name, value).
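The list above describes how process variables and globals are supplied from code. The following is a minimal sketch, assuming an existing KieSession (ksession), an illustrative process ID com.sample.process that declares a person variable, a hypothetical Person class, and a hypothetical global named maxAge:
import java.util.HashMap;
import java.util.Map;
import org.kie.api.runtime.process.ProcessInstance;
...
// pass process-level variables when starting the process
Map<String, Object> params = new HashMap<String, Object>();
params.put("person", new Person("John", 30));   // hypothetical Person class and constructor
ProcessInstance processInstance = ksession.startProcess("com.sample.process", params);
// set a global that rules and constraints in the session can read
ksession.setGlobal("maxAge", 100);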
11.5. Events
11.5.1. Start Events
Table 11.4. Types of Start Events in the Object Library
| Event | Icon | Description |
|---|---|---|
| None | | Use the None start event when your process does not need a trigger to be initialized. |
| Message | | Use the Message start event when you require your process to start on receiving a particular message. |
| Timer | | Use the Timer start event when you require your process to initialize at a specific time, at specific points in time, or after a specific time span. |
| Escalation | | Use the Escalation start event in your subprocesses when you require your subprocess to initialize as a response to an escalation. |
| Conditional | | Use the Conditional start event to start a process instance based on a business condition. |
| Error | | Use the Error start event in a subprocess when you require your subprocess to trigger as a response to a specific error object. |
| Compensation | | Use the Compensation start event in a subprocess when you need to handle a compensation. |
| Signal | | Use the Signal start event to start a process instance based on specific signals received from other processes. |
11.5.2. End Events
Table 11.5. Types of End Events in the Object Library
| Event | Icon | Description |
|---|---|---|
| None | | Use the None end event to mark the end of your process or a subprocess flow. Note that this does not influence the workflow of any parallel subprocesses. |
| Message | | Use the Message end event to end your process flow with a message to an element in another process. An intermediate catch message event or a start message event in another process can catch this message to further process the flow. |
| Escalation | | Use the Escalation end event to mark the end of a process as a result of which the case in hand is escalated. This event creates an escalation signal that further triggers the escalation process. |
| Error | | Use the Error end event in your process or subprocess to end the process in an error state and throw a named error, which can be caught by a Catching Intermediate event. |
| Cancel | | Use the Cancel end event to end your process as canceled. Note that if your process comprises any compensations, it completes them and then marks the process as canceled. |
| Compensation | | Use the Compensation end event to end the current process and trigger compensation as the final step. |
| Signal | | Use the Signal end event to end a process with a signal thrown to an element in one or more other processes. Another process can catch this signal using Catch intermediate events. |
| Terminate | | Use the Terminate end event to terminate the entire process instance immediately. Note that this terminates all the other parallel execution flows and cancels any running activities. |
11.5.3. Intermediate Events
- Catching Intermediate Events
- Throwing Intermediate Events
11.5.3.1. Catching Intermediate Events
- Message: Use the Message catching intermediate event in your process to implement a reaction to an arriving message. The message that this event is expected to react to is specified in its properties. It executes the flow only when it receives the specific message.
- Timer: Use the Timer intermediate event to delay the workflow execution until a specified point or duration. A Timer intermediate event has one incoming flow and one outgoing flow and its execution starts when the incoming flow transfers to the event. When placed on an activity boundary, the execution is triggered at the same time as the activity execution.
- Escalation: Use the Escalation catching intermediate event in your process to consume an Escalation object. An Escalation catching intermediate event awaits a specific escalation object defined in its properties. Once it receives the object, it triggers execution of its outgoing flow.
- Conditional: Use the Conditional intermediate event to execute a workflow when the specific Boolean business condition that it defines evaluates to true. When placed in the Process workflow, a Conditional intermediate event has one incoming flow and one outgoing flow and its execution starts when the incoming flow transfers to the event. When placed on an activity boundary, the execution is triggered at the same time as the activity execution. Note that if the event is non-interrupting, it triggers continuously while the condition is true.
- Error: Use the Error catching intermediate event in your process to execute a workflow when it receives a specific error object defined in its properties.
- Compensation: Use the Compensation intermediate event to handle compensation in case of partially failed operations. A Compensation intermediate event is a boundary event that is attached to an activity in a transaction subprocess that may finish with a Compensation end event or a Cancel end event. The Compensation intermediate event must have one outgoing flow that connects to an activity that defines the compensation action needed to compensate for the action performed by the activity.
- Signal: Use the Signal catching intermediate event to execute a workflow once a specified signal object defined in its properties is received from the main process or any other process.
11.5.3.2. Throwing Intermediate Events
- Message: Use the Message throw intermediate event to produce and send a message to a communication partner (such as an element in another process). Once it sends a message, the process execution continues.
- Escalation: Use the Escalation throw intermediate event to produce an escalation object. Once it creates an escalation object, the process execution continues. The escalation object can be consumed by an Escalation start event or an Escalation intermediate catch event, which is looking for this specific escalation object.
- Signal: Use the Signal throwing intermediate event to produce a signal object. Once it creates a signal object, the process execution continues. The signal object is consumed by a Signal start event or a Signal catching intermediate event, which is looking for this specific signal object. A signal can also be sent into the engine from application code, as shown in the snippet below.
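A brief sketch of sending such a signal from application code, assuming an existing KieSession (ksession), an illustrative signal name mySignal, and an arbitrary event payload object eventData:
// broadcast the signal to all process instances in this session
ksession.signalEvent("mySignal", eventData);
// or deliver it to one specific process instance only
ksession.signalEvent("mySignal", eventData, processInstance.getId());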
11.6. Gateways
- Parallel (AND): in a converging gateway, waits for all incoming Flows. In a diverging gateway, takes all outgoing Flows simultaneously;
- Inclusive (OR): in a converging gateway, waits for all incoming Flows whose condition evaluates to true. In a diverging gateway takes all outgoing Flows whose condition evaluates to true;
- Exclusive (XOR): in a converging gateway, only the first incoming Flow whose condition evaluates to true is chosen. In a diverging gateway only one outgoing Flow is chosen.
- Event-based: used only in diverging gateways for reacting to events. See Section 11.6.1.1, “Event-based Gateway”
- Data-based Exclusive: used in both diverging and converging gateways to make decisions based on available data. See Section 11.6.1.4, “Data-based Exclusive Gateway”
11.6.1. Gateway types
11.6.1.1. Event-based Gateway
11.6.1.2. Parallel Gateway
- Diverging
- Once the incoming Flow is taken, all outgoing Flows are taken simultaneously.
- Converging
- The Gateway waits until all incoming Flows have entered and only then triggers the outgoing Flow.
11.6.1.3. Inclusive Gateway
- Diverging
- Once the incoming Flow is taken, all outgoing Flows whose condition evaluates to true are taken. Connections with lower priority numbers are triggered before those with higher priority numbers; however, the BPMN2 specification does not guarantee this ordering, so for portability reasons it is recommended that you do not depend on it.
Important
Make sure that at least one of the outgoing Flows evaluates to true at runtime; otherwise, the process instance terminates with a runtime exception.
- Converging
- The Gateway merges all incoming Flows previously created by a diverging Inclusive Gateway; that is, it serves as a synchronizing entry point for the Inclusive Gateway branches.
Attributes
- Default gate
- The outgoing Flow taken by default if no other Flow can be taken
11.6.1.4. Data-based Exclusive Gateway
- Diverging
- The Gateway triggers exactly one outgoing Flow. The constraints linked to the outgoing Flows are evaluated, and the Flow whose constraint evaluates to true and has the lowest priority number is selected.
Important
Make sure that at least one of the outgoing Flows evaluates to true at runtime; if no Flow can be taken, the execution throws a runtime exception.
- Converging
- The Gateway allows a workflow branch to continue to its outgoing Flow as soon as it reaches the Gateway; that is, whenever one of the incoming Flows triggers the Gateway, the workflow is sent to the outgoing Flow of the Gateway. If it is triggered from more than one incoming connection, it triggers the next node for each trigger.
Attributes
- Default gate
- The outgoing Flow taken by default if no other Flow can be taken
11.7. Variables
If a variable cannot be resolved, a read access returns null, a write access produces an error message, and the Process continues its execution. Variables are searched for based on their ID.
- Session context:
Globals are visible to all Process instances and assets in the given Session and are intended to be used primarily by business rules and by constraints. They are created dynamically by the rules or constraints. - Process context:
Process variables are defined as properties in the BPMN2 definition file and are visible within the Process instance. They are initialized at Process creation and destroyed on Process finish. - Element context:
Local variables are available within their Process element, such as an Activity. They are initialized when the element context is initialized, that is, when the execution workflow enters the node and execution of the OnEntry action has finished, if applicable. They are destroyed when the element context is destroyed, that is, when the execution workflow leaves the element. Values of local variables can be mapped to Global or Process variables using the Assignment mechanism (refer to Section 11.8, “Assignment”). This allows you to maintain relative independence of the parent Element that accommodates the local variable. Such isolation may help prevent technical exceptions. A short example of reading a Process variable from application code follows this list.
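As a minimal sketch of reading a Process variable from outside the process, assuming an existing KieSession (ksession), a started process instance (processInstance), and a variable named person:
import org.kie.api.runtime.process.WorkflowProcessInstance;
...
// look the instance up in the session and cast it to access its variables
WorkflowProcessInstance instance =
        (WorkflowProcessInstance) ksession.getProcessInstance(processInstance.getId());
Object person = instance.getVariable("person");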
11.8. Assignment
Note
While a String property can refer to a variable directly using the #{userVariable} notation, assignment is rather intended for mapping of properties that are not of type String.
11.9. Action scripts
Action scripts can access the process context through the predefined variable kcontext. kcontext is an instance of the ProcessContext class, and the interface content can be found at the following location: Interface ProcessContext.
For example, the MVEL equivalent of person.getName() is person.name. MVEL also provides other improvements over Java, and MVEL expressions are generally more convenient for the business user.
Example 11.1. Action script that prints out the name of the person
// Java dialect
System.out.println( person.getName() );
// MVEL dialect
System.out.println( person.name );
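The kcontext variable mentioned above can also be used inside such a script. The following Java-dialect sketch assumes a process variable named person and a hypothetical greeting variable:
// read a process variable through the predefined kcontext variable
Person person = (Person) kcontext.getVariable("person");
// write a process variable (the "greeting" variable is hypothetical)
kcontext.setVariable("greeting", "Hello " + person.getName());
// access the current process instance
System.out.println("Process instance id: " + kcontext.getProcessInstance().getId());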
11.10. Constraints
- Code constraints are boolean expressions evaluated directly whenever they are reached; these constraints are written in either Java or MVEL. Both Java and MVEL code constraints have direct access to the globals and variables defined in the process.Here is an example of a valid Java code constraint, person being a variable in the process:
return person.getAge() > 20;
Here is an example of a valid MVEL code constraint, person being a variable in the process:
return person.age > 20;
- Rule constraints are equal to normal Drools rule conditions. They use the Drools Rule Language syntax to express complex constraints. These rules can, like any other rule, refer to data in the working memory. They can also refer to globals directly. Here is an example of a valid rule constraint:
Person( age > 20 )
This tests for a person older than 20 in the working memory.
The following example of a rule constraint searches for a person with the same name as the value stored in the variable "name" of the process:
processInstance : WorkflowProcessInstance()
Person( name == ( processInstance.getVariable("name") ) )
# add more constraints here ...
11.11. Timers
A Timer node is set up with a delay and a period. The delay specifies the amount of time to wait after node activation before triggering the timer for the first time. The period defines the time between subsequent trigger activations; a period of 0 results in a one-shot timer. The delay and period expressions must be of the form [#d][#h][#m][#s][#[ms]]: you can specify the number of days, hours, minutes, seconds, and milliseconds, with milliseconds being the default. For example, the expression 1h waits one hour before triggering the timer again.
In version 6, you can configure timers with a valid ISO8601 date format that supports both one-shot timers and repeatable timers. You can define timers as a date and time representation, a time duration, or repeating intervals. For example:
Date - 2013-12-24T20:00:00.000+02:00 - fires exactly at Christmas Eve at 8PM
Duration - PT1S - fires once after 1 second
Repeatable intervals - R/PT1S - fires every second, no limit. Alternatively, R5/PT1S fires every second, at most 5 times
In addition to the above-mentioned configuration options, you can specify timers using a process variable that contains a string representation of either the delay and period or an ISO8601 date format. By specifying #{variable}, the engine dynamically extracts the process variable value and uses it as the timer expression. The timer service is responsible for making sure that timers get triggered at the appropriate times. You can also cancel timers so that they are no longer triggered. You can use timers in the following ways inside a process:
- You can add a timer event to a process flow. The process activation starts the timer, and when it triggers, once or repeatedly, it activates the timer node's successor. Subsequently, the outgoing connection of a timer with a positive period is triggered multiple times. Canceling a Timer node also cancels the associated timer, after which no more triggers occur.
- You can associate a timer with a sub-process or task as a boundary event.
11.12. Multi-threading
11.12.1. Multi-threading
11.12.2. Engine Execution
Using Thread.sleep(...) as part of a script will not make the engine continue execution elsewhere, but will block the engine thread during that period.
completeWorkItem(...) method to return before continuing execution. It is important that your service handler executes your service asynchronously if its execution is not instantaneous.
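The following is a minimal sketch of such an asynchronous work item handler; the executor field and the placeholder for the external service call are illustrative assumptions:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class AsyncServiceWorkItemHandler implements WorkItemHandler {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void executeWorkItem(final WorkItem workItem, final WorkItemManager manager) {
        // hand the long-running work off to another thread so the engine thread is not blocked
        executor.submit(new Runnable() {
            public void run() {
                // ... invoke the external service here ...
                manager.completeWorkItem(workItem.getId(), null);
            }
        });
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}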
11.13. Process Fluent API
11.13.1. Using the Process Fluent API to Create Business Process
11.13.2. Process Fluent API Example
import org.jbpm.bpmn2.xml.XmlBPMNProcessDumper;
import org.jbpm.ruleflow.core.RuleFlowProcess;
import org.jbpm.ruleflow.core.RuleFlowProcessFactory;
import org.kie.api.KieServices;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.ReleaseId;
import org.kie.api.io.Resource;

RuleFlowProcessFactory factory = RuleFlowProcessFactory.createProcess("org.jbpm.HelloWorld");
factory
// Header
.name("HelloWorldProcess")
.version("1.0")
.packageName("org.jbpm")
// Nodes
.startNode(1).name("Start").done()
.actionNode(2).name("Action")
.action("java", "System.out.println(\"Hello World\");").done()
.endNode(3).name("End").done()
// Connections
.connection(1, 2)
.connection(2, 3);
RuleFlowProcess process = factory.validate().getProcess();
KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
Resource resource = ks.getResources().newByteArrayResource(
XmlBPMNProcessDumper.INSTANCE.dump(process).getBytes());
resource.setSourcePath("helloworld.bpmn2");
kfs.write(resource);
ReleaseId releaseId = ks.newReleaseId("org.jbpm", "helloworld", "1.0");
kfs.generateAndWritePomXML(releaseId);
ks.newKieBuilder(kfs).buildAll();
ks.newKieContainer(releaseId).newKieSession().startProcess("org.jbpm.HelloWorld");
A new business process is created by calling the createProcess() method of the RuleFlowProcessFactory class. This method creates a new process with the given ID and returns the RuleFlowProcessFactory that can be used to build up the process.
- Header: The header section comprises global elements such as the name of the process, imports, and variables. In the above example, the header contains the name and version of the process and the package name.
- Nodes: The nodes section comprises all the different nodes that are part of the process. In the above example, nodes are added to the current process by calling the startNode(), actionNode(), and endNode() methods. These methods return a specific NodeFactory that allows you to set the properties of that node. Once you have finished configuring that specific node, the done() method returns you to the current RuleFlowProcessFactory so you can add more nodes, if necessary.
- Connections: The connections section links the nodes to create a flow chart. In the above example, once you add all the nodes, you must connect them by creating connections between them. This can be done by calling the connection method, which links the nodes. Finally, you can validate the generated process by calling the validate() method and retrieve the created RuleFlowProcess object.
11.14. Testing Business Processes
11.14.1. Unit Testing
JbpmJUnitTestCase (in the jbpm-test module) has been included to simplify unit testing. JbpmJUnitTestCase provides the following:
- Helper methods to create a new knowledge base and session for a given set of processes.
- Assert statements to check:
- The state of a process instance (active, completed, aborted).
- Which node instances are currently active.
- Which nodes have been triggered (to check the path that has been followed).
- The value of variables.

Figure 11.2. Example Hello World Process
Example 11.2. Example JUnit Test
public class ProcessPersistenceTest extends JbpmJUnitBaseTestCase {
public ProcessPersistenceTest() {
// setup data source, enable persistence
super(true, true);
}
@Test
public void testProcess() {
// create runtime manager with single process - hello.bpmn
createRuntimeManager("hello.bpmn");
// take RuntimeManager to work with process engine
RuntimeEngine runtimeEngine = getRuntimeEngine();
// get access to KieSession instance
KieSession ksession = runtimeEngine.getKieSession();
// start process
ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");
// check whether the process instance has completed successfully
assertProcessInstanceCompleted(processInstance.getId(), ksession);
// check what nodes have been triggered
assertNodeTriggered(processInstance.getId(), "StartProcess", "Hello", "EndProcess");
}
}
JbpmJUnitBaseTestCase acts as a base test case class that you can use for JBoss BPM Suite related tests. It provides four usage areas:
- JUnit life cycle methods:
setUp: This method is executed @Before. It configures data source and EntityManagerFactory and cleans up Singleton's session id.
tearDown: This method is executed @After. It clears out history, closes EntityManagerFactory and data source and disposes RuntimeEngines and RuntimeManager.
- Knowledge Base and KnowledgeSession management methods:
createRuntimeManager: This method creates a RuntimeManager for a given set of assets and selected strategy.
disposeRuntimeManager: This method disposes the RuntimeManager currently active in the scope of the test.
getRuntimeEngine: This method creates a new RuntimeEngine for the given context.
- Assertions:
assertProcessInstanceCompleted, assertProcessInstanceAborted, assertProcessInstanceActive, assertNodeActive, assertNodeTriggered, assertProcessVarExists, assertNodeExists, assertVersionEquals, assertProcessNameEquals
- Helper methods:
getDs: This method returns the currently configured data source.
getEmf: This method returns the currently configured EntityManagerFactory.
getTestWorkItemHandler: This method returns the test work item handler that might be registered in addition to what is registered by default.
clearHistory: This method clears the history log.
setupPoolingDataSource: This method sets up the data source.
JbpmJUnitBaseTestCase supports all the predefined RuntimeManager strategies as part of unit testing. It is enough to specify which strategy should be used when creating the runtime manager as part of a single test. The following example uses the PerProcessInstance runtime manager strategy and the task service to deal with user tasks:
public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {
private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);
public ProcessHumanTaskTest() {
super(true, false);
}
@Test
public void testProcessProcessInstanceStrategy() {
RuntimeManager manager = createRuntimeManager(Strategy.PROCESS_INSTANCE, "manager", "humantask.bpmn");
RuntimeEngine runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = runtimeEngine.getKieSession();
TaskService taskService = runtimeEngine.getTaskService();
int ksessionID = ksession.getId();
ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");
assertProcessInstanceActive(processInstance.getId(), ksession);
assertNodeTriggered(processInstance.getId(), "Start", "Task 1");
manager.disposeRuntimeEngine(runtimeEngine);
runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get(processInstance.getId()));
ksession = runtimeEngine.getKieSession();
taskService = runtimeEngine.getTaskService();
assertEquals(ksessionID, ksession.getId());
// let john execute Task 1
List<TaskSummary> list = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
TaskSummary task = list.get(0);
logger.info("John is executing task {}", task.getName());
taskService.start(task.getId(), "john");
taskService.complete(task.getId(), "john", null);
assertNodeTriggered(processInstance.getId(), "Task 2");
// let mary execute Task 2
list = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
task = list.get(0);
logger.info("Mary is executing task {}", task.getName());
taskService.start(task.getId(), "mary");
taskService.complete(task.getId(), "mary", null);
assertNodeTriggered(processInstance.getId(), "End");
assertProcessInstanceCompleted(processInstance.getId(), ksession);
}
}
11.14.2. Testing Integration with External Services
A TestWorkItemHandler is provided by default that can be registered to collect all work items (each work item represents one unit of work, for example, sending a specific email or invoking a specific service, and it contains all the data related to that task) for a given type. The test handler can be queried during unit testing to check whether specific work was actually requested during the execution of the process and that the data associated with the work was correct.
If the work item is aborted (using abortWorkItem(..)), the unit test verifies that the process handles this case successfully by logging this and generating an error, which aborts the process instance in this case.

public void testProcess2() {
// create runtime manager with single process - hello.bpmn
createRuntimeManager("sample-process.bpmn");
// take RuntimeManager to work with process engine
RuntimeEngine runtimeEngine = getRuntimeEngine();
// get access to KieSession instance
KieSession ksession = runtimeEngine.getKieSession();
// register a test handler for "Email"
TestWorkItemHandler testHandler = getTestWorkItemHandler();
ksession.getWorkItemManager().registerWorkItemHandler("Email", testHandler);
// start the process
ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello2");
assertProcessInstanceActive(processInstance.getId(), ksession);
assertNodeTriggered(processInstance.getId(), "StartProcess", "Email");
// check whether the email has been requested
WorkItem workItem = testHandler.getWorkItem();
assertNotNull(workItem);
assertEquals("Email", workItem.getName());
assertEquals("me@mail.com", workItem.getParameter("From"));
assertEquals("you@mail.com", workItem.getParameter("To"));
// simulate a failure to send the email by aborting the work item
ksession.getWorkItemManager().abortWorkItem(workItem.getId());
assertProcessInstanceAborted(processInstance.getId(), ksession);
assertNodeTriggered(processInstance.getId(), "Gateway", "Failed", "Error");
}
11.14.3. Configuring Persistence
default: This is the no-argument constructor and the simplest test case configuration (it does NOT initialize a data source and does NOT configure session persistence). It is usually used for in-memory process management, without human task interaction.
super(boolean, boolean): This allows you to explicitly configure persistence and the data source. This is the most common way of bootstrapping test cases for JBoss BPM Suite.
super(true, false): To execute with in-memory process management with human tasks persistence.
super(true, true): To execute with persistent process management with human tasks persistence.
super(boolean, boolean, string): This is the same as super(boolean, boolean); however, it allows the use of a persistence unit name other than the default (org.jbpm.persistence.jpa).
public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {
private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);
public ProcessHumanTaskTest() {
// configure this tests to not use persistence for process engine but still use it for human tasks
super(true, false);
}
}
Chapter 12. Human Tasks Management
12.1. Human Tasks
12.2. Using User Tasks in Processes
- Actors: The actors that are responsible for executing the human task. A list of actor IDs can be specified using a comma (',') as a separator.
- Group: The group ID that is responsible for executing the human task. A list of group IDs can be specified using a comma (',') as a separator.
- Name: The display name of the node.
- TaskName: The name of the human task. This name is used to link the task to a Form. It also represents the internal name of the Task that can be used for other purposes.
- DataInputSet: All the input variables that the task will receive to work on. Usually you will be interested in copying variables from the scope of the process to the scope of the task.
- DataOutputSet: All the output variables that will be generated by the execution of the task. Here you specify the names of the variables in the context of the task that you want to copy to the context of the process.
- Assignments: Here you specify which process variable will be linked to each Data Input and Data Output mapping.
- Comment: A comment associated with the human task. Here you can use expressions.
- Content: The data associated with this task.
- Priority: An integer indicating the priority of the human task.
- Skippable: Specifies whether the human task can be skipped, that is, whether the actor may decide not to execute the task.
- On entry and on exit actions: Action scripts that are executed upon entry and exit of this node, respectively.
- ActorId: The performer of the task to whom the task is assigned.
- GroupId: The group to which the task performer belongs.
- TaskStakeholderId : The person who is responsible for the progress and the outcome of a task.
- BusinessAdministratorId: The default business administrator who performs the role of the task stakeholder at task definition level.
- BusinessAdministratorGroupId : The group to which the administrator belongs.
- ExcludedOwnerId: Anybody who has been excluded from performing the task and cannot become an actual or potential owner.
- RecipientId: A person who is the recipient of notifications related to the task. A notification may have more than one recipient.
12.3. Data Mapping
12.4. Task Lifecycle

Figure 12.1. Human Task Life Cycle
- Delegating or forwarding a task, so that the task is assigned to another actor.
- Revoking a task, so that it is no longer claimed by one specific actor but is (re)available to all actors allowed to take it.
- Temporarily suspending and resuming a task.
- Stopping a task in progress.
- Skipping a task (if the task has been marked as skippable), in which case the task will not be executed.
12.5. Task Permissions
The task service throws an org.jbpm.services.task.exception.PermissionDeniedException when used with information about an unauthorized user. For example, when a user is trying to directly modify the task (for example, by trying to claim or complete the task), the PermissionDeniedException is thrown if that user does not have the correct role for that operation. Also, users are not able to view or retrieve tasks in Business Central that they are not involved with.
12.5.1. Task Permissions Matrix
- a "+ indicates that the user role can do the specified operation.
- a "-" indicates that the user role may not do the specified operation.
- a "_" indicates that the user role may not do the specified operation, and that it is also not an operation that matches the user's role ("not applicable").
Table 12.1. Task Roles in the Permissions Table
| Word | Role | Description |
|---|---|---|
| Initiator | Task Initiator | The user who creates the task instance. |
| Stakeholder | Task Stakeholder | The user involved in the task. This user can influence the progress of a task, by performing administrative actions on the task instance. |
| Potential | Potential Owner | The user who can claim the task before it has been claimed, or after it has been released or forwarded. Only tasks that have the status Ready may be claimed. A potential owner becomes the actual owner of a task by claiming the task. |
| Actual | Actual Owner | The user who has claimed the task and will progress the task to completion or failure. |
| Administrator | Business Administrator | A super user who may modify the status or progress of a task at any point in a task's lifecycle. |
Permissions Matrices
Table 12.2. Main Operations Permissions Matrix
| Operation/Role | Initiator | Stakeholder | Potential | Actual | Administrator |
|---|---|---|---|---|---|
| activate | + | + | _ | _ | + |
| claim | - | + | + | _ | + |
| complete | - | + | _ | + | + |
| delegate | + | + | + | + | + |
| fail | - | + | _ | + | + |
| forward | + | + | + | + | + |
| nominate | + | + | + | + | + |
| release | + | + | + | + | + |
| remove | - | _ | _ | _ | + |
| resume | + | + | + | + | + |
| skip | + | + | + | + | + |
| start | - | + | + | + | + |
| stop | - | + | _ | + | + |
| suspend | + | + | + | + | + |
12.6. Task Permissions
12.6.1. Task Service and Process Engine
12.6.2. Task Service API
The task service (org.kie.api.task.TaskService) offers the following methods for managing the life cycle of human tasks:
...
void start( long taskId, String userId );
void stop( long taskId, String userId );
void release( long taskId, String userId );
void suspend( long taskId, String userId );
void resume( long taskId, String userId );
void skip( long taskId, String userId );
void delegate(long taskId, String userId, String targetUserId);
void complete( long taskId, String userId, Map<String, Object> results );
...
- taskId: The ID of the task that we are working with. This is usually extracted from the currently selected task in the user task list in the user interface.
- userId: The ID of the user who is executing the action. This is usually the ID of the user who is logged in to the application. (A sketch of a complete() call that uses these arguments follows below.)
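The following minimal sketch shows one of these calls end to end: completing a claimed and started task with a results map. The taskSummary variable, the user "mary", and the "approved" output name are assumptions for the example.
import java.util.HashMap;
import java.util.Map;

// taskSummary is assumed to reference a task that "mary" has already claimed and started
Map<String, Object> results = new HashMap<String, Object>();
results.put("approved", Boolean.TRUE); // hypothetical task output variable
taskService.complete(taskSummary.getId(), "mary", results);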
Methods that are not part of the public API are exposed by InternalTaskService; to use them, you need to manually cast to InternalTaskService. One method that can be useful from this interface is getTaskContent():
Map<String, Object> getTaskContent( long taskId );
Use the ContentMarshallerContext to unmarshall the serialized version of the task content. If you only want to use the stable or public APIs, you can use the following method:
Task taskById = taskQueryService.getTaskInstanceById(taskId);
Content contentById = taskContentService.getContentById(taskById.getTaskData().getDocumentContentId());
ContentMarshallerContext context = getMarshallerContext(taskById);
Object unmarshalledObject = ContentMarshallerHelper.unmarshall(contentById.getContent(), context.getEnvironment(), context.getClassloader());
if (!(unmarshalledObject instanceof Map)) {
    throw new IllegalStateException("The task content needs to be a Map in order to use this method and it was: " + unmarshalledObject.getClass());
}
Map<String, Object> content = (Map<String, Object>) unmarshalledObject;
return content;
12.6.3. Interacting with the Task Service
...
RuntimeEngine engine = runtimeManager.getRuntimeEngine(EmptyContext.get());
KieSession kieSession = engine.getKieSession();
// Start a process
kieSession.startProcess("CustomersRelationship.customers", params);
// Do Task Operations
TaskService taskService = engine.getTaskService();
List<TaskSummary> tasksAssignedAsPotentialOwner = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
TaskSummary taskSummary = tasksAssignedAsPotentialOwner.get(0);
// Claim Task
taskService.claim(taskSummary.getId(), "mary");
// Start Task
taskService.start(taskSummary.getId(), "mary");
...
Register a LocalHTWorkItemHandler in the session to get the Task Service to notify the Process Engine once the task completes. In JBoss BPM Suite, the Task Service runs locally to the Process and Rule Engine. This enables you to create multiple light clients for different Process and Rule Engine instances. All the clients can share the same database.
12.7. Retrieving Process And Task Information
Process and task information can be retrieved by using RuntimeDataService and TaskQueryService. However, the TaskQueryService provides the same functionality as the RuntimeDataService, and using it is not the preferred way to query tasks and processes.
The RuntimeDataService interface can be used as the main source of information, as it provides an interface for retrieving data associated with the runtime. It can list process definitions, process instances, tasks for given users, node instance information, and other data. The service should provide all required information while remaining as efficient as possible.
Example 12.1. Get All Process Definitions
Collection definitions = runtimeDataService.getProcesses(new QueryContext());
Example 12.2. Get Active Process Instances
Collection<ProcessInstanceDesc> activeInstances = runtimeDataService.getProcessInstances(new QueryContext());
Example 12.3. Get Active Nodes for Given Process Instance
Collection<NodeInstanceDesc> activeNodes = runtimeDataService.getProcessInstanceHistoryActive(processInstanceId, new QueryContext());
Example 12.4. Get Tasks Assigned to Given User
List<TaskSummary> taskSummaries = runtimeDataService
    .getTasksAssignedAsPotentialOwner("john", new QueryFilter(0, 10));
Example 12.5. Get Assigned Tasks as a Business Administrator
List<TaskSummary> taskSummaries = runtimeDataService
    .getTasksAssignedAsBusinessAdministrator("john", new QueryFilter(0, 10));
RuntimeDataService is also mentioned in Chapter 18, CDI Integration.
The methods of RuntimeDataService support two important arguments:
- QueryContext
- QueryFilter (which is an extension of QueryContext)
QueryContext allows you to set an offset (by using the offset argument), number of results (count), their order (orderBy) and ascending order (asc) as well.
Because QueryFilter inherits all of the mentioned attributes, it provides the same features, as well as some others: for example, it is possible to set the language, single result, maximum number of results, or paging.
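The following sketch shows how paging and ordering might look with QueryContext. The runtimeDataService instance, the four-argument constructor (offset, count, orderBy, ascending), and the package names (which may differ between versions) are assumptions for the example.
import java.util.Collection;
import org.jbpm.services.api.model.ProcessInstanceDesc;
import org.kie.internal.query.QueryContext;

// retrieve the first 20 active process instances, ordered by process instance ID
QueryContext queryContext = new QueryContext(0, 20, "ProcessInstanceId", true);
Collection<ProcessInstanceDesc> firstPage = runtimeDataService.getProcessInstances(queryContext);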
Chapter 13. Persistence and Transactions
13.1. Process Instance State
13.1.1. Runtime State
13.1.2. Binary Persistence
- JBoss BPM Suite transforms the process instance information into binary data. Custom serialization is used instead of Java serialization for performance reasons.
- The binary data is stored together with other process instance metadata, such as process instance ID, process ID, and the process start date.
Use ksession.getId() to retrieve the session ID.
13.1.3. Data Model Description

Figure 13.1. Data Model
sessioninfo entity contains the state of the (knowledge) session in which the process instance is running.
Table 13.1. SessionInfo
| Field | Description | Nullable |
|---|---|---|
| id | The primary key. | NOT NULL |
| lastmodificationdate | The last time that the entity was saved to a database. | |
| rulesbytearray | State of a session. | NOT NULL |
| startdate | The start time of a session. | |
| optlock | Version field containing a lock value. |
processinstanceinfo entity contains the state of the process instance.
Table 13.2. ProcessInstanceInfo
| Field | Description | Nullable |
|---|---|---|
| instanceid | The primary key. | NOT NULL |
| lastmodificationdate | The last time that the entity was saved to a database. | |
| lastreaddate | Contains the last time that the entity was retrieved from the database. | |
| processid | The ID of the process. | |
| processinstancebytearray | State of a process instance in form of a binary dataset. | NOT NULL |
| startdate | The start time of the process. | |
| state | An integer representing the state of a process instance. | NOT NULL |
| optlock | Version field containing a lock value. |
eventtypes entity contains information about events that a process instance will undergo or has undergone.
Table 13.3. EventTypes
| Field | Description | Nullable |
|---|---|---|
| instanceid | Reference to the processinstanceinfo primary key, and foreign key constraint on this column. | NOT NULL |
| element | A finished event in the process. |
workiteminfo entity contains the state of a work item.
Table 13.4. WorkItemInfo
| Field | Description | Nullable |
|---|---|---|
| workitemid | The primary key. | NOT NULL |
| name | The name of the work item. | |
| processinstanceid | The ID of the process instance (the processinstanceinfo primary key). There is no foreign key constraint on this field. | NOT NULL |
| state | The state of a work item. | NOT NULL |
| optlock | Version field containing a lock value. | |
| workitembytearray | The state of a work item in the form of a binary dataset. | NOT NULL |
The CorrelationKeyInfo entity contains information about correlation keys assigned to a given process instance. This is a loose relationship, as the table is considered optional; use it only when you require correlation capabilities.
Table 13.5. CorrelationKeyInfo
| Field | Description | Nullable |
|---|---|---|
| keyid | The primary key. | NOT NULL |
| name | Assigned name of the correlation key. | |
| processinstanceid | The id of the process instance which is assigned to the correlation key. | NOT NULL |
| optlock | Version field containing a lock value. |
The CorrelationPropertyInfo entity contains information about correlation properties for a correlation key assigned to the process instance.
Table 13.6. CorrelationPropertyInfo
| Field | Description | Nullable |
|---|---|---|
| propertyid | The primary key. | NOT NULL |
| name | The name of the property. | |
| value | The value of the property. | NOT NULL |
| optlock | Version field containing a lock value. | |
| correlationKey-keyid | A foreign key to map to the correlation key. | NOT NULL |
ContextMappingInfo entity contains information about the contextual information mapped to a Ksession. This is an internal part of RuntimeManager and can be considered optional when RuntimeManager is not used.
Table 13.7. ContextMappingInfo
| Field | Description | Nullable |
|---|---|---|
| mappingid | The primary key. | NOT NULL |
| context_id | Identifier of the context. | NOT NULL |
| ksession_id | Identifier of a Ksession. | NOT NULL |
| optlock | Version field containing a lock value. |
13.1.4. Safe Points
13.2. Audit Log
- Verify which actions have been executed in a particular process instance.
- Monitor and analyze the efficiency of a particular process.
13.2.1. Audit Data Model
The jbpm-audit module contains an event listener that stores process-related information in a database using the Java Persistence API (JPA). The data model contains three entities: one for process instance information, one for node instance information, and one for (process) variable instance information:
- The ProcessInstanceLog table contains the basic log information about a process instance.
- The NodeInstanceLog table contains information about which nodes were actually executed inside each process instance. Whenever a node instance is entered from one of its incoming connections or is exited through one of its outgoing connections, that information is stored in this table.
- The VariableInstanceLog table contains information about changes in variable instances. The default is to only generate log entries when (after) a variable changes. It is also possible to log entries before the variable (value) changes.
13.2.2. Audit Data Model Description

Figure 13.2. Audit Data Model
Table 13.8. ProcessInstanceLog
| Field | Description | Nullable |
|---|---|---|
| id | The primary key and id of the log entity. | NOT NULL |
| duration | Duration of a process instance since its start date. | |
| end_date | The end date of a process instance when applicable. | |
| externalId | Optional external identifier used to correlate various elements, for example deployment id. | |
| user_identity | Optional identifier of the user who started the process instance. | |
| outcome | Contains the outcome of a process instance, for example the error code. | |
| parentProcessInstanceId | The process instance id of the parent process instance. | |
| processid | Id of the executed process. | |
| processinstanceid | The process instance id. | NOT NULL |
| processname | The name of the process. | |
| processversion | The version of the process. | |
| start_date | The start date of the process instance. | |
| status | The status of process instance that maps to process instance state. |
Table 13.9. NodeInstanceLog
| Field | Description | Nullable |
|---|---|---|
| id | Primary key and id of the log entity. | NOT NULL |
| connection | Identifier of the sequence flow that led to this node instance. | |
| log_date | Date of the event. | |
| externalId | Optional external identifier used to correlate various elements, for example deployment id. | |
| nodeid | Node id of the corresponding node in the process definition. | |
| nodeinstanceid | Instance id of the node. | |
| nodename | Name of the node. | |
| nodetype | The type of the node. | |
| processid | Id of the executed process. | |
| processinstanceid | Id of the process instance. | NOT NULL |
| type | The type of the event (0 = enter, 1 = exit). | NOT NULL |
| workItemId | Optional identifier of work items available only for certain node types. |
Table 13.10. VariableInstanceLog
| Field | Description | Nullable |
|---|---|---|
| id | Primary key and id of the log entity. | NOT NULL |
| externalId | Optional external identifier used to correlate various elements, for example deployment id. | |
| log_date | Date of the event. | |
| processid | Id of the executed process. | |
| processinstanceid | Id of the process instance. | NOT NULL |
| oldvalue | Previous value of the variable at the time of recording of the log. | |
| value | The value of the variable at the time of recording of the log. | |
| variableid | Variable id in the process definition. | |
| variableinstanceid | The id of the variable instance. |
13.2.3. Storing Process Events in a Database
EntityManagerFactory emf = ...;
StatefulKnowledgeSession ksession = ...;
AbstractAuditLogger auditLogger = AuditLoggerFactory.newJPAInstance(emf);
ksession.addProcessEventListener(auditLogger);
// invoke methods on your session here
Modify persistence.xml to specify a database. You need to include the audit log classes as well (ProcessInstanceLog, NodeInstanceLog, and VariableInstanceLog). See the example:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence
version="2.0"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance>
<persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/jbpm-ds</jta-data-source>
<mapping-file>META-INF/JBPMorm.xml</mapping-file>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
<class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>
<class>org.jbpm.process.audit.ProcessInstanceLog</class>
<class>org.jbpm.process.audit.NodeInstanceLog</class>
<class>org.jbpm.process.audit.VariableInstanceLog</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.transaction.jta.platform"
value="org.hibernate.service.jta.platform.internal.BitronixJtaPlatform"/>
</properties>
</persistence-unit>
</persistence>
13.2.4. Storing Process Events in a JMS Queue
ConnectionFactory factory = ...;
Queue queue = ...;
StatefulKnowledgeSession ksession = ...;
Map<String, Object> jmsProps = new HashMap<String, Object>();
jmsProps.put("jbpm.audit.jms.transacted", true);
jmsProps.put("jbpm.audit.jms.connection.factory", factory);
jmsProps.put("jbpm.audit.jms.queue", queue);
AbstractAuditLogger auditLogger = AuditLoggerFactory.newInstance(Type.JMS, ksession, jmsProps);
ksession.addProcessEventListener(auditLogger);
// invoke methods of your session here
13.3. Transactions
// create the entity manager factory and register it in the environment
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa" );
Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf );
env.set( EnvironmentName.TRANSACTION_MANAGER, TransactionManagerServices.getTransactionManager() );
// create a new knowledge session that uses JPA to store the runtime state
StatefulKnowledgeSession ksession = JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );
// start the transaction
UserTransaction ut = (UserTransaction) new InitialContext().lookup( "java:comp/UserTransaction" );
ut.begin();
// perform multiple commands inside one transaction
ksession.insert( new Person( "John Doe" ) );
ksession.startProcess( "MyProcess" );
// commit the transaction
ut.commit();
You must provide a jndi.properties file in your root classpath to register the Bitronix transaction manager in JNDI.
- If you use the jbpm-test module, jndi.properties is included by default.
- If you are not using the jbpm-test module, create jndi.properties manually with the following content:
java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory
If you use a different JTA transaction manager, modify the transaction manager property in persistence.xml accordingly, for example:
<property name="hibernate.transaction.jta.platform" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
13.4. Implementing Container Managed Transaction
- Use the dedicated transaction manager:
org.jbpm.persistence.jta.ContainerManagedTransactionManager
- Insert the transaction manager and persistence context manager into the environment before you create or load your session:
Environment env = EnvironmentFactory.newEnvironment(); env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf); env.set(EnvironmentName.TRANSACTION_MANAGER, new ContainerManagedTransactionManager()); env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, new JpaProcessPersistenceContextManager(env)); env.set(EnvironmentName.TASK_PERSISTENCE_CONTEXT_MANAGER, new JPATaskPersistenceContextManager(env));
- Configure the JPA provider (the example below uses Hibernate and WebSphere):
<property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.CMTTransactionFactory"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.WebSphereJtaPlatform"/>
Note
If you dispose of your Ksession directly when running in the CMT mode, you may generate exceptions, because JBoss BPM Suite requires transaction synchronization. Use org.jbpm.persistence.jta.ContainerManagedTransactionDisposeCommand to dispose of your session.
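The following minimal sketch shows such a disposal through a command, under the assumption that ksession is an existing session created with the CMT environment configured above.
import org.jbpm.persistence.jta.ContainerManagedTransactionDisposeCommand;

// dispose of the session through the command so that transaction synchronization is respected
ksession.execute(new ContainerManagedTransactionDisposeCommand());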
13.5. Using Persistence
- Add necessary dependencies.
- Configure a data source.
- Configure the JBoss BPM Suite engine.
13.5.1. Adding Dependencies
The jbpm-persistence-jpa.jar file is necessary for saving the runtime state. Therefore, always make sure it is available in your project.
- jbpm-persistence-jpa (org.jbpm)
- drools-persistence-jpa (org.drools)
- persistence-api (javax.persistence)
- hibernate-entitymanager (org.hibernate)
- hibernate-annotations (org.hibernate)
- hibernate-commons-annotations (org.hibernate)
- hibernate-core (org.hibernate)
- commons-collections (commons-collections)
- dom4j (dom4j)
- jta (javax.transaction)
- btm (org.codehaus.btm)
- javassist (javassist)
- slf4j-api (org.slf4j)
- slf4j-jdk14 (org.slf4j)
- h2 (com.h2database)
13.5.2. Manually Configuring JBoss BPM Suite Engine to Use Persistence
Use JPAKnowledgeService to create a knowledge session based on a knowledge base, a knowledge session configuration (if necessary), and the environment. Ensure that the environment contains a reference to your Entity Manager Factory. For example:
// create the entity manager factory and register it in the environment
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa" );
Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf );
// create a new knowledge session that uses JPA to store the runtime state
StatefulKnowledgeSession ksession = JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );
int sessionId = ksession.getId();
// invoke methods on your session here
ksession.startProcess( "MyProcess" );
ksession.dispose();
Use JPAKnowledgeService to recreate a session based on a specific session ID. For example:
// recreate the session from database using the sessionId ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env );
Add persistence.xml to META-INF to configure JPA. The following example uses Hibernate and an H2 database:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistenc version="2.0" xsi:schemaLocation=
"http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance>
<persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/jbpm-ds</jta-data-source>
<mapping-file>META-INF/JBPMorm.xml</mapping-file>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
<class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.transaction.jta.platform"
value="org.hibernate.service.jta.platform.internal.BitronixJtaPlatform"/>
</properties>
</persistence-unit>
</persistence>
The persistence.xml refers to a data source called jdbc/jbpm-ds. If you run your application in an application server, the container typically allows you to configure data sources by using a custom configuration file. Refer to your application server documentation for further details. Otherwise, you can set up the data source programmatically, for example by using the Bitronix PoolingDataSource as shown below:
PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName("jdbc/jbpm-ds");
ds.setClassName("bitronix.tm.resource.jdbc.lrc.LrcXADataSource");
ds.setMaxPoolSize(3);
ds.setAllowLocalTransactions(true);
ds.getDriverProperties().put("user", "sa");
ds.getDriverProperties().put("password", "sasa");
ds.getDriverProperties().put("URL", "jdbc:h2:mem:jbpm-db");
ds.getDriverProperties().put("driverClassName", "org.h2.Driver");
ds.init();
Chapter 14. Using JBoss Developer Studio to Create and Test Processes
- Wizards for creating:
- a JBoss BPM Suite project
- a BPMN2 process
- JBoss BPM Suite perspective showing the most commonly used views in a predefined layout
14.1. JBoss BPM Suite Runtime
14.1.1. JBoss BPM Suite Runtime
14.1.2. Setting the JBoss BPM Suite Runtime
The runtime JAR files are provided in the jboss-bpms-engine.zip archive.
Procedure 14.1. Configure jBPM Runtime
- From the JBoss Developer Studio menu, select Window and click Preferences.
- Select → .
- Click Add...; provide a name for the new runtime, and click Browse to navigate to the directory where the runtime is located.
- Click OK, select the new runtime and click OK again. If you have existing projects, a dialog box will indicate that you have to restart JBoss Developer Studio to update the Runtime.
14.1.3. Configuring the JBoss BPM Suite Server
Procedure 14.2. Configure the JBoss BPM Suite Server
- Open the jBPM view by selecting → → and select and click .
- Add the server view by selecting → → and select → .
- Open the server menu by right clicking the Servers panel and select → .
- Define the server by selecting → and clicking Next.
- Set the home directory by clicking the Browse button. Navigate to and select the installation directory for JBoss EAP which has JBoss BPM Suite installed.
- Provide a name for the server in the Name field, ensure that the configuration file is set, and click Finish.
14.2. Importing Projects from a Git Repository into JBoss Developer Studio
Procedure 14.3. Cloning a Remote Git Repository
- Start the Red Hat JBoss BRMS/BPM Suite server (whichever is applicable) by selecting the server from the server tab and clicking the start icon.
- Simultaneously, start the Secure Shell server, if it is not running already, by using the following command. This command applies to Linux and Mac only. On these platforms, if sshd has already been started, the command fails; in that case, you may safely ignore this step.
/sbin/service sshd start
- In JBoss Developer Studio, select → and navigate to the Git folder. Open the Git folder to select and click .
- Select the repository source as and click .
- Enter the details of the Git repository in the next window and click .

Figure 14.1. Git Repository Details
- Select the branch you wish to import in the following window and click .
- To define the local storage for this project, enter (or select) a non-empty directory, make any configuration changes and click .
- Import the project as a general project in the following window and click . Name the project and click .
Procedure 14.4. Importing a Local Git Repository
- Start the Red Hat JBoss BRMS/BPM Suite server (whichever is applicable) by selecting the server from the server tab and clicking the start icon.
- In JBoss Developer Studio, select → and navigate to the Git folder. Open the Git folder to select and click .
- Select the repository source as and click .

Figure 14.2. Git Repository Details
- Select the repository that is to be configured from the list of available repositories and click .
- In the dialog that opens, select the radio button from the and click . Name the project and click .

Figure 14.3. Wizard for Project Import
14.3. Exploring a JBoss BPM Suite Application
- A set of Java classes that will become process variables or facts in rules.
- A set of services accessed from service tasks in the business process model.
- A business process model definition file in BPMN2 format.
- Rules assets (optional).
- Java class that drives the application, including creation of a knowledge session, starting processes, and firing rules.
- src/main/java that stores the class files (facts).
- src/main/resources/rules that stores the .drl files (rules).
- src/main/resources/process that stores the .bpmn files (processes).
- src/main/resources/jBPM Library that holds the generated .jar files required for rule execution.
14.4. Creating a JBoss BPM Suite Project
Procedure 14.5. Creating a New JBoss BPM Suite Project in Red Hat JBoss Developer Studio
- From the main menu, select → → . Select → and click Next.
- Enter a name for the project into the Project name: text box and click Next.
Note
JBoss Developer Studio provides the option to add a sample HelloWorld Rule file to the project. Accept this default by clicking Next to test the sample project in the following steps.
- Select the jBPM runtime (or use the default).
- Select generate code compatible with jBPM 6 or above, and click Finish.
- To test the project, right click the Java file that contains the main method and select → → . The output will be displayed in the console tab.
14.5. Converting an Existing Java Project to a BPM Suite Project
Procedure 14.6. Task
- Open the Java project in JBoss Developer Studio.
- Right-click the project and under the category, select .
14.6. Creating a Process Using BPMN2 Process Wizard
Procedure 14.7. Create a New Process
- To create a new process, select → → and then select → .
- Select the parent folder for the process.
- Enter a name in the File name: dialogue box and click Finish.
14.7. Building a Process Using BPMN2 Process Editor
Procedure 14.8. Create a New Process
- Create a new process using the BPMN2 Process Wizard in JBoss Developer Studio.
- Right click the process .bpmn file, select Open With and then click the radio button next to BPMN2 Process Editor.
- Add nodes to the process by clicking on the required node in the palette and clicking on the canvas where the node should be placed.
- Connect the nodes with sequence flows. Select Sequence Flow from the palette, then click the nodes to connect them.
- To edit a node's properties, click the node, open the properties tab in the bottom panel of the JBoss Developer Studio workspace, and click the values to be edited. If the properties tab is not already open, right click the bpmn file in the package panel and select → .
- Click the save icon to save the process.
14.8. Creating a Process Using BPMN Maven Process Wizard
The wizard generates a Maven-based project with a pom.xml file, and includes a sample process and a Java class to execute it.
Procedure 14.9. Create a New Process
- To create a new project, select → → and then select → .
- Enter a name for your project and click Finish. This creates your Maven project with a sample process in the src/main/resources directory and a Java class that can be used to execute the sample process. In addition to that, the project contains:
- A pom.xml file containing the following: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.sample</groupId> <artifactId>jbpm-example</artifactId> <version>1.0.0-SNAPSHOT</version> <name>jBPM :: Sample Maven Project</name> <description>A sample jBPM Maven project</description> <properties> <jbpm.version>6.0.0.Final</jbpm.version> </properties> <repositories> <repository> <id>redhat-techpreview-all-repository</id> <name>Red Hat Tech Preview repository (all)</name> <url>http://maven.repository.redhat.com/techpreview/all/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>daily</updatePolicy> </snapshots> </repository> </repositories> <dependencies> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-test</artifactId> <version>${jbpm.version}</version> </dependency> </dependencies> </project>
- A kmodule.xml configuration file under the META-INF folder. The kmodule.xml defines which resources (like processes and rules) are to be loaded as part of your project. In this case, it defines a knowledge base called kbase that loads all the resources in the com.sample directory as shown below: <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule"> <kbase name="kbase" packages="com.sample"/> </kmodule>
- Update the project properties in the tab and specify the JBoss BPM Suite version. It adds the JBoss Nexus Maven repository (where all the JBoss BPM Suite JARs and their dependencies are located) to your project and configures the dependencies.
Note
By default, only the jbpm-test JAR is specified as a dependency, as it has transitive dependencies to almost all of the core dependencies you will need. However, you are free to update the dependencies section to include only the dependencies you need.
14.9. Debugging Business Processes
To validate a process, right click the .bpmn file and select Validate.
To debug a process, right click the .bpmn file and select → ; make any required changes to the test configuration and click Debug.
14.9.1. Using the Debug Perspective
Procedure 14.10. The Debug Perspective
- Open the Process Instance view
- Select under the category
- Use a Java breakpoint to stop your application at a specific point (for example, after starting a new process instance).
- In the Debug perspective, select the ksession you would like to inspect.
- The Process Instances view will show the process instances that are currently active inside that ksession.
- When double-clicking a process instance, the process instance viewer will graphically show the progress of that process instance.
- Sometimes, when double-clicking a process instance, the process instance viewer complains that it cannot find the process. This means that the plug-in was not able to find the process definition of the selected process instance in the cache of parsed process definitions. To solve this, simply change the process definition in question and save it again.

Figure 14.4. Process Instance in the Debugger
Note
14.9.2. Debugging Views in JBoss Developer Studio
14.9.2.1. The Process Instances View
Example 14.1. Sample Process Instances View

14.9.2.2. The Human Task View
Example 14.2. Sample Human Task View

14.9.2.3. The Audit View
Example 14.3. Threaded File Logger
KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory
    .newThreadedFileLogger(ksession, "logdir/mylogfile", 1000);
// do something with the session here
logger.close();

14.10. Synchronizing JBoss Developer Studio Workspace with Business Central Repositories
14.10.1. Importing a Business Central Repository using EGit Import Wizard
Procedure 14.11. Task
- Open JBoss Developer Studio.
- Navigate to → → → and click .
- Select to connect to a repository that is managed by Business Central and click .This opens a dialog box.
- Provide the URI of the repository you would like to import in the field. Provide the following URI to connect to your Business Central repositories:
ssh://<hostname>:8001/<repository_name>
For example, if you are running Business Central on your local host by using the jbpm-installer, you would use the following URI to import the jbpm-playground repository: ssh://localhost:8001/jbpm-playground
You can change the port used by the server to provide ssh access to the git repository if necessary, using the system property org.uberfire.nio.git.ssh.port.
- Click .
- Specify where on your local file system you would like this repository to be created in the field.
- Select the master branch in the field and click .
- Select to import the repository you downloaded as a project in your JBoss Developer Studio workspace and click >.
- Provide a name for the repository and click .
14.10.2. Committing Changes to Business Central
Procedure 14.12. Task
- Open your repository project in JBoss Developer Studio.
- Right-click on your repository project and select → . A new dialog box opens showing all the changes you have on your local file system.
- Select the files you want to commit, provide an appropriate commit message, and click . You can double-click each file to get an overview of the changes you made to that file.
- Right-click your project again, and select → .
14.10.3. Retrieving the Changes from the Business Central Repository
Procedure 14.13. Task
- Open your repository project in JBoss Developer Studio.
- Right-click your repository project and select → . This action fetches all the changes from the Business Central repository.
- Right-click your project again and select → . A dialog appears.
- In the dialog box, select branch under .
- Click .
Note
14.10.4. Importing Individual Projects from Repository
- Interpret the information in the project pom.xml file that you created in Business Central.
- Download and include any dependencies you specified.
- Compile any Java classes you have in your project.
Procedure 14.14. Task
- In the JBoss Developer Studio, right-click on one of the projects in your repository project and select .
- Under the Maven category, select and click . The Import Maven Project dialog box opens displaying the pom.xml file of the project you selected.
- Click .
14.10.5. Adding JBoss BPM Suite libraries to your Project Classpath
Procedure 14.15. Task
- Right-click your project and select → .
Chapter 15. Case Management
15.1. Introduction
15.2. Use Cases
- Clinical decision support is a great use case for the Case Management approach. Care plans are used to describe how patients must be treated in specific circumstances, but people like general practitioners still need the flexibility to add additional steps and deviate from the proposed plan, as each case is unique. A care plan with tasks to be performed when a patient has high blood pressure can be designed with this approach. While a large part of the process is still well-structured, the general practitioner can decide which tasks must be performed as part of the sub-process. The practitioner also has the ability to add new tasks during that period, tasks that were not defined as part of the process, or to repeat tasks multiple times. The process uses an ad-hoc sub-process to model this kind of flexibility, possibly augmented with rules or event processing to help in deciding which fragments to execute.
- An internet provider can use this approach to handle internet connectivity cases. Instead of having a set process from start to end, the case worker can choose from a number of actions based on the problem at hand. The case worker is responsible for selecting what to do next and can even add new tasks dynamically.
15.3. Case Management in JBoss BPM Suite
JBoss BPM Suite provides the casemgmt API that focuses on exposing the Case Management concepts. The following concepts explain how Case Management can be mapped to the existing constructs inside JBoss BPM Suite:
Case Definition
A case definition is a very flexible high-level process synonymous with the Ad-Hoc process in JBoss BPM Suite. You can define a default empty Ad-Hoc process for maximum flexibility to use when loaded in RuntimeManager. For a more complex case definition, you can define an Ad-Hoc process that may include milestones, predefined tasks to be accomplished, and case roles to specify the roles of case participants.
Case Instance
In an Ad-Hoc process definition, a case instance is created that allows the involved roles to create new tasks. You can create a new case instance for an empty case as below:
ProcessInstance processInstance = caseMgmtService.startNewCase("CaseName");
During the start of a new case, the parameter 'Case Name' is set as the process variable 'name'. Alternatively, you can create a case instance the same way as a new process instance:
ProcessInstance processInstance = runtimeEngine.getKieSession().startProcess("CaseUserTask", params);
Case File
A case file contains all the information required for managing a case. A case file comprises several case file items, each representing a piece of information.
Case Context
Case context is the audit and related information about a case execution. A case context can be identified based on the unique case ID. The CaseMgmtUtil class is used to get active tasks, subprocesses, and nodes. The AuditService class is used to get a list of passed nodes, and anything that is possible to do with processes. The getCaseData() and setCaseData() methods of the case file are used to get and set the dynamic process variables.
Milestones
You can define milestones in a case definition and track a case's progress at runtime. A number of events can be captured from process and task executions. Based on these events, you can define milestones in a case definition and track a case's progress at runtime. The getAchievedMilestones() method is used to get all achieved milestones. The task names of milestones must be Milestone.
Case Role
You can define roles for a case definition and keep track of which users participate in the case in which role at runtime. Case roles are defined in the case definitions as below:
<extensionElements>
  <tns:metaData name="customCaseRoles">
    <tns:metaValue>
      responsible:1,accountable,consulted,informed
    </tns:metaValue>
  </tns:metaData>
  <tns:metaData name="customDescription">
    <tns:metaValue>
      #{name}
    </tns:metaValue>
  </tns:metaData>
</extensionElements>
The number represents the maximum number of users in this role. In the example above, only one user can be assigned to the role responsible. You can add users to case roles as follows:
caseMgmtService.addUserToRole(caseId, "responsible", responsiblePerson);
The case roles cannot be used as groups for Human Tasks. The Human Task has to be assigned to some user with the case role, hence a user is selected in the case role based on some heuristics (random):
public String getRandomUserInTheRole(long pid, String role) {
    String[] users = caseMgmtService.getCaseRoleInstanceNames(pid).get(role);
    Random rand = new Random();
    int n = 0;
    if (users.length > 1) {
        n = rand.nextInt(users.length - 1);
    }
    return users[n];
}
Dynamic Nodes
This involves creating dynamic process tasks, human tasks, and case tasks.
Human Task: The Human Task service inside JBoss BPM Suite that implements the WS-HumanTask specification (defined by the OASIS group) already provides this functionality and can be fully integrated with. This service takes care of the task lifecycle and allows you to access the internal task events.
Process Task: You can use normal process definitions and instances to be executed as part of a case by correlating them with the case ID.
Case Task: Just as you can provide business processes to be executed from another process, you can provide the same feature for executing cases from inside another case.
Work Task: The work task with a defined work item handler.
Part IV. KIE
Chapter 16. KIE API
16.1. KIE
- AuthorIncludes authoring of knowledge using a UI metaphor, such as DRL, BPMN2, decision table, and class models.
- BuildIncludes building the authored knowledge into deployable units. For KIE, this unit is a JAR.
- TestIncludes testing KIE knowledge before it is deployed to the application.
- DeployIncludes deploying the unit to a location where applications may utilize (consume) them. KIE uses a Maven-style repository.
- UtilizeIncludes loading of a JAR to provide a KIE session (KieSession), for the application to interact with. KIE exposes the JAR at runtime via a KIE container (KieContainer). KieSessions, for the runtimes to interact with, are created from the KieContainer.
- RunIncludes system interaction with the KieSession, via API.
- WorkIncludes user interaction with the KieSession through command line or UI.
- ManageIncludes managing any KieSession or KieContainer.
16.2. KIE Framework
16.2.1. KIE Systems
- Author
- Knowledge author using UI metaphors such as DRL, BPMN2, decision tables, and class models.
- Build
- Builds the authored knowledge into deployable units.
- For KIE this unit is a JAR.
- Test
- Test KIE knowledge before it is deployed to the application.
- Deploy
- Deploys the unit to a location where applications may use them.
- KIE uses Maven style repository.
- Utilize
- The loading of a JAR to provide a KIE session (KieSession), with which the application can interact.
- KIE exposes the JAR at runtime via a KIE container (KieContainer).
- KieSessions, for the runtimes to interact with, are created from the KieContainer.
- Run
- System interaction with the KieSession, via API.
- Work
- User interaction with the KieSession, via command line or UI.
- Manage
- Manage any KieSession or KieContainer.
16.2.2. KieBase
You can use KieHelper to load processes from various resources (for example, from the classpath or from the file system), and then create a new knowledge base from that helper. The following code snippet shows how to create a knowledge base consisting of only one process definition (using in this case a resource from the classpath):
KieHelper kieHelper = new KieHelper();
KieBase kBase = kieHelper.addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn")).build();
Note that KieHelper and ResourceFactory are part of the internal APIs org.kie.internal.utils.KieHelper and org.kie.internal.io.ResourceFactory. Using RuntimeManager is the recommended way to create a knowledge base and knowledge session (see the sketch after the following note).
Note
Internal APIs (org.kie.internal) are not supported, as they are subject to change.
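A minimal sketch of the RuntimeManager approach follows; the in-memory builder, the MyProcess.bpmn resource, and the singleton strategy are assumptions chosen for illustration.
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;
import org.kie.internal.runtime.manager.context.EmptyContext;

// build a runtime environment holding the process definition
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultInMemoryBuilder()
    .addAsset(ResourceFactory.newClassPathResource("MyProcess.bpmn"), ResourceType.BPMN2)
    .get();
// create a singleton runtime manager and obtain a session from it
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);
RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = engine.getKieSession();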
KieBase is a repository of all the application's knowledge definitions. It contains rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted, and, ultimately, process instances may be started. Creating the KieBase can be quite heavy, whereas session creation is very light; therefore, it is recommended that KieBase be cached where possible to allow for repeated session creation. Accordingly, the caching mechanism is automatically provided by the KieContainer.
Table 16.1. kbase Attributes
| Attribute name | Default value | Admitted values | Meaning |
|---|---|---|---|
| name | none | any | The name which retrieves the KieBase from the KieContainer. This is the only mandatory attribute. |
| includes | none | any comma separated list | A comma separated list of other KieBases contained in this kmodule. The artifacts of all these KieBases will also be included in this one. |
| packages | all | any comma separated list | By default, all the JBoss BRMS artifacts under the resources folder, at any level, are included in the KieBase. This attribute allows you to limit the artifacts that will be compiled in this KieBase to only the ones belonging to the list of packages. |
| default | false | true, false | Defines if this KieBase is the default one for this module, so it can be created from the KieContainer without passing any name to it. There can be at most one default KieBase in each module. |
| equalsBehavior | identity | identity, equality | Defines the behavior of JBoss BRMS when a new fact is inserted into the Working Memory. With identity, a new FactHandle is always created unless the same object is already present in the Working Memory; with equality, a new FactHandle is created only if the newly inserted object is not equal (according to its equals method) to an already existing fact. |
| eventProcessingMode | cloud | cloud, stream | When compiled in cloud mode, the KieBase treats events as normal facts, while in stream mode it allows temporal reasoning on them. |
| declarativeAgenda | disabled | disabled, enabled | Defines if the Declarative Agenda is enabled or not. |
16.2.3. KieSession
KieSession ksession = kbase.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");
import org.kie.api.runtime.process.ProcessRuntime;
KieSession stores and executes on runtime data. It is created from the KieBase, or, more easily, created directly from the KieContainer if it has been defined in the kmodule.xml file.
Note
org.kie.tx.lock.enabled and the environment entry TRANSACTION_LOCK_ENABLED to true. The default value of these properties is false.
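When a KieSession is declared in kmodule.xml, it can also be created straight from the KieContainer, as in the following sketch; the classpath container and the process ID "com.sample.MyProcess" are assumptions for the example.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

KieServices kieServices = KieServices.Factory.get();
KieContainer kieContainer = kieServices.getKieClasspathContainer();
// creates the default KieSession declared in kmodule.xml; the container also caches the underlying KieBase
KieSession ksession = kieContainer.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");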
Table 16.2. ksession Attributes
| Attribute name | Default value | Admitted values | Meaning |
|---|---|---|---|
| name | none | any | Unique name of this KieSession. Used to fetch the KieSession from the KieContainer. This is the only mandatory attribute. |
| type | stateful | stateful, stateless | A stateful session allows you to iteratively work with the Working Memory, while a stateless one is a one-off execution of a Working Memory with a provided data set. |
| default | false | true, false | Defines if this KieSession is the default one for this module, so it can be created from the KieContainer without passing any name to it. In each module there can be at most one default KieSession for each type. |
| clockType | realtime | realtime, pseudo | Defines if event timestamps are determined by the system clock or by a pseudo clock controlled by the application. This clock is especially useful for unit testing temporal rules. |
| beliefSystem | simple | simple, jtms, defeasible | Defines the type of belief system used by the KieSession. |
16.2.3.1. The ProcessRuntime Interface
The ProcessRuntime interface defines all the session methods for interacting with processes, as shown below.
/**
* Start a new process instance. The process (definition) that should
* be used is referenced by the given process id.
*
* @param processId The id of the process that should be started
* @return the ProcessInstance that represents the instance of the process that was started
*/
ProcessInstance startProcess(String processId);
/**
* Start a new process instance. The process (definition) that should
* be used is referenced by the given process id. Parameters can be passed
* to the process instance (as name-value pairs), and these will be set
* as variables of the process instance.
*
* @param processId the id of the process that should be started
* @param parameters the process variables that should be set when starting the process instance
* @return the ProcessInstance that represents the instance of the process that was started
*/
ProcessInstance startProcess(String processId,
Map<String, Object> parameters);
/**
* Signals the engine that an event has occurred. The type parameter defines
* which type of event and the event parameter can contain additional information
* related to the event. All process instances that are listening to this type
* of (external) event will be notified. For performance reasons, this type of event
* signaling should only be used if one process instance should be able to notify
* other process instances. For internal event within one process instance, use the
* signalEvent method that also include the processInstanceId of the process instance
* in question.
*
* @param type the type of event
* @param event the data associated with this event
*/
void signalEvent(String type,
Object event);
/**
* Signals the process instance that an event has occurred. The type parameter defines
* which type of event and the event parameter can contain additional information
* related to the event. All node instances inside the given process instance that
* are listening to this type of (internal) event will be notified. Note that the event
* will only be processed inside the given process instance. All other process instances
* waiting for this type of event will not be notified.
*
* @param type the type of event
* @param event the data associated with this event
* @param processInstanceId the id of the process instance that should be signaled
*/
void signalEvent(String type,
Object event,
long processInstanceId);
/**
* Returns a collection of currently active process instances. Note that only process
* instances that are currently loaded and active inside the engine will be returned.
* When using persistence, it is likely not all running process instances will be loaded
* as their state will be stored persistently. It is recommended not to use this
* method to collect information about the state of your process instances but to use
* a history log for that purpose.
*
* @return a collection of process instances currently active in the session
*/
Collection<ProcessInstance> getProcessInstances();
/**
* Returns the process instance with the given id. Note that only active process instances
* will be returned. If a process instance has been completed already, this method will return
* null.
*
* @param id the id of the process instance
* @return the process instance with the given id or null if it cannot be found
*/
ProcessInstance getProcessInstance(long processInstanceId);
/**
* Aborts the process instance with the given id. If the process instance has been completed
* (or aborted), or the process instance cannot be found, this method will throw an
* IllegalArgumentException.
*
* @param id the id of the process instance
*/
void abortProcessInstance(long processInstanceId);
/**
* Returns the WorkItemManager related to this session. This can be used to
* register new WorkItemHandlers or to complete (or abort) WorkItems.
*
* @return the WorkItemManager related to this session
*/
WorkItemManager getWorkItemManager();
16.2.3.2. Event Listeners
Use the ProcessEventListener class to listen to process-related events, such as starting or completing a process and entering and leaving a node. An event object provides access to related information, like the process instance and node instance linked to the event. You can use this API to register your own event listeners. Here is a list of methods of the ProcessEventListener class:
public interface ProcessEventListener {
void beforeProcessStarted( ProcessStartedEvent event );
void afterProcessStarted( ProcessStartedEvent event );
void beforeProcessCompleted( ProcessCompletedEvent event );
void afterProcessCompleted( ProcessCompletedEvent event );
void beforeNodeTriggered( ProcessNodeTriggeredEvent event );
void afterNodeTriggered( ProcessNodeTriggeredEvent event );
void beforeNodeLeft( ProcessNodeLeftEvent event );
void afterNodeLeft( ProcessNodeLeftEvent event );
void beforeVariableChanged(ProcessVariableChangedEvent event);
void afterVariableChanged(ProcessVariableChangedEvent event);
}
Import the ProcessEventListener class as follows:
import org.kie.api.event.process.ProcessEventListener;
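A minimal sketch of registering a listener on an existing ksession follows; the DefaultProcessEventListener base class is used so that only the callbacks of interest need to be overridden, and the logged messages are illustrative only.
import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessCompletedEvent;
import org.kie.api.event.process.ProcessStartedEvent;

ksession.addEventListener(new DefaultProcessEventListener() {
    @Override
    public void afterProcessStarted(ProcessStartedEvent event) {
        // fired once all processing triggered by the start has finished
        System.out.println("Started: " + event.getProcessInstance().getProcessId());
    }

    @Override
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        System.out.println("Completed: " + event.getProcessInstance().getProcessId());
    }
});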
16.2.3.3. Before and After Events
For example, the node triggered events of the second node occur between the beforeNodeLeftEvent and the afterNodeLeftEvent of the node that is left (as the triggering of the second node is a direct result of leaving the first node). This enables you to derive cause relationships between events more easily. Similarly, all node triggered and node left events that are the direct result of starting a process occur between the beforeProcessStarted and afterProcessStarted events. In general, if you just want to be notified when a particular event occurs, you must check for the before events only (as they occur immediately before the event actually occurs). If you are only looking at the after events, you may get an impression of events firing in the wrong order. As the after events are triggered as a stack, they only fire when all events that were triggered as a result of this event have already fired. Use the after events only if you want to ensure that all processing related to the event has ended, for example, when you want to be notified when the starting of a particular process instance has ended.
16.2.3.4. Correlation Keys
A process instance can be identified by a custom business key, a CorrelationKey, that is composed of CorrelationProperties. A CorrelationKey can have either a single property describing it or can be represented as a multi-valued property set. The correlation feature, generally used for long running processes, requires you to enable persistence in order to permanently store correlation information.
Correlation capabilities are provided by the CorrelationAwareProcessRuntime interface, which exposes the following methods:
/**
* Start a new process instance. The process (definition) that should
* be used is referenced by the given process id. Parameters can be passed
* to the process instance (as name-value pairs), and these will be set
* as variables of the process instance.
*
* @param processId the id of the process that should be started
* @param correlationKey custom correlation key that can be used to identify process instance
* @param parameters the process variables that should be set when starting the process instance
* @return the ProcessInstance that represents the instance of the process that was started
*/
ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);
/**
* Creates a new process instance (but does not yet start it). The process
* (definition) that should be used is referenced by the given process id.
* Parameters can be passed to the process instance (as name-value pairs),
* and these will be set as variables of the process instance. You should only
* use this method if you need a reference to the process instance before actually
* starting it. Otherwise, use startProcess.
*
* @param processId the id of the process that should be started
* @param correlationKey custom correlation key that can be used to identify process instance
* @param parameters the process variables that should be set when creating the process instance
* @return the ProcessInstance that represents the instance of the process that was created (but not yet started)
*/
ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);
/**
* Returns the process instance with the given correlationKey. Note that only active process instances
* will be returned. If a process instance has been completed already, this method will return
* null.
*
* @param correlationKey the custom correlation key assigned when process instance was created
* @return the process instance with the given id or null if it cannot be found
*/
ProcessInstance getProcessInstance(CorrelationKey correlationKey);
Import the CorrelationAwareProcessRuntime interface as follows:
import org.kie.internal.process.CorrelationAwareProcessRuntime;
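A minimal sketch of starting and retrieving a process instance with a correlation key follows; the process ID, the key value, and the assumption that ksession is a persistent session are illustrative only.
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.internal.KieInternalServices;
import org.kie.internal.process.CorrelationKey;
import org.kie.internal.process.CorrelationKeyFactory;

CorrelationKeyFactory keyFactory = KieInternalServices.Factory.get().newCorrelationKeyFactory();
// a single-property business key, for example an order number
CorrelationKey key = keyFactory.newCorrelationKey("order-12345");
ProcessInstance processInstance =
    ((CorrelationAwareProcessRuntime) ksession).startProcess("com.sample.MyProcess", key, null);
// later, the same key can be used to look the instance up again
ProcessInstance found = ((CorrelationAwareProcessRuntime) ksession).getProcessInstance(key);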
16.2.3.5. Threads
Invoking Thread.sleep(...) as part of a script does not make the engine continue execution elsewhere; it blocks the engine thread during that period. The same principle applies to service tasks. When a service task is reached in a process, the engine invokes the handler of this service synchronously. The engine waits for the completeWorkItem(...) method to return before continuing execution. It is important that your service handler executes your service asynchronously if its execution is not instantaneous. An example of this is a service task that invokes an external service. Since the delay in invoking this service remotely and waiting for the results may take too long, invoking this service asynchronously is advised. This means that the handler only invokes the service and notifies the engine later when the results are available. In the meantime, the process engine continues execution of the process.
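The following is a minimal sketch of such an asynchronous handler, not a built-in implementation; the single-thread executor, the handler class name, and the omitted service invocation are assumptions for illustration.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class AsyncServiceTaskHandler implements WorkItemHandler {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void executeWorkItem(final WorkItem workItem, final WorkItemManager manager) {
        // hand the (potentially slow) service call to another thread so the engine thread is not blocked
        executor.submit(new Runnable() {
            public void run() {
                // ... invoke the external service here ...
                // notify the engine only when the results are available
                manager.completeWorkItem(workItem.getId(), null);
            }
        });
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}
Such a handler could then be registered with ksession.getWorkItemManager().registerWorkItemHandler("Service Task", new AsyncServiceTaskHandler()), assuming the service node is registered under the "Service Task" name.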
16.2.3.6. Partial Correlation Keys
For example, consider a business key such as customerId, where each customer can have many applications (process instances) running simultaneously. In order to retrieve a list of all the currently running applications and choose to continue any one of them, it is beneficial to use a correlation key with multiple properties (such as customerId and applicationId) and use only customerId to retrieve the entire list.
/**
* Returns active process instance description found for given correlation key if found otherwise null. At the same time it will
* fetch all active tasks (in status: Ready, Reserved, InProgress) to provide information what user task is keeping instance
* and who owns them (if were already claimed).
* @param correlationKey correlation key assigned to process instance
* @return Process instance information, in the form of a {@link ProcessInstanceDesc} instance.
*/
ProcessInstanceDesc getProcessInstanceByCorrelationKey(CorrelationKey correlationKey);
/**
* Returns process instances descriptions (regardless of their states) found for given correlation key if found otherwise empty list.
* This query uses 'like' to match correlation key so it allows to pass only partial keys - though matching
* is done based on 'starts with'
* @param correlationKey correlation key assigned to process instance
* @return A list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given correlation key
*/
Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKey(CorrelationKey correlationKey);
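As an illustration, the following sketch builds a two-property correlation key and a single-property partial key using the CorrelationKeyFactory from the kie-internal API. The process id, property values, and the ksession variable are assumptions made for this example only.
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.ProcessInstance;
import org.kie.internal.KieInternalServices;
import org.kie.internal.process.CorrelationAwareProcessRuntime;
import org.kie.internal.process.CorrelationKey;
import org.kie.internal.process.CorrelationKeyFactory;

// ...
CorrelationKeyFactory keyFactory =
        KieInternalServices.Factory.get().newCorrelationKeyFactory();

// Full key with two properties: customerId and applicationId.
CorrelationKey fullKey =
        keyFactory.newCorrelationKey(Arrays.asList("CUST-0042", "APP-0007"));

Map<String, Object> params = new HashMap<String, Object>();
params.put("amount", 10000);

// The KieSession implementation also exposes the correlation-aware runtime.
ProcessInstance pi = ((CorrelationAwareProcessRuntime) ksession)
        .startProcess("org.acme.loanApplication", fullKey, params);

// Later, a partial key containing only the customerId can be passed to
// getProcessInstancesByCorrelationKey(...) to list all of that customer's
// running applications.
CorrelationKey partialKey = keyFactory.newCorrelationKey("CUST-0042");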
16.2.4. KieFileSystem
It is possible to define the KieBases and KieSessions belonging to a KieModule programmatically, instead of using the declarative definition in the kmodule.xml file. The same programmatic API also allows you to explicitly add the files containing the Kie artifacts instead of reading them automatically from the resources folder of your project. To do that, it is necessary to create a KieFileSystem, a sort of virtual file system, and add all the resources contained in your project to it.
Obtain the KieFileSystem from the KieServices. The kmodule.xml configuration file must be added to the file system; this is a mandatory step. Kie also provides a convenient fluent API, implemented by the KieModuleModel, to programmatically create this file.
To do this, create a KieModuleModel from the KieServices, configure it with the desired KieBases and KieSessions, convert it to XML, and add the XML to the KieFileSystem. This process is shown in the following example:
Example 16.1. Creating a kmodule.xml programmatically and adding it to a KieFileSystem
import org.kie.api.KieServices;
import org.kie.api.builder.model.KieModuleModel;
import org.kie.api.builder.model.KieBaseModel;
import org.kie.api.builder.model.KieSessionModel;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.conf.EqualityBehaviorOption;
import org.kie.api.conf.EventProcessingOption;
import org.kie.api.runtime.conf.ClockTypeOption;
//...
KieServices kieServices = KieServices.Factory.get();
KieModuleModel kieModuleModel = kieServices.newKieModuleModel();
KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel( "KBase1" )
        .setDefault( true )
        .setEqualsBehavior( EqualityBehaviorOption.EQUALITY )
        .setEventProcessingMode( EventProcessingOption.STREAM );
KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel( "KSession1" )
        .setDefault( true )
        .setType( KieSessionModel.KieSessionType.STATEFUL )
        .setClockType( ClockTypeOption.get("realtime") );
KieFileSystem kfs = kieServices.newKieFileSystem();
// Convert the model to XML and add it to the virtual file system.
kfs.writeKModuleXML( kieModuleModel.toXML() );
At this point, add to the KieFileSystem, through its fluent API, all the other Kie artifacts composing your project. These artifacts have to be added in the same position of a corresponding usual Maven project.
16.2.5. KieResources
Example 16.2. Adding Kie artifacts to a KieFileSystem
import org.kie.api.builder.KieFileSystem;
KieFileSystem kfs = ...
kfs.write( "src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL )
.write( "src/main/resources/dtable.xls",
kieServices.getResources().newInputStreamResource( dtableFileStream ) );Resources. In the latter case the Resources can be created by the KieResources factory, also provided by the KieServices. The KieResources provides many convenient factory methods to convert an InputStream, a URL, a File, or a String representing a path of your file system to a Resource that can be managed by the KieFileSystem.
Resource can be inferred from the extension of the name used to add it to the KieFileSystem. However it also possible to not follow the Kie conventions about file extensions and explicitly assign a specific ResourceType to a Resource as shown below:
Example 16.3. Creating and adding a Resource with an explicit type
import org.kie.api.builder.KieFileSystem;
import org.kie.api.io.ResourceType;
KieFileSystem kfs = ...
kfs.write( "src/main/resources/myDrl.txt",
           kieServices.getResources().newInputStreamResource( drlStream )
                      .setResourceType(ResourceType.DRL) );
Add all the resources to the KieFileSystem and build it by passing the KieFileSystem to a KieBuilder. When the contents of a KieFileSystem are successfully built, the resulting KieModule is automatically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules.
16.3. Building with Maven
16.3.1. The kmodule
The descriptor of a Kie project is META-INF/kmodule.xml. The kmodule.xml file selects resources into knowledge bases and configures those knowledge bases and sessions. There is also alternative XML support via Spring and OSGi BluePrints.
Kie follows a convention-over-configuration approach, an empty kmodule.xml being the simplest configuration. There must always be a kmodule.xml file, even if empty, as it is used for discovery of the JAR and its contents.
Use the mvn install command to deploy a KieModule to the local machine, where all other applications on the local machine can use it, or use the mvn deploy command to push the KieModule to a remote Maven repository. Building the application will pull in the KieModule and populate the local Maven repository in the process.
At runtime, Kie scans the classpath for JARs containing a kmodule.xml in them. Each found JAR is represented by the KieModule interface. The terms classpath KieModule and dynamic KieModule are used to refer to the two loading approaches. While dynamic modules support side by side versioning, classpath modules do not. Further, once a module is on the classpath, no other version may be loaded dynamically.
The kmodule.xml file allows you to define and configure one or more KieBases and, for each KieBase, all the different KieSessions that can be created from it, as shown in the following example:
Example 16.4. A sample kmodule.xml file
<kmodule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://jboss.org/kie/6.0.0/kmodule">
<kbase name="KBase1" default="true" eventProcessingMode="cloud" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg1">
<ksession name="KSession2_1" type="stateful" default="true/">
<ksession name="KSession2_1" type="stateless" default="false/" beliefSystem="jtms">
</kbase>
<kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1">
<ksession name="KSession2_1" type="stateful" default="false" clockType="realtime">
<fileLogger file="drools.log" threaded="true" interval="10"/>
<workItemHandlers>
<workItemHandler name="name" type="new org.domain.WorkItemHandler()"/>
</workItemHandlers>
<listeners>
<ruleRuntimeEventListener type="org.domain.RuleRuntimeListener"/>
<agendaEventListener type="org.domain.FirstAgendaListener"/>
<agendaEventListener type="org.domain.SecondAgendaListener"/>
<processEventListener type="org.domain.ProcessListener"/>
</listeners>
</ksession>
</kbase>
</kmodule>
In this example two KieBases have been defined, and it is possible to instantiate two different types of KieSessions from the first one, while only one from the second.
16.3.2. Creating a KIE Project
The kmodule.xml file defines the KieBases and KieSessions that can be created from the project. This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Kie artifacts, such as DRL or Excel files, must be stored in the resources folder or in any other subfolder under it.
Example 16.5. An empty kmodule.xml file
<?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule"/>
Since defaults are provided, an empty kmodule.xml declares a single default KieBase. All Kie assets stored under the resources folder, or any of its subfolders, will be compiled and added to it. To trigger the building of these artifacts it is enough to create a KieContainer for them.
16.3.3. Creating a KIE Container
Create a KieContainer that reads the files built from the classpath:
Example 16.6. Creating a KieContainer from the classpath
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
Example 16.7. Retriving KieBases and KieSessions from the KieContainer
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.KieBase;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.StatelessKieSession;
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
KieBase kBase1 = kContainer.getKieBase("KBase1");
KieSession kieSession1 = kContainer.newKieSession("KSession2_1");
StatelessKieSession kieSession2 = kContainer.newStatelessKieSession("KSession2_2");
The KieSessions are returned from the KieContainer according to their declared type. If the type of the KieSession requested from the KieContainer doesn't correspond with the one declared in the kmodule.xml file, the KieContainer will throw a RuntimeException. Also, since a KieBase and a KieSession have been flagged as default, it is possible to get them from the KieContainer without passing any name.
Example 16.8. Retriving default KieBases and KieSessions from the KieContainer
import org.kie.api.runtime.KieContainer;
import org.kie.api.KieBase;
import org.kie.api.runtime.KieSession;

KieContainer kContainer = ...
KieBase kBase1 = kContainer.getKieBase(); // returns KBase1
KieSession kieSession1 = kContainer.newKieSession(); // returns KSession2_1
A Kie project, being a Maven project, declares a group, artifact, and version which form the ReleaseId that uniquely identifies this project inside your application. This allows creation of a new KieContainer from the project by simply passing its ReleaseId to the KieServices.
Example 16.9. Creating a KieContainer of an existing project by ReleaseId
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0" );
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
16.3.4. KieServices
KieServices is the interface from which it is possible to access all the Kie building and runtime facilities:
16.3.5. KIE Plug-in
To use the KIE Maven plug-in for building a Kie project, add it to the pom.xml as shown below:
Example 16.10. Adding the KIE plug-in to a Maven pom.xml
<build>
<plugins>
<plugin>
<groupId>org.kie</groupId>
<artifactId>kie-maven-plugin</artifactId>
<version>${project.version}</version>
<extensions>true</extensions>
</plugin>
</plugins>
</build>
Note
Without the plug-in, the Kie artifacts are compiled only when they are loaded at runtime in a KieContainer. This also pushes the compilation overhead to the runtime. Hence it is recommended that you use the Maven plug-in.
Note
Projects containing decision tables also require the dependency org.drools:drools-decisiontables, and projects containing processes require org.jbpm:jbpm-bpmn2.
16.4. KIE Deployment
16.4.1. KieRepository
When the contents of a KieFileSystem are successfully built, the resulting KieModule is automatically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules.
It is then possible to ask the KieServices for a new KieContainer for that KieModule using its ReleaseId. However, since in this case the KieFileSystem does not contain any pom.xml file (it is possible to add one using the KieFileSystem.writePomXML method), Kie cannot determine the ReleaseId of the KieModule and assigns a default one to it. This default ReleaseId can be obtained from the KieRepository and used to identify the KieModule inside the KieRepository itself. The following example shows this whole process.
Example 16.11. Building the contents of a KieFileSystem and creating a KieContainer
import org.kie.api.KieServices;
import org.kie.api.KieServices.Factory;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.KieBuilder;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
KieFileSystem kfs = ...
kieServices.newKieBuilder( kfs ).buildAll();
KieContainer kieContainer =
    kieServices.newKieContainer(kieServices.getRepository().getDefaultReleaseId());
At this point it is possible to get KieBases and create new KieSessions from this KieContainer exactly in the same way as in the case of a KieContainer created directly from the classpath.
KieBuilder reports compilation results of 3 different severities: ERROR, WARNING and INFO. An ERROR indicates that the compilation of the project failed, and in that case no KieModule is produced and nothing is added to the KieRepository. WARNING and INFO results can be ignored, but are available for inspection.
Example 16.12. Checking that a compilation didn't produce any error
import org.kie.api.builder.KieBuilder;
import org.kie.api.builder.Message;
import org.kie.api.KieServices;

KieBuilder kieBuilder = kieServices.newKieBuilder( kfs ).buildAll();
assertEquals( 0, kieBuilder.getResults().getMessages( Message.Level.ERROR ).size() );
16.4.2. Session Modification
KieBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. The KieBase can be obtained from the KieContainer containing the KieModule where the KieBase has been defined.
Sometimes the KieBase needs to resolve types that are not in the default class loader. In this case it will be necessary to create a KieBaseConfiguration with an additional class loader and pass it to the KieContainer when creating a new KieBase from it.
Example 16.13. Creating a new KieBase with a custom ClassLoader
import org.kie.api.KieServices;
import org.kie.api.KieServices.Factory;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieBase;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
KieBaseConfiguration kbaseConf =
    kieServices.newKieBaseConfiguration( null, MyType.class.getClassLoader() );
KieBase kbase = kieContainer.newKieBase( kbaseConf );
The KieBase creates and returns KieSession objects, and it may optionally keep references to those. When KieBase modifications occur, those modifications are applied against the data in the sessions. This reference is a weak reference and it is also optional, which is controlled by a boolean flag.
Note
The WebLogic application server packs the contents of WEB-INF/classes into WEB-INF/lib/_wl_cls_gen.jar. So when you use KIE-Spring to create KieBase and KieSession from resources stored under WEB-INF/classes, KIE-Spring fails to locate these resources. For this reason, the recommended deployment method in WebLogic is to use the exploded archives contained within the product's ZIP file.
16.4.3. KieScanner
KieScanner allows continuous monitoring of your Maven repository to check whether a new release of a KIE project has been installed. A new release is deployed in the KieContainer wrapping that project. The use of the KieScanner requires kie-ci.jar to be on the classpath.
KieScanner can be registered on a KieContainer as in the following example.
Example 16.14. Registering and starting a KieScanner on a KieContainer
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.builder.KieScanner;
...
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0-SNAPSHOT" );
KieContainer kContainer = kieServices.newKieContainer( releaseId );
KieScanner kScanner = kieServices.newKieScanner( kContainer );
// Start the KieScanner polling the Maven repository every 10 seconds
kScanner.start( 10000L );
KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it. If the KieScanner finds in the Maven repository an updated version of the KIE project used by that KieContainer it automatically downloads the new version and triggers an incremental build of the new project. From this moment all the new KieBases and KieSessions created from that KieContainer will use the new project version.
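Continuing the previous example, a scan can also be triggered on demand and the background polling stopped when it is no longer needed:
import org.kie.api.builder.KieScanner;

// ...
// Trigger a single, on-demand check of the Maven repository.
kScanner.scanNow();

// Stop the background polling when the container is no longer needed.
kScanner.stop();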
Maven Settings
Configure the repository in the Maven settings.xml file with the updatePolicy set to always, as shown in the following example:
<profile>
<id>guvnor-m2-repo</id>
<repositories>
<repository>
<id>guvnor-m2-repo</id>
<name>BRMS Repository</name>
<url>http://10.10.10.10:8080/business-central/maven2/</url>
<layout>default</layout>
<releases>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
</snapshots>
</repository>
</repositories>
</profile>
16.5. Running in KIE
16.5.1. KieRuntime
KieRuntime provides methods that are applicable to both rules and processes, such as setting globals and registering channels. ("Exit point" is an obsolete synonym for "channel".)
16.5.2. Globals in KIE
global java.util.List list
Call ksession.setGlobal() with the global's name and an object, for any session, to associate the object with the global. Failure to declare the global type and identifier in DRL code will result in an exception being thrown from this call.
List list = new ArrayList();
ksession.setGlobal("list", list);NullPointerException.
16.5.3. Event Packages
KieRuntimeEventManager interface is implemented by the KieRuntime which provides two interfaces, RuleRuntimeEventManager and ProcessEventManager. We will only cover the RuleRuntimeEventManager here.
RuleRuntimeEventManager allows for listeners to be added and removed, so that events for the working memory and the agenda can be listened to.
Example 16.15. Adding an AgendaEventListener
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.kie.api.event.rule.AfterMatchFiredEvent;
ksession.addEventListener( new DefaultAgendaEventListener() {
public void afterMatchFired(AfterMatchFiredEvent event) {
super.afterMatchFired( event );
System.out.println( event );
}
});
Drools also provides DebugRuleRuntimeEventListener and DebugAgendaEventListener, which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this:
Example 16.16. Adding a DebugRuleRuntimeEventListener
ksession.addEventListener( new DebugRuleRuntimeEventListener() );
All emitted events implement the KieRuntimeEvent interface, which can be used to retrieve the actual KnowledgeRuntime the event originated from. The supported events are:
- MatchCreatedEvent
- MatchCancelledEvent
- BeforeMatchFiredEvent
- AfterMatchFiredEvent
- AgendaGroupPushedEvent
- AgendaGroupPoppedEvent
- ObjectInsertedEvent
- ObjectDeletedEvent
- ObjectUpdatedEvent
- ProcessCompletedEvent
- ProcessNodeLeftEvent
- ProcessNodeTriggeredEvent
- ProcessStartedEvent
16.5.4. Logger Implementations
- Console logger: This logger writes out all the events to the console. The KieServices provides you a KieRuntimeLogger that you can add to your session. When you create a console logger, pass the knowledge session as an argument.
- File logger: This logger writes out all the events to a file using an XML representation. You can use this log file in your IDE to generate a tree-based visualization of the events that occur during execution. For the file logger, you need to provide the name of the log file.
- Threaded file logger: As a file logger writes the events to disk only when closing the logger or when the number of events in the logger reaches a predefined level, you cannot use it when debugging processes at runtime. A threaded file logger writes the events to a file after a specified time interval, making it possible to use the logger to visualize the progress in real-time while debugging processes. For the threaded file logger, you need to provide the interval (in milliseconds) after which the events must be saved. You must always close the logger at the end of your application. (A sketch of the console and threaded variants follows Example 16.17.)
Example 16.17. FileLogger
import org.kie.api.KieServices;
import org.kie.api.logger.KieRuntimeLogger;
...
KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, "test");
// add invocations to the process engine here,
// e.g. ksession.startProcess(processId);
...
logger.close();
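For comparison, the following sketch (assuming an existing session named ksession; the log file name and interval are illustrative) creates the console and threaded file logger variants:
import org.kie.api.KieServices;
import org.kie.api.logger.KieRuntimeLogger;

KieServices kieServices = KieServices.Factory.get();

// Console logger: events are printed to the console as they happen.
KieRuntimeLogger consoleLogger =
        kieServices.getLoggers().newConsoleLogger(ksession);

// Threaded file logger: events are flushed to the "audit" file every 1000 ms,
// which makes the log usable while debugging a running process.
KieRuntimeLogger threadedLogger =
        kieServices.getLoggers().newThreadedFileLogger(ksession, "audit", 1000);

// ... interact with the session ...

threadedLogger.close();
consoleLogger.close();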
16.5.5. CommandExecutor Interface
Both the stateful and stateless session interfaces extend the CommandExecutor interface. Its execute method runs one command, or a batch of commands, and returns an ExecutionResults:
The CommandExecutor allows for commands to be executed on those sessions, the only difference being that the StatelessKieSession executes fireAllRules() at the end before disposing the session. The commands can be created using the CommandFactory. The Javadocs provide the full list of the commands accepted by the CommandExecutor.
The set global and get global commands can publish the global's value as part of the ExecutionResults. If the boolean parameter is true, the result uses the same name as the global name. A String can be used instead of the boolean, if an alternative name is desired.
Example 16.18. Set Global Command
import org.kie.api.runtime.StatelessKieSession;
import org.kie.api.runtime.ExecutionResults;
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newSetGlobal( "stilton", new Cheese( "stilton" ), true ) );
Cheese stilton = bresults.getValue( "stilton" );
Example 16.19. Get Global Command
import org.kie.api.runtime.StatelessKieSession;
import org.kie.api.runtime.ExecutionResults;
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newGetGlobal( "stilton" ) );
Cheese stilton = bresults.getValue( "stilton" );
BatchExecution represents a composite command, created from a list of commands. It will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful.
The StatelessKieSession executes fireAllRules() automatically at the end. However, the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.
Any command that has an out identifier set adds its results to the returned ExecutionResults instance.
Example 16.20. BatchExecution Command
import org.kie.api.runtime.StatelessKieSession;
import org.kie.api.runtime.ExecutionResults;

StatelessKieSession ksession = kbase.newStatelessKieSession();

List cmds = new ArrayList();
cmds.add( CommandFactory.newInsert( new Cheese( "stilton", 1 ), "stilton" ) );
cmds.add( CommandFactory.newStartProcess( "process cheeses" ) );
cmds.add( CommandFactory.newQuery( "cheeses" ) );
ExecutionResults bresults = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
Cheese stilton = ( Cheese ) bresults.getValue( "stilton" );
QueryResults qresults = ( QueryResults ) bresults.getValue( "cheeses" );
Query results are mapped to an identifier in the ExecutionResults. The query command defaults to using the same identifier as the query name, but it can also be mapped to a different identifier.
16.5.6. Available API
XStream
To marshal commands with XStream, use the DroolsHelperProvider to obtain an XStream instance. It is required because it has the commands converters registered. Also ensure that the drools-compiler library is present on the classpath.
- Marshalling: BatchExecutionHelper.newXStreamMarshaller().toXML(command);
- Unmarshalling: BatchExecutionHelperProviderImpl.newXStreamMarshaller().fromXML(xml)
The fully-qualified class name of the BatchExecutionHelper class is org.kie.internal.runtime.helper.BatchExecutionHelper.
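As a sketch, assuming command is a BatchExecutionCommand assembled with the CommandFactory and the class location given above, a round trip through the XStream marshaller looks like this:
import org.kie.internal.runtime.helper.BatchExecutionHelper;

// Convert the command to its XML representation.
String xml = BatchExecutionHelper.newXStreamMarshaller().toXML(command);

// The same marshaller converts an XML payload back into a command object.
Object unmarshalledCommand = BatchExecutionHelper.newXStreamMarshaller().fromXML(xml);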
JSON
- Marshalling: BatchExecutionHelper.newJSonMarshaller().toXML(command);
- Unmarshalling: BatchExecutionHelper.newJSonMarshaller().fromXML(xml)
JAXB
The JAXB marshaller requires a JAXBContext. In order to create it, you need to use the Drools helper classes. Once you have the JAXBContext, you need to create the Unmarshaller/Marshaller as needed.
Using an XSD file to define the model
With this approach you add the XSD file defining the model as a resource of type XSD into the KnowledgeBuilder, so that the generated classes become part of the KBase. Finally you can create the JAXBContext using the KBase (created with the KnowledgeBuilder).
Ensure that the drools-compiler and jaxb-xjc libraries are present on the classpath.
import org.kie.api.io.ResourceType;
import org.kie.api.KieBase;

Options xjcOpts = new Options();
xjcOpts.setSchemaLanguage(Language.XMLSCHEMA);
JaxbConfiguration jaxbConfiguration = KnowledgeBuilderFactory.newJaxbConfiguration( xjcOpts, "xsd" );
// kbuilder is a KnowledgeBuilder obtained from the KnowledgeBuilderFactory
kbuilder.add(ResourceFactory.newClassPathResource("person.xsd", getClass()), ResourceType.XSD, jaxbConfiguration);
KieBase kbase = kbuilder.newKnowledgeBase();

List<String> classesName = new ArrayList<String>();
classesName.add("org.drools.compiler.test.Person");

JAXBContext jaxbContext = KnowledgeBuilderHelper.newJAXBContext(classesName.toArray(new String[classesName.size()]), kbase);
Using a POJO model
DroolsJaxbHelperProviderImpl to create the JAXBContext. This class has two parameters:
- classNames: A list with the canonical name of the classes that you want to use in the marshalling/unmarshalling process.
- properties: JAXB custom properties
List<String> classNames = new ArrayList<String>();
classNames.add("org.drools.compiler.test.Person");
JAXBContext jaxbContext = DroolsJaxbHelperProviderImpl.createDroolsJaxbContext(classNames, null);
Marshaller marshaller = jaxbContext.createMarshaller();
Ensure that the drools-compiler and jaxb-xjc libraries are present on the classpath. The fully-qualified class name of the DroolsJaxbHelperProviderImpl class is org.drools.compiler.runtime.pipeline.impl.DroolsJaxbHelperProviderImpl.
16.5.7. Supported JBoss BRMS Commands
- BatchExecutionCommand
- InsertObjectCommand
- RetractCommand
- ModifyCommand
- GetObjectCommand
- InsertElementsCommand
- FireAllRulesCommand
- StartProcessCommand
- SignalEventCommand
- CompleteWorkItemCommand
- AbortWorkItemCommand
- QueryCommand
- SetGlobalCommand
- GetGlobalCommand
- GetObjectsCommand
Note
The examples in this section use a POJO org.drools.compiler.test.Person with the following fields:
- name: String
- age: Integer
16.5.7.1. BatchExecutionCommand
BatchExecutionCommand command contains a list of commands that are sent to the Decision Server and executed. It has the following attributes:
Table 16.3. BatchExecutionCommand Attributes
| Name | Description | Required |
|---|---|---|
| lookup | Sets the knowledge session id on which the commands are going to be executed | true |
| commands | List of commands to be executed | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
InsertObjectCommand insertObjectCommand = new InsertObjectCommand(new Person("john", 25));
FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();
command.getCommands().add(insertObjectCommand);
command.getCommands().add(fireAllRulesCommand);
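To illustrate how such a command is used from Java (assuming a KieSession named ksession), the assembled batch can be executed and its results inspected as follows:
import org.kie.api.runtime.ExecutionResults;

// Execute the assembled batch on the session and read the results.
// In the KIE API, a BatchExecutionCommand is a Command<ExecutionResults>,
// so the execute call returns the collected execution results.
ExecutionResults results = ksession.execute(command);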
XStream:
<batch-execution lookup="ksession1">
<insert>
<org.drools.compiler.test.Person>
<name>john</name>
<age>25</age>
</org.drools.compiler.test.Person>
</insert>
<fire-all-rules/>
</batch-execution>
JSON:
{"lookup":"ksession1","commands":[{"insert":{"object":{"org.drools.compiler.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<insert>
<object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<age>25</age>
<name>john</name>
</object>
</insert>
<fire-all-rules max="-1"/>
</batch-execution>
16.5.7.2. InsertObjectCommand
InsertObjectCommand command is used to insert an object in the knowledge session. It has the following attributes:
Table 16.4. InsertObjectCommand Attributes
| Name | Description | Required |
|---|---|---|
| object | The object to be inserted | true |
| outIdentifier | Id to identify the FactHandle created in the object insertion and added to the execution results | false |
| returnObject | Boolean to establish if the object must be returned in the execution results. Default value: true | false |
| entryPoint | Entrypoint for the insertion | false |
List<Command> cmds = new ArrayList<Command>();
Command insertObjectCommand = CommandFactory.newInsert(new Person("john", 25), "john", false, null);
cmds.add( insertObjectCommand );
BatchExecutionCommand command = CommandFactory.createBatchExecution(cmds, "ksession1" );
XStream:
<batch-execution lookup="ksession1">
<insert out-identifier="john" entry-point="my stream" return-object="false">
<org.drools.compiler.test.Person>
<name>john</name>
<age>25</age>
</org.drools.compiler.test.Person>
</insert>
</batch-execution>
JSON:
{"lookup":"ksession1","commands":{"insert":{"entry-point":"my stream", "out-identifier":"john","return-object":false,"object":{"org.drools.compiler.test.Person":{"name":"john","age":25}}}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<insert out-identifier="john" entry-point="my stream" >
<object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<age>25</age>
<name>john</name>
</object>
</insert>
</batch-execution>
16.5.7.3. RetractCommand
RetractCommand command is used to retract an object from the knowledge session. It has the following attributes:
Table 16.5. RetractCommand Attributes
| Name | Description | Required |
|---|---|---|
| handle | The FactHandle associated to the object to be retracted | true |
There are two ways to create RetractCommand. You can either create the Fact Handle from a string, with the same output result as shown below:
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
RetractCommand retractCommand = new RetractCommand();
retractCommand.setFactHandleFromString("123:234:345:456:567");
command.getCommands().add(retractCommand);
Or set the Fact Handle that you received when the object was inserted, as shown below:
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
RetractCommand retractCommand = new RetractCommand(factHandle);
command.getCommands().add(retractCommand);
XStream:
<batch-execution lookup="ksession1"> <retract fact-handle="0:234:345:456:567"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"retract":{"fact-handle":"0:234:345:456:567"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<retract fact-handle="0:234:345:456:567"/>
</batch-execution>
16.5.7.4. ModifyCommand
ModifyCommand command allows you to modify a previously inserted object in the knowledge session. It has the following attributes:
Table 16.6. ModifyCommand Attributes
| Name | Description | Required |
|---|---|---|
| handle | The FactHandle associated to the object to be modified | true |
| setters | List of setters with the object's modifications | true |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
ModifyCommand modifyCommand = new ModifyCommand();
modifyCommand.setFactHandleFromString("123:234:345:456:567");
List<Setter> setters = new ArrayList<Setter>();
setters.add(new SetterImpl("age", "30"));
modifyCommand.setSetters(setters);
command.getCommands().add(modifyCommand);
XStream:
<batch-execution lookup="ksession1">
<modify fact-handle="0:234:345:456:567">
<set accessor="age" value="30"/>
</modify>
</batch-execution>
JSON:
{"lookup":"ksession1","commands":{"modify":{"fact-handle":"0:234:345:456:567","setters":{"accessor":"age","value":30}}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<modify fact-handle="0:234:345:456:567">
<set value="30" accessor="age"/>
</modify>
</batch-execution>
16.5.7.5. GetObjectCommand
GetObjectCommand command is used to get an object from a knowledge session. It has the following attributes:
Table 16.7. GetObjectCommand Attributes
| Name | Description | Required |
|---|---|---|
| factHandle | The FactHandle associated to the object to be retrieved | true |
| outIdentifier | Id to identify the FactHandle created in the object insertion and added to the execution results | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
GetObjectCommand getObjectCommand = new GetObjectCommand();
getObjectCommand.setFactHandleFromString("123:234:345:456:567");
getObjectCommand.setOutIdentifier("john");
command.getCommands().add(getObjectCommand);
XStream:
<batch-execution lookup="ksession1"> <get-object fact-handle="0:234:345:456:567" out-identifier="john"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"get-object":{"fact-handle":"0:234:345:456:567","out-identifier":"john"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<get-object out-identifier="john" fact-handle="0:234:345:456:567"/>
</batch-execution>
16.5.7.6. InsertElementsCommand
InsertElementsCommand command is used to insert a list of objects. It has the following attributes:
Table 16.8. InsertElementsCommand Attributes
| Name | Description | Required |
|---|---|---|
| objects | The list of objects to be inserted on the knowledge session | true |
| outIdentifier | Id to identify the FactHandle created in the object insertion and added to the execution results | false |
| returnObject | Boolean to establish if the object must be returned in the execution results. Default value: true | false |
| entryPoint | Entrypoint for the insertion | false |
List<Command> cmds = new ArrayList<Command>();
List<Object> objects = new ArrayList<Object>();
objects.add(new Person("john", 25));
objects.add(new Person("sarah", 35));
Command insertElementsCommand = CommandFactory.newInsertElements( objects );
cmds.add( insertElementsCommand );
BatchExecutionCommand command = CommandFactory.createBatchExecution(cmds, "ksession1" );
XStream:
<batch-execution lookup="ksession1">
<insert-elements>
<org.drools.compiler.test.Person>
<name>john</name>
<age>25</age>
</org.drools.compiler.test.Person>
<org.drools.compiler.test.Person>
<name>sarah</name>
<age>35</age>
</org.drools.compiler.test.Person>
</insert-elements>
</batch-execution>
JSON:
{"lookup":"ksession1","commands":{"insert-elements":{"objects":[{"containedObject":{"@class":"org.drools.compiler.test.Person","name":"john","age":25}},{"containedObject":{"@class":"Person","name":"sarah","age":35}}]}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<insert-elements return-objects="true">
<list>
<element xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<age>25</age>
<name>john</name>
</element>
<element xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<age>35</age>
<name>sarah</name>
</element>
</list>
</insert-elements>
</batch-execution>
16.5.7.7. FireAllRulesCommand
FireAllRulesCommand command is used to execute the rule activations that have been created. It has the following attributes:
Table 16.9. FireAllRulesCommand Attributes
| Name | Description | Required |
|---|---|---|
| max | The maximum number of rule activations to be executed. The default is -1, which places no restriction on execution | false |
| outIdentifier | Adds the number of rule activations fired to the execution results | false |
| agendaFilter | Allows rule execution using an AgendaFilter | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();
fireAllRulesCommand.setMax(10);
fireAllRulesCommand.setOutIdentifier("firedActivations");
command.getCommands().add(fireAllRulesCommand);
XStream:
<batch-execution lookup="ksession1"> <fire-all-rules max="10" out-identifier="firedActivations"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"fire-all-rules":{"max":10,"out-identifier":"firedActivations"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <batch-execution lookup="ksession1"> <fire-all-rules out-identifier="firedActivations" max="10"/> </batch-execution>
16.5.7.8. StartProcessCommand
StartProcessCommand command allows you to start a process using the ID. Additionally, you can pass parameters and initial data to be inserted. It has the following attributes:
Table 16.10. StartProcessCommand Attributes
| Name | Description | Required |
|---|---|---|
| processId | The ID of the process to be started | true |
| parameters | A Map<String, Object> to pass parameters in the process startup | false |
| data | A list of objects to be inserted in the knowledge session before the process startup | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
StartProcessCommand startProcessCommand = new StartProcessCommand();
startProcessCommand.setProcessId("org.drools.task.processOne");
command.getCommands().add(startProcessCommand);
XStream:
<batch-execution lookup="ksession1"> <start-process processId="org.drools.task.processOne"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"start-process":{"process-id":"org.drools.task.processOne"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<start-process processId="org.drools.task.processOne">
<parameter/>
</start-process>
</batch-execution>
16.5.7.9. SignalEventCommand
SignalEventCommand command is used to send a signal event. It has the following attributes:
Table 16.11. SignalEventCommand Attributes
| Name | Description | Required |
|---|---|---|
| event-type | The type of the incoming event | true |
| processInstanceId | The ID of the process instance to be signalled | false |
| event | The name of the incoming event | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
SignalEventCommand signalEventCommand = new SignalEventCommand();
signalEventCommand.setProcessInstanceId(1001);
signalEventCommand.setEventType("start");
signalEventCommand.setEvent(new Person("john", 25));
command.getCommands().add(signalEventCommand);
XStream:
<batch-execution lookup="ksession1">
<signal-event process-instance-id="1001" event-type="start">
<org.drools.pipeline.camel.Person>
<name>john</name>
<age>25</age>
</org.drools.pipeline.camel.Person>
</signal-event>
</batch-execution>
JSON:
{"lookup":"ksession1","commands":{"signal-event":{"process-instance-id":1001,"@event-type":"start","event-type":"start","object":{"org.drools.pipeline.camel.Person":{"name":"john","age":25}}}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<signal-event event-type="start" process-instance-id="1001">
<event xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<age>25</age>
<name>john</name>
</event>
</signal-event>
</batch-execution>
16.5.7.10. CompleteWorkItemCommand
CompleteWorkItemCommand command allows you to complete a WorkItem. It has the following attributes:
Table 16.12. CompleteWorkItemCommand Attributes
| Name | Description | Required |
|---|---|---|
| workItemId | The ID of the WorkItem to be completed | true |
| results | The result of the WorkItem | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
CompleteWorkItemCommand completeWorkItemCommand = new CompleteWorkItemCommand();
completeWorkItemCommand.setWorkItemId(1001);
command.getCommands().add(completeWorkItemCommand);
XStream:
<batch-execution lookup="ksession1"> <complete-work-item id="1001"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"complete-work-item":{"id":1001}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<complete-work-item id="1001"/>
</batch-execution>
16.5.7.11. AbortWorkItemCommand
AbortWorkItemCommand command allows you to abort a WorkItem (same as session.getWorkItemManager().abortWorkItem(workItemId)). It has the following attributes:
Table 16.13. AbortWorkItemCommand Attributes
| Name | Description | Required |
|---|---|---|
| workItemId | The ID of the WorkItem to be aborted | true |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
AbortWorkItemCommand abortWorkItemCommand = new AbortWorkItemCommand();
abortWorkItemCommand.setWorkItemId(1001);
command.getCommands().add(abortWorkItemCommand);
XStream:
<batch-execution lookup="ksession1"> <abort-work-item id="1001"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"abort-work-item":{"id":1001}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<abort-work-item id="1001"/>
</batch-execution>
16.5.7.12. QueryCommand
QueryCommand command executes a query defined in knowledge base. It has the following attributes:
Table 16.14. QueryCommand Attributes
| Name | Description | Required |
|---|---|---|
| name | The query name | true |
| outIdentifier | The identifier of the query results. The query results are going to be added in the execution results with this identifier | false |
| arguments | A list of objects to be passed as a query parameter | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
QueryCommand queryCommand = new QueryCommand();
queryCommand.setName("persons");
queryCommand.setOutIdentifier("persons");
command.getCommands().add(queryCommand);
XStream:
<batch-execution lookup="ksession1"> <query out-identifier="persons" name="persons"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"query":{"out-identifier":"persons","name":"persons"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<query name="persons" out-identifier="persons"/>
</batch-execution>
16.5.7.13. SetGlobalCommand
SetGlobalCommand command allows you to set an object to global state. It has the following attributes:
Table 16.15. SetGlobalCommand Attributes
| Name | Description | Required |
|---|---|---|
| identifier | The identifier of the global defined in the knowledge base | true |
| object | The object to be set into the global | false |
| out | A boolean to add, or not, the set global result into the execution results | false |
| outIdentifier | The identifier of the global execution result | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
SetGlobalCommand setGlobalCommand = new SetGlobalCommand();
setGlobalCommand.setIdentifier("helper");
setGlobalCommand.setObject(new Person("kyle", 30));
setGlobalCommand.setOut(true);
setGlobalCommand.setOutIdentifier("output");
command.getCommands().add(setGlobalCommand);
XStream:
<batch-execution lookup="ksession1">
<set-global identifier="helper" out-identifier="output">
<org.drools.compiler.test.Person>
<name>kyle</name>
<age>30</age>
</org.drools.compiler.test.Person>
</set-global>
</batch-execution>
JSON:
{"lookup":"ksession1","commands":{"set-global":{"identifier":"helper","out-identifier":"output","object":{"org.drools.compiler.test.Person":{"name":"kyle","age":30}}}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<set-global out="true" out-identifier="output" identifier="helper">
<object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<age>30</age>
<name>kyle</name>
</object>
</set-global>
</batch-execution>
16.5.7.14. GetGlobalCommand
GetGlobalCommand command allows you to get a previously defined global object. It has the following attributes:
Table 16.16. GetGlobalCommand Attributes
| Name | Description | Required |
|---|---|---|
| identifier | The identifier of the global defined in the knowledge base | true |
| outIdentifier | The identifier to be used in the execution results | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
GetGlobalCommand getGlobalCommand = new GetGlobalCommand();
getGlobalCommand.setIdentifier("helper");
getGlobalCommand.setOutIdentifier("helperOutput");
command.getCommands().add(getGlobalCommand);
XStream:
<batch-execution lookup="ksession1"> <get-global identifier="helper" out-identifier="helperOutput"/> </batch-execution> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"get-global":{"identifier":"helper","out-identifier":"helperOutput"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<get-global out-identifier="helperOutput" identifier="helper"/>
</batch-execution>
16.5.7.15. GetObjectsCommand
GetObjectsCommand command returns all the objects from the current session as a Collection. It has the following attributes:
Table 16.17. GetObjectsCommand Attributes
| Name | Description | Required |
|---|---|---|
| objectFilter | An ObjectFilter to filter the objects returned from the current session | false |
| outIdentifier | The identifier to be used in the execution results | false |
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
GetObjectsCommand getObjectsCommand = new GetObjectsCommand();
getObjectsCommand.setOutIdentifier("objects");
command.getCommands().add(getObjectsCommand);
XStream:
<batch-execution lookup="ksession1"> <get-objects out-identifier="objects"/> </batch-execution>JSON:
{"lookup":"ksession1","commands":{"get-objects":{"out-identifier":"objects"}}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
<get-objects out-identifier="objects"/>
</batch-execution>
16.6. KIE Configuration
16.6.1. Build Result Severity
Example 16.21. Setting the severity using properties
// sets the severity of rule updates
drools.kbuilder.severity.duplicateRule = <INFO|WARNING|ERROR>
// sets the severity of function updates
drools.kbuilder.severity.duplicateFunction = <INFO|WARNING|ERROR>
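One way to apply these settings programmatically is to set them as system properties before the builder is created. This is a sketch and assumes the builder picks the values up from system properties, as it does for other drools.* settings:
// Treat duplicate rules as build errors and duplicate functions as warnings.
System.setProperty("drools.kbuilder.severity.duplicateRule", "ERROR");
System.setProperty("drools.kbuilder.severity.duplicateFunction", "WARNING");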
16.6.2. StatelessKieSession
StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on the decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertions and the method call fireAllRules() from Java code; the act of calling execute() is a single-shot method that will internally instantiate a KieSession, add all the user data and execute user commands, call fireAllRules(), and then call dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are talked about in detail in their own section.
Example 16.22. Simple StatelessKieSession execution with a Collection
import org.kie.api.runtime.StatelessKieSession;

StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute( collection );
Example 16.23. Simple StatelessKieSession execution with InsertElements Command
ksession.execute( CommandFactory.newInsertElements( collection ) );
If you want to insert the collection itself, rather than iterating it and inserting each of its elements, CommandFactory.newInsert( collection ) would do the job.
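As a short sketch (assuming an existing collection variable), the two calls differ as follows:
import org.kie.api.command.Command;
import org.kie.internal.command.CommandFactory;

// Inserts each element of the collection as a separate fact.
Command insertElements = CommandFactory.newInsertElements( collection );

// Inserts the collection object itself as a single fact.
Command insertCollection = CommandFactory.newInsert( collection );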
StatelessKieSession supports globals, scoped in a number of ways. We cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.
- The StatelessKieSession method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared for all execution calls. Exercise caution regarding mutable globals because execution calls can be executing simultaneously in different threads.
Example 16.24. Session scoped global
import org.kie.api.runtime.StatelessKieSession;

StatelessKieSession ksession = kbase.newStatelessKieSession();
// Set a global hbnSession, that can be used for DB interactions in the rules.
ksession.setGlobal( "hbnSession", hibernateSession );
// Execute while being able to resolve the "hbnSession" identifier.
ksession.execute( collection );
- Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection mapping identifiers to values. Identifiers in this internal collection will have priority over any supplied delegate. Only if an identifier cannot be found in this internal collection, the delegate global (if any) will be used.
- The third way of resolving globals is to have execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor.
The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals and query results can all be returned.
Example 16.25. Out identifiers
import org.kie.api.runtime.ExecutionResults;

// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );

// Execute the list
ExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( cmds ) );

// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );
16.6.3. Marshalling
KieMarshallers are used to marshal and unmarshal KieSessions.
KieMarshallers can be retrieved from the KieServices. A simple example is shown below:
Example 16.26. Simple Marshaller Example
import org.kie.api.runtime.KieSession; import org.kie.api.KieBase; import org.kie.api.marshalling.Marshaller; // ksession is the KieSession // kbase is the KieBase ByteArrayOutputStream baos = new ByteArrayOutputStream(); Marshaller marshaller = KieServices.Factory.get().getMarshallers().newMarshaller( kbase ); marshaller.marshall( baos, ksession ); baos.close();
The marshalling framework is based on the ObjectMarshallingStrategy interface. Two implementations are provided, but users can implement their own. The two supplied strategies are IdentityMarshallingStrategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as shown in the example above, and it just calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy creates an integer id for each user object and stores them in a Map, while the id is written to the stream. When unmarshalling it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that if you use the IdentityMarshallingStrategy, it is stateful for the life of the Marshaller instance and will create ids and keep references to all objects that it attempts to marshal. Below is the code to use an Identity Marshalling Strategy.
Example 16.27. IdentityMarshallingStrategy
import org.kie.api.marshalling.KieMarshallers;
import org.kie.api.marshalling.ObjectMarshallingStrategy;
import org.kie.api.marshalling.Marshaller;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategy oms = kMarshallers.newIdentityMarshallingStrategy();
Marshaller marshaller =
kMarshallers.newMarshaller( kbase, new ObjectMarshallingStrategy[]{ oms } );
marshaller.marshall( baos, ksession );
baos.close();
To control which strategy handles which objects, the ObjectMarshallingStrategyAcceptor interface can be used. This Marshaller has a chain of strategies, and while reading or writing a user object it iterates the strategies asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor. This allows strings and wild cards to be used to match class names. The default is "*.*", so in the above example the Identity Marshalling Strategy is used which has a default "*.*" acceptor.
Example 16.28. IdentityMarshallingStrategy with Acceptor
import org.kie.api.marshalling.KieMarshallers;
import org.kie.api.marshalling.ObjectMarshallingStrategy;
import org.kie.api.marshalling.Marshaller;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategyAcceptor identityAcceptor =
kMarshallers.newClassFilterAcceptor( new String[] { "org.domain.pkg1.*" } );
ObjectMarshallingStrategy identityStrategy =
kMarshallers.newIdentityMarshallingStrategy( identityAcceptor );
ObjectMarshallingStrategy sms = kMarshallers.newSerializeMarshallingStrategy();
Marshaller marshaller =
kMarshallers.newMarshaller( kbase,
new ObjectMarshallingStrategy[]{ identityStrategy, sms } );
marshaller.marshall( baos, ksession );
baos.close();
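To restore a session from the bytes written above, the same Marshaller (or one created with compatible strategies) can be used. This sketch reuses the kbase and baos variables from the previous example; exception handling is omitted for brevity, as in the examples above.
import java.io.ByteArrayInputStream;

import org.kie.api.KieServices;
import org.kie.api.marshalling.Marshaller;
import org.kie.api.runtime.KieSession;

Marshaller marshaller = KieServices.Factory.get().getMarshallers().newMarshaller( kbase );
ByteArrayInputStream bais = new ByteArrayInputStream( baos.toByteArray() );
// unmarshall returns a new KieSession populated with the serialized state
KieSession restoredSession = marshaller.unmarshall( bais );
bais.close();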
Example 16.29. Configuring a trackable timer job factory manager
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.KieServices.Factory;
import org.kie.api.runtime.conf.TimerJobFactoryOption;
KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption(TimerJobFactoryOption.get("trackable"));
KieSession ksession = kbase.newKieSession(ksconf, null);
16.6.4. KIE Persistence
Example 16.30. Simple example using transactions
import org.kie.api.KieServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.EnvironmentName;
import org.kie.api.runtime.KieSessionConfiguration;
KieServices kieServices = KieServices.Factory.get();
Environment env = kieServices.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY,
Persistence.createEntityManagerFactory( "emf-name" ) );
env.set( EnvironmentName.TRANSACTION_MANAGER,
TransactionManagerServices.getTransactionManager() );
// KieSessionConfiguration may be null, and a default will be used
KieSession ksession =
kieServices.getStoreServices().newKieSession( kbase, null, env );
int sessionId = ksession.getId();
UserTransaction ut =
(UserTransaction) new InitialContext().lookup( "java:comp/UserTransaction" );
ut.begin();
ksession.insert( data1 );
ksession.insert( data2 );
ksession.startProcess( "process1" );
ut.commit();
The environment must be set with the EntityManagerFactory and the TransactionManager. If a rollback occurs, the ksession state is also rolled back, hence it is possible to continue to use it after a rollback. To load a previously persisted KieSession you'll need the id, as shown below:
Example 16.31. Loading a KieSession
import org.kie.api.runtime.KieSession;
KieSession ksession =
kieServices.getStoreServices().loadKieSession( sessionId, kbase, null, env );
Example 16.32. Configuring JPA
<persistence-unit name="org.drools.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/BitronixJTADataSource</jta-data-source>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.BTMTransactionManagerLookup" />
</properties>
</persistence-unit>
Example 16.33. Configuring JTA DataSource
PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName( "jdbc/BitronixJTADataSource" );
ds.setClassName( "org.h2.jdbcx.JdbcDataSource" );
ds.setMaxPoolSize( 3 );
ds.setAllowLocalTransactions( true );
ds.getDriverProperties().put( "user", "sa" );
ds.getDriverProperties().put( "password", "sasa" );
ds.getDriverProperties().put( "URL", "jdbc:h2:mem:mydb" );
ds.init();
Example 16.34. JNDI properties
java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory
16.7. KIE Sessions
16.7.1. Stateless KIE Sessions
16.7.1.1. Configuring Rules in a Stateless Session
Procedure 16.1. Task
- Create a data model like the driver's license example below:
public class Applicant { private String name; private int age; private boolean valid; // getter and setter methods here }
- Write the first rule. In this example, a rule is added to disqualify any applicant younger than 18:
package com.company.license rule "Is of valid age" when $a : Applicant( age < 18 ) then $a.setValid( false ); end
- When the Applicant object is inserted into the rule engine, each rule's constraints evaluate it and search for a match. (There is always an implied constraint of "object type" after which there can be any number of explicit field constraints.) In the Is of valid age rule there are two constraints:
- The fact being matched must be of type Applicant.
- The value of age must be less than eighteen.
$a is a binding variable. It exists to make possible a reference to the matched object in the rule's consequence (from which place the object's properties can be updated).
Note
Use of the dollar sign ($) is optional. It helps to differentiate between variable names and field names. - To use this rule, save it in a file with .drl extension (for example,
licenseApplication.drl), and store it in a Kie Project. A Kie Project has the structure of a normal Maven project with an additionalkmodule.xmlfile defining the KieBases and KieSessions. Place this file in theresources/META-INFfolder of the Maven project. Store all the other artifacts, such as thelicenseApplication.drlcontaining any former rule, in the resources folder or in any other subfolder under it. - Create a
KieContainerthat reads the files to be built, from the classpath:KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer();
This compiles all the rule files found on the classpath and put the result of this compilation, aKieModule, in theKieContainer. - If there are no errors, you can go ahead and create your session from the
KieContainer and execute against some data:
StatelessKieSession kSession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant( "Mr John Smith", 16 );
assertTrue( applicant.isValid() );
kSession.execute( applicant );
assertFalse( applicant.isValid() );
The preceding code executes the data against the rules. Since the applicant is under the age of 18, the application is marked as invalid.
16.7.1.2. Configuring Rules with Multiple Objects
Procedure 16.2. Task
- To execute rules against any object implementing
Iterable (such as a collection), add another class as shown in the example code below:
public class Applicant {
    private String name;
    private int age;
    // getter and setter methods here
}

public class Application {
    private Date dateApplied;
    private boolean valid;
    // getter and setter methods here
}
- In order to check that the application was made within a legitimate time frame, add this rule:
package com.company.license

rule "Is of valid age"
when
    Applicant( age < 18 )
    $a : Application()
then
    $a.setValid( false );
end

rule "Application was made this year"
when
    $a : Application( dateApplied > "01-jan-2009" )
then
    $a.setValid( false );
end
- Use the JDK converter to implement the Iterable interface. (This method commences with the line
Arrays.asList(...).) The code shown below executes rules against an iterable list. Every collection element is inserted before any matched rules are fired:
StatelessKieSession ksession = kbase.newStatelessKnowledgeSession();
Applicant applicant = new Applicant( "Mr John Smith", 16 );
Application application = new Application();
assertTrue( application.isValid() );
ksession.execute( Arrays.asList( new Object[] { application, applicant } ) );
assertFalse( application.isValid() );
Note
The execute(Object object) and execute(Iterable objects) methods are actually "wrappers" around a further method called execute(Command command), which comes from the BatchExecutor interface. - Use the
CommandFactory to create instructions, so that the following is equivalent to execute( Iterable it ):
ksession.execute( CommandFactory.newInsertIterable( new Object[] { application, applicant } ) );
- Use the
BatchExecutor and CommandFactory when working with many different commands or result output identifiers:
List<Command> cmds = new ArrayList<Command>();
cmds.add( CommandFactory.newInsert( new Person( "Mr John Smith" ), "mrSmith" ) );
cmds.add( CommandFactory.newInsert( new Person( "Mr John Doe" ), "mrDoe" ) );
BatchExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
assertEquals( new Person( "Mr John Smith" ), results.getValue( "mrSmith" ) );
Note
CommandFactory supports many other commands that can be used in the BatchExecutor. Some of these are StartProcess, Query, and SetGlobal.
16.7.2. Stateful KIE Sessions
Like the StatelessKnowledgeSession, the StatefulKnowledgeSession supports the BatchExecutor interface. The only difference is that the FireAllRules command is not automatically called at the end.
Warning
Ensure that the dispose() method is called after running a stateful session. This prevents memory leaks, because knowledge bases obtain references to stateful knowledge sessions when they are created.
16.7.2.1. Common Use Cases for Stateful Sessions
- Monitoring
- For example, you can monitor a stock market and automate the buying process.
- Diagnostics
- Stateful sessions can be used to run fault-finding processes. They could also be used for medical diagnostic processes.
- Logistical
- For example, they could be applied to problems involving parcel tracking and delivery provisioning.
- Ensuring compliance
- For example, to validate the legality of market trades.
16.7.2.2. Stateful Session Monitoring Example
Procedure 16.3. Task
- Create a model of what you want to monitor. In this example involving fire alarms, the rooms in a house have been listed. Each has one sprinkler. A fire can start in any of the rooms:
public class Room {
    private String name;
    // getter and setter methods here
}

public class Sprinkler {
    private Room room;
    private boolean on;
    // getter and setter methods here
}

public class Fire {
    private Room room;
    // getter and setter methods here
}

public class Alarm {
}
- Create an instance of the
Fire class and insert it into the session. The rule below adds a binding to the Fire object's room field to constrain matches, so that only the sprinkler for that room is checked. When this rule fires and the consequence executes, the sprinkler activates:
rule "When there is a fire turn on the sprinkler"
when
    Fire( $room : room )
    $sprinkler : Sprinkler( room == $room, on == false )
then
    modify( $sprinkler ) { setOn( true ) };
    System.out.println( "Turn on the sprinkler for room " + $room.getName() );
end
Whereas the stateless session employed standard Java syntax to modify a field, the rule above uses the modify statement. (It acts much like a "with" statement.) A short sketch of driving these rules in a stateful session follows.
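The following is a minimal sketch, assuming the rules and the model above have been built into a KieContainer on the classpath, of creating a stateful session, populating it with a room and its sprinkler, and driving it by inserting a Fire fact; the setter names follow from the fields shown above.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
KieSession ksession = kContainer.newKieSession();

// set up a room and its sprinkler as facts
Room kitchen = new Room();
kitchen.setName( "kitchen" );
Sprinkler kitchenSprinkler = new Sprinkler();
kitchenSprinkler.setRoom( kitchen );
ksession.insert( kitchen );
ksession.insert( kitchenSprinkler );

// a fire starts in the kitchen; the rule turns the sprinkler on
Fire kitchenFire = new Fire();
kitchenFire.setRoom( kitchen );
ksession.insert( kitchenFire );
ksession.fireAllRules();

// always dispose of a stateful session when it is no longer needed
ksession.dispose();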
16.8. Runtime Manager
16.8.1. The RuntimeManager Interface
The RuntimeManager interface simplifies and empowers the use of the knowledge API in the context of processes. It provides configurable strategies that control the actual runtime execution and by default provides the following:
- Singleton:
RuntimeManager maintains a single KieSession regardless of the number of processes available. - Per Request:
RuntimeManager delivers a new KieSession for every request. - Per Process Instance:
RuntimeManager maintains a mapping between process instance and KieSession and always provides the same KieSession whenever working with a given process instance.
package org.kie.api.runtime.manager;
public interface RuntimeManager {
/**
* Returns <code>RuntimeEngine</code> instance that is fully initialized:
* KieSession is created or loaded depending on the strategy
* TaskService is initialized and attached to ksession (via listener)
* WorkItemHandlers are initialized and registered on ksession
* EventListeners (process, agenda, working memory) are initialized and added to ksession
* @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code>
* @return instance of the <code>RuntimeEngine</code>
*/
RuntimeEngine getRuntimeEngine(Context<?> context);
/**
* Unique identifier of the <code>RuntimeManager</code>
* @return
*/
String getIdentifier();
/**
* Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact.
* This method should always be used to dispose <code>RuntimeEngine</code> that is not needed
* anymore.
* ksession.dispose() shall never be used with RuntimeManager as it will break the internal
* mechanisms of the manager responsible for clear and efficient disposal.<br/>
* Dispose is not needed if <code>RuntimeEngine</code> was obtained within active JTA transaction,
* this means that when getRuntimeEngine method was invoked during active JTA transaction then dispose of
* the runtime engine will happen automatically on transaction completion.
* @param runtime
*/
void disposeRuntimeEngine(RuntimeEngine runtime);
/**
* Closes <code>RuntimeManager</code> and releases its resources. Shall always be called when
* runtime manager is not needed any more. Otherwise it will still be active and operational.
*/
void close();
}
RuntimeManager is responsible for managing and delivering instances of RuntimeEngine to the caller. In turn, RuntimeEngine encapsulates the two most important elements of the JBoss BPM Suite engine:
- KieSession
- TaskService
RuntimeManager ensures that, regardless of the strategy, it will provide the same capabilities when it comes to initialization and configuration of the RuntimeEngine. This means:
- KieSession will be loaded with the same factories (either in memory or JPA based).
- WorkItemHandlers will be registered on every KieSession (either loaded from the database or newly created).
- Event listeners (Process, Agenda, WorkingMemory) will be registered on every KieSession (either loaded from the database or newly created).
- TaskService will be configured with:
- JTA transaction manager
- The same entity manager factory as for the KieSession
- UserGroupCallback from the environment
RuntimeManager also manages engine disposal by providing dedicated methods to dispose of a RuntimeEngine when it is no longer required, releasing any resources it might have acquired.
16.8.2. The RuntimeEngine Interface
The RuntimeEngine interface provides the following methods to access the engine components:
public interface RuntimeEngine {
/**
* Returns <code>KieSession</code> configured for this <code>RuntimeEngine</code>
* @return
*/
KieSession getKieSession();
/**
* Returns <code>TaskService</code> configured for this <code>RuntimeEngine</code>
* @return
*/
TaskService getTaskService();
}
16.8.3. Strategies
Singleton: This instructs the RuntimeManager to maintain a single instance of RuntimeEngine and, in turn, single instances of KieSession and TaskService. Access to the RuntimeEngine is synchronized and therefore thread safe, although this comes with a performance penalty due to synchronization. This strategy is similar to what was available by default in JBoss Enterprise BRMS Platform version 5.x; it is considered the easiest strategy and is recommended to start with. It has the following characteristics:
- Small memory footprint, that is a single instance of runtime engine and task service.
- Simple and compact in design and usage.
- Good fit for low to medium load on process engine due to synchronized access.
- Due to the single
KieSession instance, all state objects (such as facts) are directly visible to all process instances and vice versa. - Not contextual, that is, when retrieving instances of
RuntimeEngine from the singleton RuntimeManager, the Context instance is not important and usually the EmptyContext.get() method is used, although a null argument is acceptable as well. - Keeps track of the ID of the
KieSession used between RuntimeManager restarts, to ensure it uses the same session. This ID is stored as a serialized file on disk in a temporary location that depends on the environment.
Per Request: This instructs the RuntimeManager to provide a new instance of RuntimeEngine for every request. The RuntimeManager considers one or more invocations within a single transaction to be one request, so it must return the same instance of RuntimeEngine within a single transaction to ensure correctness of state; otherwise an operation in one call would not be visible in the other. This is essentially a stateless strategy that provides only request-scoped state. Once the request is completed, the RuntimeEngine is permanently destroyed. The KieSession information is then removed from the database if you used persistence. It has the following characteristics:
- Completely isolated process engine and task service operations for every request.
- Completely stateless, storing facts makes sense only for the duration of the request.
- A good fit for high load, stateless processes (no facts or timers involved that shall be preserved between requests).
- KieSession is only available during the lifetime of the request and is destroyed at its end. - Not contextual, that is, when retrieving instances of
RuntimeEngine from a per request RuntimeManager, the Context instance is not important and usually the EmptyContext.get() method is used, although a null argument is acceptable as well.
Per Process Instance: This instructs the RuntimeManager to maintain a strict relationship between KieSession and ProcessInstance. That means that the KieSession will be available as long as the ProcessInstance that it belongs to is active. This strategy provides the most flexible approach to using advanced capabilities of the engine, such as rule evaluation in isolation (for a given process instance only). It provides maximum performance and reduces potential bottlenecks introduced by synchronization. Additionally, it reduces the number of KieSessions to the actual number of process instances, rather than the number of requests (in contrast to the per request strategy). It has the following characteristics (see the sketch after this list):
- Most advanced strategy to provide isolation to given process instance only.
- Maintains strict relationship between
KieSession and ProcessInstance to ensure it will always deliver the same KieSession for a given ProcessInstance. - Merges the life cycle of
KieSession with ProcessInstance, so that both are disposed of on process instance completion (complete or abort). - Allows to maintain data (such as facts and timers) in the scope of the process instance, that is, only the process instance will have access to that data.
- Introduces a bit of overhead due to the need to look up and load the
KieSession for the process instance. - Validates usage of the
KieSession, so it cannot be used for other process instances. In such cases, an exception is thrown. - Is contextual. It accepts
EmptyContext, ProcessInstanceIdContext, and CorrelationKeyContext context instances.
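The following is a minimal sketch, assuming a RuntimeEnvironment similar to the one built in the following sections, of how a per process instance RuntimeManager might be created and used; the environment variable and the process id org.jbpm.sample are placeholders.
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.internal.runtime.manager.context.ProcessInstanceIdContext;

// create the manager with the per process instance strategy
RuntimeManager manager =
    RuntimeManagerFactory.Factory.get().newPerProcessInstanceRuntimeManager(environment);

// no process instance exists yet, so start with an empty ProcessInstanceIdContext
RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = runtime.getKieSession();
ProcessInstance pi = ksession.startProcess("org.jbpm.sample");
manager.disposeRuntimeEngine(runtime);

// later requests for the same process instance use its id as the context
RuntimeEngine runtime2 =
    manager.getRuntimeEngine(ProcessInstanceIdContext.get(pi.getId()));
// ... work with runtime2.getKieSession() or runtime2.getTaskService() ...
manager.disposeRuntimeEngine(runtime2);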
16.8.4. Usage Scenario for RuntimeManager Interface
A typical usage scenario for the RuntimeManager is:
- At application startup
- Build the
RuntimeManager and keep it for the entire lifetime of the application. It is thread safe and you can access it concurrently.
- At request
- Get
RuntimeEngine from the RuntimeManager using the proper context instance dedicated to the strategy of the RuntimeManager. - Get
KieSession or TaskService from the RuntimeEngine. - Perform operations on the
KieSession or TaskService, such as startProcess and completeTask. - Once done with processing, dispose of the
RuntimeEngine using the RuntimeManager.disposeRuntimeEngine method.
- At application shutdown
- Close
RuntimeManager.
Note
If the RuntimeEngine is obtained from the RuntimeManager within an active JTA transaction, then there is no need to dispose of the RuntimeEngine at the end, as the RuntimeManager automatically disposes of the RuntimeEngine on transaction completion (regardless of whether the completion status is commit or rollback).
16.8.5. Building RuntimeManager
The following example shows how to build a RuntimeManager and get a RuntimeEngine (that encapsulates KieSession and TaskService) from it:
// first configure environment that will be used by RuntimeManager
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultInMemoryBuilder()
.addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
.get();
// next create RuntimeManager - in this case singleton strategy is chosen
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);
// then get RuntimeEngine out of manager - using empty context as singleton does not keep track
// of runtime engine as there is only one
RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get());
// get KieSession from the runtime engine - already initialized with all handlers, listeners, etc. that were configured
// on the environment
KieSession ksession = runtimeEngine.getKieSession();
// add invocations to the process engine here,
// e.g. ksession.startProcess(processId);
// and last dispose the runtime engine
manager.disposeRuntimeEngine(runtimeEngine);
16.8.6. RuntimeEnvironment Configuration
The configuration of a RuntimeManager is encapsulated in the RuntimeEnvironment interface:
public interface RuntimeEnvironment {
/**
* Returns <code>KieBase</code> that shall be used by the manager
* @return
*/
KieBase getKieBase();
/**
* KieSession environment that shall be used to create instances of <code>KieSession</code>
* @return
*/
Environment getEnvironment();
/**
* KieSession configuration that shall be used to create instances of <code>KieSession</code>
* @return
*/
KieSessionConfiguration getConfiguration();
/**
* Indicates if persistence shall be used for the KieSession instances
* @return
*/
boolean usePersistence();
/**
* Delivers concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners
* that shall be registered on instances of <code>KieSession</code>
* @return
*/
RegisterableItemsFactory getRegisterableItemsFactory();
/**
* Delivers concrete implementation of <code>UserGroupCallback</code> that shall be registered on instances
* of <code>TaskService</code> for managing users and groups.
* @return
*/
UserGroupCallback getUserGroupCallback();
/**
* Delivers custom class loader that shall be used by the process engine and task service instances
* @return
*/
ClassLoader getClassLoader();
/**
* Closes the environment allowing to close all depending components such as ksession factories, etc
*/
void close();
}
16.8.7. Building RuntimeEnvironment
The RuntimeEnvironment interface provides access to the data kept as part of the environment. You can use the builder style class that provides a fluent API to configure a RuntimeEnvironment with predefined settings:
package org.kie.api.runtime.manager;
public interface RuntimeEnvironmentBuilder {
public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled);
public RuntimeEnvironmentBuilder entityManagerFactory(Object emf);
public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type);
public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value);
public RuntimeEnvironmentBuilder addConfiguration(String name, String value);
public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase);
public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback);
public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory);
public RuntimeEnvironment get();
public RuntimeEnvironmentBuilder classLoader(ClassLoader cl);
public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler);
}
Obtain instances of the RuntimeEnvironmentBuilder via the RuntimeEnvironmentBuilderFactory, which provides preconfigured sets of builders to simplify and help you build the environment for the RuntimeManager.
public interface RuntimeEnvironmentBuilderFactory {
/**
* Provides completely empty <code>RuntimeEnvironmentBuilder</code> instance that allows to manually
* set all required components instead of relying on any defaults.
* @return new instance of <code>RuntimeEnvironmentBuilder</code>
*/
public RuntimeEnvironmentBuilder newEmptyBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* but it does not have persistence for process engine configured so it will only store process instances in memory
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
* @param groupId group id of kjar
* @param artifactId artifact id of kjar
* @param version version number of kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
* @param groupId group id of kjar
* @param artifactId artifact id of kjar
* @param version version number of kjar
* @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
* @param ksessionName name of the ksession define in kmodule.xml stored in kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version, String kbaseName, String ksessionName);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
* @param releaseId <code>ReleaseId</code> that described the kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
* @param releaseId <code>ReleaseId</code> that described the kjar
* @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
* @param ksessionName name of the ksession define in kmodule.xml stored in kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId, String kbaseName, String ksessionName);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* It relies on KieClasspathContainer that requires to have kmodule.xml present in META-INF folder which
* defines the kjar itself.
* Expects to use default kbase and ksession from kmodule.
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* DefaultRuntimeEnvironment
* It relies on KieClasspathContainer that requires to have kmodule.xml present in META-INF folder which
* defines the kjar itself.
* @param kbaseName name of the kbase defined in kmodule.xml
* @param ksessionName name of the ksession define in kmodule.xml
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*@see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(String kbaseName, String ksessionName);
}
Besides the KieSession, the Runtime Manager also provides access to TaskService. The default builder comes with a predefined set of elements that consists of (a short usage sketch follows this list):
- Persistence unit name: It is set to
org.jbpm.persistence.jpa (for both the process engine and the task service). - Human Task handler: This is automatically registered on the
KieSession. - JPA based history log event listener: This is automatically registered on the
KieSession. - Event listener to trigger rule task evaluation (fireAllRules): This is automatically registered on the
KieSession.
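The following is a minimal sketch, assuming a kjar with a kmodule.xml in its META-INF folder is on the classpath, of using one of the preconfigured builders; the choice of the per request strategy is only an example.
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;

// build the environment from the kmodule.xml found in META-INF on the classpath,
// using the default kbase and ksession defined there
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
        .newClasspathKmoduleDefaultBuilder()
        .get();

// any of the strategies can be used with this environment; per request is shown here
RuntimeManager manager =
    RuntimeManagerFactory.Factory.get().newPerRequestRuntimeManager(environment);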
16.8.8. Registering Handlers and Listeners through RegisterableItemsFactory
The RegisterableItemsFactory provides a dedicated mechanism to create your own handlers or listeners:
/**
* Returns new instances of <code>WorkItemHandler</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case handler need to make use of it internally
* @return map of handlers to be registered - in case of no handlers empty map shall be returned.
*/
Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime);
/**
* Returns new instances of <code>ProcessEventListener</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
* @return list of listeners to be registered - in case of no listeners empty list shall be returned.
*/
List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime);
/**
* Returns new instances of <code>AgendaEventListener</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
* @return list of listeners to be registered - in case of no listeners empty list shall be returned.
*/
List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime);
/**
* Returns new instances of <code>WorkingMemoryEventListener</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
* @return list of listeners to be registered - in case of no listeners empty list shall be returned.
*/
List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime);
RegisterableItemsFactory provides a mechanism to define custom handlers and listeners. The following is a list of the available implementations, ordered in the hierarchy of inheritance (a sketch of a custom factory follows the list):
- org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory: This is the simplest possible implementation that comes empty and is based on reflection to produce instances of handlers and listeners based on given class names.
- org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory: This is an extension of the simple implementation (org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory) that introduces the defaults described above and still provides the same capabilities as the org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory implementation.
- org.jbpm.runtime.manager.impl.KModuleRegisterableItemsFactory: This is an extension of the default implementation (org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory) that provides specific capabilities for kmodule and still provides the same capabilities as the simple implementation (org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory).
- org.jbpm.runtime.manager.impl.cdi.InjectableRegisterableItemsFactory: This is an extension of the default implementation (org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory) that is tailored for CDI environments and provides a CDI style approach to finding handlers and listeners through producers.
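The following is a minimal sketch of a custom factory that extends DefaultRegisterableItemsFactory and adds one extra work item handler on top of the defaults; the handler name "Log" and the use of the demo SystemOutWorkItemHandler are illustrative only.
import java.util.Map;
import org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler;
import org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.process.WorkItemHandler;

public class CustomRegisterableItemsFactory extends DefaultRegisterableItemsFactory {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime) {
        // keep the default handlers and add a custom one on top
        Map<String, WorkItemHandler> handlers = super.getWorkItemHandlers(runtime);
        handlers.put("Log", new SystemOutWorkItemHandler());
        return handlers;
    }
}
The factory can then be supplied to the environment through the builder's registerableItemsFactory(...) method before the RuntimeManager is created.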
16.8.9. Registering Handlers through Configuration Files
You can register (on the KieSession) work item handlers by defining them as part of a CustomWorkItemHandlers.conf file and updating the class path. To use this approach, do the following:
- Create a file called
drools.session.conf inside META-INF of the root of the class path (WEB-INF/classes/META-INF for web applications). - Add the following line to the
drools.session.conf file:
drools.workItemHandlers = CustomWorkItemHandlers.conf
- Create a file called
CustomWorkItemHandlers.conf inside META-INF of the root of the class path (WEB-INF/classes/META-INF for web applications). - Define custom work item handlers in MVEL format inside the
CustomWorkItemHandlers.conf file:
[
  "Log": new org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler(),
  "WebService": new org.jbpm.process.workitem.webservice.WebServiceWorkItemHandler(ksession),
  "Rest": new org.jbpm.process.workitem.rest.RESTWorkItemHandler(),
  "Service Task" : new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession)
]
Work item handlers registered this way are available on every KieSession created by the application, regardless of whether it uses the RuntimeManager or not.
16.8.10. Registering Handlers and Listeners in CDI Environment
When using the RuntimeManager in a CDI environment, you can use the dedicated interfaces to provide custom WorkItemHandlers and EventListeners to the RuntimeEngine.
public interface WorkItemHandlerProducer {
/**
* Returns map of (key = work item name, value work item handler instance) of work items
* to be registered on KieSession
* Parameters that might be given are as follows:
* ksession
* taskService
* runtimeManager
* @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
* and provide valid instances for given owner
* @param params - owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
* @return map of work item handler instances (recommendation is to always return new instances when this method is invoked)
*/
Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}
In the case of event listeners, the producer is qualified with one of the following annotations to indicate which type of listener it provides:
- @Process for
ProcessEventListener - @Agenda for
AgendaEventListener - @WorkingMemory for
WorkingMemoryEventListener
public interface EventListenerProducer<T> {
/**
* Returns list of instances for given (T) type of listeners
* <br/>
* Parameters that might be given are as follows:
* ksession
* taskService
* runtimeManager
* @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
* and provide valid instances for given owner
* @param params - owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
* @return list of listener instances (recommendation is to always return new instances when this method is invoked)
*/
List<T> getEventListeners(String identifier, Map<String, Object> params);
}
Package these producer implementations as a bean archive by including beans.xml inside the META-INF folder and update the application classpath (for example, WEB-INF/lib for a web application). This enables the CDI based RuntimeManager to discover them and register them on every KieSession that is created or loaded from the data store.
The engine components (KieSession, TaskService, and RuntimeManager) are provided to the producers to allow handlers or listeners to be more stateful and to do more advanced things with the engine. You can also apply filtering based on the identifier (that is given as an argument to the methods) to decide whether the given RuntimeManager receives the handlers or listeners or not.
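The following is a minimal sketch of a WorkItemHandlerProducer implementation; the package of the producer interface (org.kie.internal.runtime.manager here) is an assumption that may differ between versions, and the handler name "Log" with the demo SystemOutWorkItemHandler is illustrative only.
import java.util.HashMap;
import java.util.Map;
// the producer interface package is an assumption; adjust it to the one shipped with your version
import org.kie.internal.runtime.manager.WorkItemHandlerProducer;
import org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler;
import org.kie.api.runtime.process.WorkItemHandler;

public class LogHandlerProducer implements WorkItemHandlerProducer {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params) {
        Map<String, WorkItemHandler> handlers = new HashMap<String, WorkItemHandler>();
        // optionally filter on the identifier to serve only selected RuntimeManagers
        handlers.put("Log", new SystemOutWorkItemHandler());
        return handlers;
    }
}
Packaged with a beans.xml file as described above, the producer is discovered by the CDI based RuntimeManager.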
Note
Always use the RuntimeManager and retrieve the RuntimeEngine (and then KieSession or TaskService) from it, as that ensures a proper state.
16.8.11. Control Parameters to Alter Default Engine Behavior
Table 16.18. Control Parameters to Alter Default Engine Behavior
| Name | Possible Values | Default Value | Description |
|---|---|---|---|
jbpm.ut.jndi.lookup | String | Alternative JNDI name to be used when there is no access to the default one (java:comp/UserTransaction). | |
jbpm.enable.multi.con | true|false | false | Enables multiple incoming/outgoing sequence flows support for activities. |
jbpm.business.calendar.properties | String | /jbpm.business.calendar.properties | Allows to provide alternative classpath location of business calendar configuration file. |
jbpm.overdue.timer.delay | Long | 2000 | Specifies delay for overdue timers to allow proper initialization, in milliseconds. |
jbpm.process.name.comparator | String | Allows to provide alternative comparator class to empower start process by name feature. If not set, NumberVersionComparator is used. | |
jbpm.loop.level.disabled | true|false | true | Allows to enable or disable loop iteration tracking, to allow advanced loop support when using XOR gateways. |
org.kie.mail.session | String | mail/jbpmMailSession | Allows to provide alternative JNDI name for mail session used by Task Deadlines. |
jbpm.usergroup.callback.properties | String | /jbpm.usergroup.callback.properties | Allows to provide alternative classpath location for user group callback implementation (LDAP, DB). |
jbpm.user.group.mapping | String | ${jboss.server.config.dir}/roles.properties | Allows to provide alternative classpath location of user info configuration (used by LDAPUserInfoImpl). |
jbpm.user.info.properties | String | /jbpm.user.info.properties | Allows to provide alternative classpath location for user group callback implementation (LDAP, DB). |
org.jbpm.ht.user.separator | String | , | Allows to provide alternative separator of actors and groups for user tasks, default is comma (,). |
org.quartz.properties | String | Allows to provide location of the quartz config file to activate quartz based timer service. | |
jbpm.data.dir | String | ${jboss.server.data.dir} if available, otherwise ${java.io.tmpdir} | Allows to provide location where data files produced by JBoss BPM Suite must be stored. |
org.kie.executor.pool.size | Integer | 1 | Allows to provide thread pool size for JBoss BPM Suite executor. |
org.kie.executor.retry.count | Integer | 3 | Allows to provide number of retries attempted in case of error by JBoss BPM Suite executor. |
org.kie.executor.interval | Integer | 3 | Allows to provide frequency used to check for pending jobs by JBoss BPM Suite executor, in seconds. |
org.kie.executor.disabled | true|false | true | Enables or disables the JBoss BPM Suite executor. |
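These parameters are typically supplied as JVM system properties. The following is a short sketch of setting a couple of them programmatically before the engine is bootstrapped; the chosen properties and values are examples only.
// equivalent to passing -Djbpm.overdue.timer.delay=5000 -Dorg.kie.executor.pool.size=4 to the JVM
System.setProperty("jbpm.overdue.timer.delay", "5000");
System.setProperty("org.kie.executor.pool.size", "4");
// build the RuntimeManager (or start the application) only after the properties are set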
16.8.12. Storing Process Variables Without Serialization
JBoss BPM Suite provides a pluggable Variable Persistence Strategy: it uses serialization for objects that implement the java.io.Serializable interface, but uses the Java Persistence Architecture (JPA) based JPAPlaceholderResolverStrategy class to work on objects that are entities (not implementing the java.io.Serializable interface).
Configuring Variable Persistence Strategy
// create entity manager factory
EntityManagerFactory emf = Persistence.createEntityManagerFactory("com.redhat.sample");
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get().newDefaultBuilder()
.entityManagerFactory(emf)
.addEnvironmentEntry(EnvironmentName.OBJECT_MARSHALLING_STRATEGIES,
new ObjectMarshallingStrategy[] {
// set the entity manager factory to JPA strategy so it knows how to store and read entities
new JPAPlaceholderResolverStrategy(emf),
// set the serialization based strategy as last one to deal with non entity classes
new SerializablePlaceholderResolverStrategy(ClassObjectMarshallingStrategyAcceptor.DEFAULT)})
.addAsset(ResourceFactory.newClassPathResource("example.bpmn"), ResourceType.BPMN2)
.get();
// now create the runtime manager and start using entities as part of your process
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);
Note
Remember to add the entity classes to the persistence.xml configuration file that will be used by the JPA strategy.
How Does the JPA Strategy Work?
The JPA strategy relies on the @Id annotation (javax.persistence.Id). This is the unique id that is used to retrieve the variable. On the other hand, a serialization based strategy simply accepts all variables by default.
When a variable is stored, the strategy's marshal() method is called, while the unmarshal() method retrieves the variable from the storage.
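The following is a minimal sketch of an entity class that the JPA strategy could store as a process variable; the class and field names are illustrative only, and the class must also be listed in the persistence.xml used by the strategy.
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;      // unique id used by the JPA strategy to retrieve the variable

    private String name;

    // getter and setter methods here
}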
Creating Your Own Persistence Strategy
A persistence strategy must be able to marshal() and unmarshal() objects. These methods are part of the org.kie.api.marshalling.ObjectMarshallingStrategy interface, and you can implement this interface to create a custom persistence strategy.
public interface ObjectMarshallingStrategy {
public boolean accept(Object object);
public void write(ObjectOutputStream os,
Object object) throws IOException;
public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException;
public byte[] marshal(Context context,
ObjectOutputStream os,
Object object ) throws IOException;
public Object unmarshal(Context context,
ObjectInputStream is,
byte[] object,
ClassLoader classloader ) throws IOException, ClassNotFoundException;
public Context createContext();
}
The methods read() and write() are for backwards compatibility. Use the methods accept(), marshal(), and unmarshal() to create your strategy. A sketch of a custom strategy follows.
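The following is a minimal sketch of a custom strategy that is not tied to any particular storage technology: it keeps accepted variables in an in-memory map and persists only a generated key. The marker interface ExternallyStored and the class name are hypothetical; a real implementation would write to a database or document store instead of a static map.
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.kie.api.marshalling.ObjectMarshallingStrategy;

public class InMemoryKeyMarshallingStrategy implements ObjectMarshallingStrategy {

    private static final Map<String, Object> STORE = new ConcurrentHashMap<String, Object>();

    public boolean accept(Object object) {
        // decide here which variables this strategy owns; ExternallyStored is a hypothetical marker
        return object instanceof ExternallyStored;
    }

    public byte[] marshal(ObjectMarshallingStrategy.Context context,
                          ObjectOutputStream os, Object object) throws IOException {
        // store the object externally and persist only the key
        String key = UUID.randomUUID().toString();
        STORE.put(key, object);
        return key.getBytes(StandardCharsets.UTF_8);
    }

    public Object unmarshal(ObjectMarshallingStrategy.Context context,
                            ObjectInputStream is, byte[] object,
                            ClassLoader classloader) throws IOException, ClassNotFoundException {
        // resolve the key back to the stored object
        return STORE.get(new String(object, StandardCharsets.UTF_8));
    }

    public ObjectMarshallingStrategy.Context createContext() {
        return null; // no shared marshalling state needed in this sketch
    }

    // read() and write() are legacy methods and are not used by this strategy
    public void write(ObjectOutputStream os, Object object) throws IOException {
        throw new UnsupportedOperationException("legacy method not supported");
    }

    public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException {
        throw new UnsupportedOperationException("legacy method not supported");
    }

    /** Hypothetical marker interface for variables handled by this strategy. */
    public interface ExternallyStored extends java.io.Serializable { }
}
The strategy would be registered in the same way as in the configuration example above, by adding it to the ObjectMarshallingStrategy array in the OBJECT_MARSHALLING_STRATEGIES environment entry before the serialization based strategy.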
Chapter 17. Remote API
17.1. REST API
- Knowledge Store (Artifact Repository) REST API calls are calls to the static data (definitions) and are asynchronous, that is, they continue running after the call as a job. These calls return a job ID, which can be used after the REST API call was performed to request the job status and verify whether the job finished successfully. Parameters of these calls are provided in the form of JSON entities. The following two APIs are only available in Red Hat JBoss BPM Suite.
- Deployment REST API calls are asynchronous or synchronous, depending on the operation performed. These calls perform actions on the deployments or retrieve information about one or more deployments.
- Runtime REST API calls are calls to the Execution Server and to the Process Execution Engine, Task Execution Engine, and Business Rule Engine. They are synchronous and return the requested data as JAXB objects.
http://SERVER_ADDRESS:PORT/business-central/rest/REQUEST_BODY
Note
17.1.1. Knowledge Store REST API
The POST and DELETE operations return details of the request as well as a job ID that can be used to request the job status and verify whether the job finished successfully. The GET operations return information about repositories, projects, and organizational units.
17.1.1.1. Job calls
- ACCEPTED: the job was accepted and is being processed.
- BAD_REQUEST: the request was not accepted as it contained incorrect content.
- RESOURCE_NOT_EXIST: the requested resource (path) does not exist.
- DUPLICATE_RESOURCE: the resource already exists.
- SERVER_ERROR: an error on the server occurred.
- SUCCESS: the job finished successfully.
- FAIL: the job failed.
- APPROVED: the job was approved.
- DENIED: the job was denied.
- GONE: the job ID could not be found.A job can be GONE in the following cases:
- The job was explicitly removed.
- The job finished and has been deleted from the status cache (the job is removed from status cache after the cache has reached its maximum capacity).
- The job never existed.
The following job calls are provided (a small Java client sketch follows this list):
- [GET] /jobs/{jobID}
- returns the job status - [GET]
Example 17.1. Response of the job call on a repository clone request
"{"status":"SUCCESS","jodId":"1377770574783-27","result":"Alias: testInstallAndDeployProject, Scheme: git, Uri: git://testInstallAndDeployProject","lastModified":1377770578194,"detailedResult":null}" - [DELETE] /jobs/{jobID}
- removes the job - [DELETE]
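The following is a minimal sketch of polling the job status from Java; the server address, the credentials, and the job ID are placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class JobStatusClient {
    public static void main(String[] args) throws Exception {
        String jobId = "1377770574783-27";   // example job ID, as in the response above
        URL url = new URL("http://localhost:8080/business-central/rest/jobs/" + jobId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        // placeholder credentials for HTTP basic authentication
        String credentials = Base64.getEncoder().encodeToString("restUser:restPassword".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);    // the job status JSON, e.g. {"status":"SUCCESS",...}
            }
        }
    }
}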
17.1.1.2. Repository calls
The following repositories calls are provided:
- [GET] /repositories
- This returns a list of the repositories in the Knowledge Store as a JSON entity - [GET]
Example 17.2. Response of the repositories call
[{"name":"bpms-assets","description":"generic assets","userName":null,"password":null,"requestType":null,"gitURL":"git://bpms-assets"},{"name":"loanProject","description":"Loan processes and rules","userName":null,"password":null,"requestType":null,"gitURL":"git://loansProject"}] - [GET] /repositories/{repositoryName}
- This returns information on a specific repository - [GET]
- [DELETE] /repositories/{repositoryName}
- This deletes the repository - [DELETE]
- [POST] /repositories/
- This creates or clones the repository defined by the JSON entity - [POST]
Example 17.3. JSON entity with repository details of a repository to be cloned
{"name":"myClonedRepository", "description":"", "userName":"", "password":"", "requestType":"clone", "gitURL":"git://localhost/example-repository"} - [GET] /repositories/{repositoryName}/projects/
- This returns a list of the projects in a specific repository as a JSON entity - [GET]
Example 17.4. JSON entity with details of existing projects
[ { "name" : "my-project-name", "description" : "Project to illustrate REST output", "groupId" : "com.acme", "version" : "1.0" }, { "name" : "yet-another-project-name", "description" : "Yet Another Project to illustrate REST output", "groupId" : "com.acme", "version" : "2.2.1" } ] - [POST] /repositories/{repositoryName}/projects/
- This creates a project in the repository - [POST]
Example 17.5. Request body that defines the project to be created
"{"name":"myProject","description": "my project"}" - [DELETE] /repositories/{repositoryName}/projects/
- This deletes the project in the repository - [DELETE]
Example 17.6. Request body that defines the project to be deleted
"{"name":"myProject","description": "my project"}"
17.1.1.3. Organizational unit calls
The following organizational unit calls are provided:
- [GET] /organizationalunits/
- This returns a list of all the organizational units - [GET].
Example 17.7. Organizational unit list in JSON
[ { "name" : "EmployeeWage", "description" : null, "owner" : "Employee", "defaultGroupId" : "org.bpms", "repositories" : [ "EmployeeRepo", "OtherRepo" ] }, { "name" : "OrgUnitName", "description" : null, "owner" : "OrgUnitOwner", "defaultGroupId" : "org.group.id", "repositories" : [ "repository-name-1", "repository-name-2" ] } ] - [GET] /organizationalunits/{organizationalUnitName}
- This returns a JSON entity with info about a specific organizational unit - [GET].
- [POST] /organizationalunits/
- This creates an organizational unit in the Knowledge Store - [POST]. The organizational unit is defined as a JSON entity. This consumes an
OrganizationalUnit instance and returns a CreateOrganizationalUnitRequest instance.
Example 17.8. Organizational unit in JSON
{ "name":"testgroup", "description":"", "owner":"tester", "repositories":["testGroupRepository"] } - [POST] /organizationalunits/{organizationalUnitName}
- This updates the details of an existing organizational unit - [POST]. Both the
name and owner fields in the consumed UpdateOrganizationalUnit instance can be left empty. Neither the description field nor the repository association can be updated via this operation.
Example 17.9. Update organizational unit input in JSON
{ "owner" : "NewOwner", "defaultGroupId" : "org.new.default.group.id" } - [DELETE] /organizationalunits/{organizationalUnitName}
- This deletes an organizational unit - [DELETE].
- [POST] /organizationalunits/{organizationalUnitName}/repositories/{repositoryName}
- This adds the repository to the organizational unit - [POST].
- [DELETE] /organizationalunits/{organizationalUnitName}/repositories/{repositoryName}
- This removes a repository from the organizational unit - [DELETE].
17.1.1.4. Maven calls
The following maven calls are provided:
- [POST] /repositories/{repositoryName}/projects/{projectName}/maven/compile/
- This compiles the project (equivalent to
mvn compile) - [POST]. It consumes a BuildConfig instance, which must be supplied but is not needed for the operation and may be left blank. It also returns a CompileProjectRequest instance. - [POST] /repositories/{repositoryName}/projects/{projectName}/maven/install/
- This installs the project (equivalent to
mvn install) - [POST]. It consumes a BuildConfig instance, which must be supplied but is not needed for the operation and may be left blank. It also returns an InstallProjectRequest instance. - [POST] /repositories/{repositoryName}/projects/{projectName}/maven/test/
- This compiles and runs the tests - [POST]. It consumes a
BuildConfig instance and returns a TestProjectRequest instance. - [POST] /repositories/{repositoryName}/projects/{projectName}/maven/deploy/
- This deploys the project (equivalent to mvn deploy) - [POST]. It consumes a
BuildConfig instance, which must be supplied but is not needed for the operation and may be left blank. It also returns a DeployProjectRequest instance.
17.1.2. Deployment REST API
In addition, activate and deactivate operations are available. When a deployment is deployed, it is "activated" by default: that means that new process instances can be started using the process definitions and other information in the deployment. However, at a later point in time, users may want to make sure that a deployment is no longer used without necessarily aborting or stopping the existing (running) process instances. In order to do this, the deployment can first be deactivated before it is removed at a later date.
Note
The deployment ID must match the following regular expression:
[\w\.-]+(:[\w\.-]+){2,2}(:[\w\.-]*){0,2}
That is, the following characters are allowed:
- [A-Z]
- [a-z]
- [0-9]
- _
- .
- -
The deployment ID consists of the following elements, separated by colons:
- Group Id
- Artifact Id
- Version
- kbase Id (optional)
- ksession Id (optional)
17.1.2.1. Asynchronous calls
The following calls are asynchronous:
/deployment/{deploymentId}/deploy
/deployment/{deploymentId}/undeploy
This means that once such a call returns:
- The posted request would have been successfully accepted but the actual operation (deploying or undeploying the deployment unit) may have failed.
- The deployment information retrieved on calling the GET operations may even have changed (including the status of the deployment unit).
17.1.2.2. Deployment calls
- [GET] /deployment/
- returns a list of all available deployed instances [GET]
- [GET] /deployment/{deploymentId}
- Returns a JaxbDeploymentUnit instance containing the information (including the configuration) of the deployment unit [GET]
- [POST] /deployment/{deploymentId}/deploy
- Deploys the deployment unit which is referenced by the deploymentId and returns a JaxbDeploymentJobResult instance with the status of the request [POST]
- [POST] /deployment/{deploymentId}/undeploy
- Undeploys the deployment unit referenced by the deploymentId and returns a JaxbDeploymentJobResult instance with the status of the request [POST]
Note
The deploy or undeploy request may not be processed if:
- An identical job has already been submitted to the queue and has not yet completed.
- The amount of (deploy/undeploy) jobs submitted but not yet processed exceeds the job cache size.
- [POST] /deployment/{deploymentId}/activate
- Activates a deployment [POST]
- [POST] /deployment/{deploymentId}/deactivate
- Deactivates a deployment [POST]
Note
Note the following about the activate and deactivate operations:
- The
deactivate operation ensures that no new process instances can be started with the existing deployment. - If users decide that a deactivated deployment should be reactivated (instead of deleted), they can then use the
activate operation to reactivate the deployment. A deployment is always "activated" by default when it is initially deployed.
17.1.3. Runtime REST API
Query parameters are passed by appending the ? symbol to the URL and the parameter with the parameter value; for example, http://localhost:8080/business-central/rest/task/query?workItemId=393 returns a TaskSummary list of all tasks based on the work item with ID 393. Note that parameters and their values are case-sensitive.
Map parameters are passed by prefixing the parameter name with the map_ keyword; for example,
map_age=5000
{ "age" => Long.parseLong("5000") }Example 17.10. A GET call that returns all tasks to a locally running application using curl
curl -v -H 'Accept: application/json' -u eko 'localhost:8080/kie/rest/tasks/'
You can also obtain a RuntimeEngine by calling newRuntimeEngine() on the RemoteRestSessionFactory. The RuntimeEngine can then be used to create a KieSession.
Example 17.11. A GET call that returns a task details to a locally running application in Java with the direct tasks/TASKID request
package rest;

import java.io.InputStream;
import java.net.URL;
import javax.xml.bind.JAXBContext;
import javax.xml.transform.stream.StreamSource;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.DefaultHttpClient;
import org.jboss.resteasy.client.ClientExecutor;
import org.jboss.resteasy.client.ClientRequest;
import org.jboss.resteasy.client.ClientRequestFactory;
import org.jboss.resteasy.client.ClientResponse;
import org.jboss.resteasy.client.core.executors.ApacheHttpClient4Executor;
import org.jboss.resteasy.spi.ResteasyProviderFactory;
import org.kie.api.task.model.Task;
import org.kie.services.client.serialization.jaxb.impl.task.JaxbTaskResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public Task getTaskInstanceInfo(long taskId) throws Exception {
URL address = new URL(url + "/task/" + taskId);
ClientRequest restRequest = createRequest(address);
ClientResponse<InputStream> taskResponse = restRequest.get(InputStream.class);
JAXBContext jaxbTaskContext = JAXBContext.newInstance(JaxbTaskResponse.class);
StreamSource source = new StreamSource(taskResponse.getEntity());
return jaxbTaskContext.createUnmarshaller().unmarshal(source, JaxbTaskResponse.class).getValue();
}
private ClientRequest createRequest(URL address) {
return getClientRequestFactory().createRequest(address.toExternalForm());
}
private ClientRequestFactory getClientRequestFactory() {
DefaultHttpClient httpClient = new DefaultHttpClient();
httpClient.getCredentialsProvider().setCredentials(new AuthScope(AuthScope.ANY_HOST,
AuthScope.ANY_PORT, AuthScope.ANY_REALM), new UsernamePasswordCredentials(userId, password));
ClientExecutor clientExecutor = new ApacheHttpClient4Executor(httpClient);
return new ClientRequestFactory(clientExecutor, ResteasyProviderFactory.getInstance());
}
To perform further operations on a task, consider using the execute call (refer to Section 17.1.5, “Execute Operations”).
17.1.3.1. Usage Information
17.1.3.1.1. Pagination
The following parameters control pagination:
- page or p: the number of the page to be returned (by default set to 1, that is, page number 1 is returned)
- pageSize or s: the number of items per page (default value 10)
Pagination is supported by the following calls:
/task/query
/history/instances
/history/instance/{id: [0-9]+}
/history/instance/{id: [0-9]+}/child
/history/instance/{id: [0-9]+}/node
/history/instance/{id: [0-9]+}/node/{id: [a-zA-Z0-9-:\\.]+}
/history/instance/{id: [0-9]+}/variable/
/history/instance/{id: [0-9]+}/variable/{id: [a-zA-Z0-9-:\\.]+}
/history/process/{id: [a-zA-Z0-9-:\\.]+}
Example 17.12. REST request body with the pagination parameter
/history/instances?page=3&pageSize=20 /history/instances?p=3&s=20
17.1.3.1.2. Object data type parameters
Number values in query parameters can be given an explicit data type using the following suffixes:
- \d+i: Integer
- \d+l: Long
Example 17.13. REST request body with the Integer mySignal parameter
/rest/runtime/business-central/process/org.jbpm.test/start?map_var1=1234i
To pass parameters of other data types, use the startProcess command in the execute call (refer to Section 17.1.5, “Execute Operations”).
17.1.3.2. Runtime calls
Runtime calls can also be executed as commands through the execute call (/runtime/{deploymentId}/execute/{CommandObject}; refer to Section 17.1.5, “Execute Operations”).
17.1.3.2.1. Process calls
The /runtime/{deploymentId}/process/ calls are sent to the Process Execution Engine.
The following process calls are provided:
- /runtime/{deploymentId}/process/{processDefId}/start
- creates and starts a Process instance of the provided Process definition [POST]
- /runtime/{deploymentId}/process/{processDefId}/startform
- Checks to see if the process defined by the
processDefId exists, and if it does, returns a URL to show the form as a JaxbProcessInstanceFormResponse on a remote application [POST].
- returns the details of the given Process instance [GET]
- /runtime/{deploymentId}/process/instance/{procInstanceID}/signal
- sends a signal event to the given Process instance [POST]. The call accepts query map parameters with the signal details.
Example 17.14. A local signal invocation and its REST version
ksession.signalEvent("MySignal", "value", 23l);curl -v -u admin 'localhost:8080/business-central/rest/runtime/myDeployment/process/instance/23/signal?signal=MySignal&event=value'
- /runtime/{deploymentId}/process/instance/{procInstanceID}/abort
- aborts the Process instance [POST]
- /runtime/{deploymentId}/process/instance/{procInstanceID}/variables
- returns the variables of the Process instance [GET]. Variables are returned as JaxbVariablesResponse objects. Note that the returned variable values are strings.
17.1.3.2.2. Signal calls
signal/ calls send a signal defined by the provided query map parameters either to the deployment or to a particular process instance.
The following signal calls are provided:
- /runtime/{deploymentId}/process/instance/{procInstanceID}/signal
- sends a signal to the given process instance [POST]. See the previous subsection for an example of this call.
- /runtime/{deploymentId}/signal
- This operation takes a signal and an event query parameter and sends a signal to the deployment [POST].
- The signal parameter value is used as the name of the signal. This parameter is required.
- The event parameter value is used as the value of the event. This value may use the number query parameter syntax described earlier.
Example 17.15. Signal Call Example
/runtime/{deploymentId}/signal?signal={signalCode}
This call is equivalent to the ksession.signalEvent("signalName", eventValue) method.
17.1.3.2.3. Work item calls
/runtime/{deploymentId}/workitem/ calls allow you to complete or abort a particular work item.
The following work item calls are provided:
- /runtime/{deploymentId}/workitem/{workItemID}/complete
- completes the given work item [POST]. The call accepts query map parameters containing information about the results.
Example 17.16. A local invocation and its REST version
Map<String, Object> results = new HashMap<String, Object>();
results.put("one", "done");
results.put("two", 2);
kieSession.getWorkItemManager().completeWorkItem(23l, results);
curl -v -u admin 'localhost:8080/business-central/rest/runtime/myDeployment/workitem/23/complete?map_one=done&map_two=2i'
- /runtime/{deploymentId}/workitem/{workItemID}/abort
- aborts the given work item [POST]
17.1.3.2.4. History calls
/history/ calls administer logs of process instances, their nodes, and process variables.
Note
While the /history/ calls specified in 6.0.0.GA of JBoss BPM Suite are still available, as of 6.0.1.GA the /history/ calls have been made independent of any deployment, which is also reflected in the URLs used.
The following history calls are provided:
- /history/clear
- clears all process, variable, and node logs [POST]
- /history/instances
- returns logs of all Process instances [GET]
- /history/instance/{procInstanceID}
- returns all logs of Process instance (including child logs) [GET]
- /history/instance/{procInstanceID}/child
- returns logs of child Process instances [GET]
- /history/instance/{procInstanceID}/node
- returns logs of all nodes of the Process instance [GET]
- /history/instance/{procInstanceID}/node/{nodeID}
- returns logs of the node of the Process instance [GET]
- /history/instance/{procInstanceID}/variable
- returns variables of the Process instance with their values [GET]
- /history/instance/{procInstanceID}/variable/{variableID}
- returns the logs of the process instance that have the given variable ID [GET]
- /history/process/{procInstanceID}
- returns the logs of the given Process instance excluding logs of its nodes and variables [GET]
History calls that search by variable
These calls accept the activeProcesses parameter, which limits the selection to information from active process instances.
- /history/variable/{varId}
- returns the variable logs of the specified process variable [GET]
- /history/variable/{varId}/instances
- returns the process instance logs for processes that contain the specified process variable [GET]
- /history/variable/{varId}/value/{value}
- returns the variable logs for specified process variable with the specified value [GET]
Example 17.17. A local invocation and its REST version
auditLogService.findVariableInstancesByNameAndValue("countVar", "three", true);
curl -v -u admin 'localhost:8080/business-central/rest/history/variable/countVar/value/three?activeProcesses=true'
- /history/variable/{varId}/value/{value}/instances
- returns the process instance logs for process instances that contain the specified process variable with the specified value [GET]
17.1.3.2.5. Calls to process variables
/runtime/{deploymentId}/withvars/ calls allow you to work with Process variables. Note that all variable values are returned as strings in the JaxbVariablesResponse object.
The following withvars calls are provided:
- /runtime/{deploymentId}/withvars/process/{procDefinitionID}/start
- creates and starts a Process instance and returns the Process instance with its variables [POST]. Note that even if a passed variable is not defined in the underlying Process definition, it is created and initialized with the passed value.
- /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}
- returns Process instance with its variables [GET]
- /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}/signal
- sends a signal event to the Process instance (accepts query map parameters).
17.1.3.3. Task calls
The following task calls are provided:
- /task/{taskId: \\d+}
- returns the task in JAXB format [GET]. Further call paths are provided to perform other actions on tasks (refer to Section 17.1.3.3.1, “Task ID operations”).
- /task/query
- returns a TaskSummary list [GET]. Further call paths are provided to perform other actions on task/query (refer to Section 17.1.3.3.3, “Query operations”).
- /task/content/{contentId: \\d+}
- returns the task content in JAXB format [GET]. For further information, refer to Section 17.1.3.3.2, “Content operations”.
17.1.3.3.1. Task ID operations
task/{taskId: \\d+}/ACTION calls allow you to execute an action on the given task (if no action is defined, the call is a GET call that returns the JAXB representation of the task).
Table 17.1. Task Actions
| Task | Action |
|---|---|
activate | activate task (taskId as query param) |
claim | claim task [POST] (The user used in the authentication of the REST url call claims it.) |
claimnextavailable | claim next available task [POST] (This operation claims the next available task assigned to the user.) |
complete | complete task [POST] (accepts "query map parameters".) |
delegate | delegate task [POST] (Requires a targetId query parameter, which identifies the user to which the task is delegated.) |
exit | exit task [POST] (Note: The exit operation can be performed by any user or group specified as the administrator of a human task. If the task does not specify any values, the system automatically adds user Administrator and group Administrators to a task.) |
fail | fail task [POST] |
forward | forward task [POST] |
release | release task [POST] |
resume | resume task [POST] |
skip | skip task [POST] |
start | start task [POST] |
stop | stop task [POST] |
suspend | suspend task [POST] |
nominate | nominate task [POST] (Requires at least one of either the user or group query parameter, which identify the user(s) or group(s) that are nominated for the task.) |
17.1.3.3.2. Content operations
task/content/{contentId: \\d+} and task/{taskId: \\d+}/content operations return the serialized content associated with the given task.
The task content is serialized and stored using the org.jbpm.services.task.utils.ContentMarshallerHelper class. Because clients typically cannot use the org.jbpm.services.task.utils.ContentMarshallerHelper class, they cannot deserialize the task content directly. When using the REST call to obtain task content, the content is therefore first deserialized using the ContentMarshallerHelper class and then serialized with the common Java serialization mechanism, as the sketch after the following list illustrates.
These calls can only return the task content if the following conditions are met:
- The requested objects are instances of a class that implements the Serializable interface. In the case of Map objects, they only contain values that implement the Serializable interface.
- The objects are not instances of a local class, an anonymous class, or arrays of a local or anonymous class.
- The object classes are present on the class path of the server.
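For example, a client could deserialize the returned content with plain Java serialization. The following minimal sketch is an illustration only, not an excerpt from the product; contentBytes is a hypothetical byte array that already holds the serialized content extracted from the REST response, and exception handling is omitted.
import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;

// contentBytes holds the Java-serialized task content obtained from the REST response.
ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(contentBytes));
// Reading succeeds only if the content classes are Serializable and on the client class path.
Object taskContent = in.readObject();
in.close();
System.out.println("Task content: " + taskContent);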
17.1.3.3.3. Query operations
The /task/query call is a GET call that returns a TaskSummary list of the tasks that meet the criteria defined in the call parameters. Note that you can use the pagination feature to define the amount of data to be returned.
Parameters
The following parameters can be used with the task/query call:
- workItemId: returns only tasks based on the work item.
- taskId: returns only the task with the particular ID.
- businessAdministrator: returns tasks with the identified business administrator.
- potentialOwner: returns tasks that can be claimed by the potentialOwner user.
- status: returns tasks that are in the given status (Created, Ready, Reserved, InProgress, Completed or Failed).
- taskOwner: returns tasks assigned to the particular user (Created, Ready, Reserved, InProgress, Suspended, Completed, Failed, Error, Exited, or Obsolete).
- processInstanceId: returns tasks generated by the process instance.
- union: specifies whether the query should return the union or the intersection of the parameters.
Example 17.18. Query usage
The following query returns the task summaries of all tasks with a work item ID of 3, 4, or 5:
http://server:port/rest/task/query?workItemId=3&workItemId=4&workItemId=5
The following query returns tasks that have a work item ID of 11 and a task ID of 27 (the intersection of the two criteria):
http://server:port/rest/task/query?workItemId=11&taskId=27
The union parameter is used here so that the union of the two queries (the work item ID query and the task ID query) is returned:
http://server:port/rest/task/query?workItemId=11&taskId=27&union=true
The following query returns tasks whose status is `Created` and whose potential owner is `Bob`. Note that the status parameter value is case-insensitive:
http://server:port/rest/task/query?status=creAted&potentialOwner=Bob
The following query returns tasks whose status is `Created` and whose potential owner is `bob`. Note that the potential owner parameter is case-sensitive: `bob` is not the same user ID as `Bob`.
http://server:port/rest/task/query?status=created&potentialOwner=bob
The following query returns tasks from process instance 201 whose potential owner is `bob` and whose status is `Created` or `Ready`:
http://server:port/rest/task/query?status=created&status=ready&potentialOwner=bob&processInstanceId=201
For example, tasks with the following characteristics are returned:
- process instance id 201, potential owner `bob`, status `Ready`
- process instance id 201, potential owner `bob`, status `Created`
Tasks with the following characteristics are not returned:
- process instance id 183, potential owner `bob`, status `Created`
- process instance id 201, potential owner `mary`, status `Ready`
- process instance id 201, potential owner `bob`, status `Complete`
Usage
The following parameters can be entered multiple times: workItemId, taskId, businessAdministrator, potentialOwner, taskOwner, and processInstanceId. If the status parameter is entered multiple times, the query returns the intersection of the tasks that have any of the given status values and the tasks that satisfy the other criteria.
The language parameter can be defined only once; if it is not defined, the en-UK value is used.
17.1.4. The REST Query API
17.1.4.1. URL Layout
http://server.address:port/{application-id}/rest/query/
- runtime
  - task [GET] rich query for task summaries and process variables
  - process [GET] rich query for process instances and process variables
17.1.4.2. Query Parameters
- "query parameters" are strings like
processInstanceId,taskIdandtid. The case (lowercase or uppercase) of these parameters does not matter, except when the query parameter also specifies the name of a user-defined variable. - "parameters" are the values that are passed with some query parameters. These are values like
org.process.frombulator,29andharry.
http://localhost:8080/business-central/rest/query/runtime/process?processId=org.process.frombulator&piid=29
Example 17.19. Repeated query parameters
processId=org.example.process&processInstanceId=27&processInstanceId=29
This query returns a result that:
- only contains information about process instances with the org.example.process process definition
- only contains information about process instances that have an id of 27 or 29
17.1.4.2.1. Range and Regular Expression Parameters
Query parameters that:
- can be given in ranges have an X in the min/max column in the table below.
- use regular expressions have an X in the regex column in the table below.
17.1.4.2.2. Range Query Parameters
To pass the start (lower end) of a range, add _min to the end of the parameter name. To pass the upper end of a range, add _max to the end of the parameter name.
Example 17.20. Range parameters
processId=org.example.process&taskId_min=50&taskId_max=53
This query returns a result that:
- only contains information about tasks associated with the org.example.process process definition
- only contains information about tasks that have a task id between 50 and 53, inclusive
processId=org.example.process&taskId_min=52
This query returns a result that:
- only contains information about tasks associated with the org.example.process process definition
- only contains information about tasks that have a task id that is larger than or equal to 52
17.1.4.2.3. Regular Expression Query Parameters
To match a parameter value against a regular expression, add _re to the end of the parameter name.
The regular expression language contains two special characters:
- * means 0 or more characters
- . means 1 character
The backslash character (\) is not interpreted.
Example 17.21. Regular expression parameters
processId_re=org.example.*&processVersion=2.0
This query returns a result that:
- only contains information about process instances associated with a process definition whose name matches the regular expression org.example.*. This includes:
  - org.example.process
  - org.example.process.definition.example.long.name
  - orgXexampleX
- only contains information about process instances that have a process (definition) version of 2.0
17.1.4.3. Parameter Table
The task or process column describes whether a query parameter can be used with the task query operation, the process instance query operation, or both.
Table 17.2. Query Parameters
| parameter | short form | description | regex | min / max | task or process |
|---|---|---|---|---|---|
| processinstanceid | piid | Process instance id | | X | T,P |
| processid | pid | Process id | X | | T,P |
| workitemid | wid | Work item id | | | T,P |
| deploymentid | did | Deployment id | X | | T,P |
| taskid | tid | Task id | | X | T |
| initiator | init | Task initiator/creator | X | | T |
| stakeholder | stho | Task stakeholder | X | | T |
| potentialowner | po | Task potential owner | X | | T |
| taskowner | to | Task owner | X | | T |
| businessadmin | ba | Task business admin | X | | T |
| taskstatus | tst | Task status | | | T |
| processinstancestatus | pist | Process instance status | | | T,P |
| processversion | pv | Process version | X | | T,P |
| startdate | stdt | Process instance start date [1] | | X | T,P |
| enddate | edt | Process instance end date [1] | | X | T,P |
| varid | vid | Variable id | X | | T,P |
| varvalue | vv | Variable value | X | | T,P |
| var | var | Variable id and value [2] | | | T,P |
| varregex | vr | Variable id and value [3] | X | | T,P |
| all | all | Which variable history logs [4] | | | T,P |
[1] The date parameters use the format yy-MM-dd_HH:mm:ss. However, users can also submit only part of the date:
- Submitting only the date (yy-MM-dd) means that a time of 00:00:00 is used (the beginning of the day).
- Submitting only the time (HH:mm:ss) means that the current date is used.
Table 17.3. Example date strings
| Date string | Actual meaning |
|---|---|
| 15-05-29_13:40:12 | May 29th, 2015, 13:40:12 (1:40:12 PM) |
| 14-11-20 | November 20th, 2014, 00:00:00 |
| 9:30:00 | Today, 9:30:00 (AM) |
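For example, assuming a local Business Central instance, a query such as the following (the server address is a placeholder) would return process instances that started on or after November 20th, 2014, 00:00:00:
http://localhost:8080/business-central/rest/query/runtime/process?startdate_min=14-11-20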
[2] The var query parameter is used differently from other parameters. If you want to specify both the variable id and the value of a variable (as opposed to just the variable id), you can do so by using the var query parameter. The syntax is var_<variable-id>=<variable-value>.
Example 17.22. var_X=Y example
The query parameter var_myVar=value3 queries for process instances with variables [4] that are called myVar and that have the value value3.
[3] The varregex (or shortened version vr) parameter works similarly to the var query parameter. However, the value part of the query parameter can be a regular expression.
17.1.4.4. Parameter Examples
Table 17.4. Query parameters examples
| parameter | short form | example |
|---|---|---|
| processinstanceid | piid | piid=23 |
| processid | pid | processid=com.acme.example |
| workitemid | wid | wid_max=11 |
| deploymentid | did | did_re=com.willy.loompa.* |
| taskid | tid | taskid=4 |
| initiator | init | init_re=Davi* |
| stakeholder | stho | stho=theBoss&stho=theBossesAssistant |
| potentialowner | po | potentialowner=sara |
| taskowner | to | taskowner_re=*anderson |
| businessadmin | ba | ba=admin |
| taskstatus | tst | tst=Reserved |
| processinstancestatus | pist | pist=3&pist=4 |
| processversion | pv | processVersion_re=4.2* |
| startdate | stdt | stdt_min=00:00:00 |
| enddate | edt | edt_max=15-01-01 |
| varid | vid | varid=numCars |
| varvalue | vv | vv=abracadabra |
| var | var | var_numCars=10 |
| varregex | vr | vr_nameCar=chitty* |
| all | all | all |
17.1.4.5. Query Output Format
The query operations return either:
- a list of process instance info (JaxbQueryProcessInstanceInfo) objects
- or a list of task instance info (JaxbQueryTaskInfo) objects
Each process instance info object contains:
- a process instance object
- a list of 0 or more variable objects
Each task instance info object contains:
- the process instance id
- a list of 0 or more task summary objects
- a list of 0 or more variable objects
A minimal client sketch follows this list.
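The following minimal sketch is an illustration only, not part of the product documentation: it calls the rich query endpoint from plain Java and prints the raw XML response. The server URL, credentials, and query parameters are placeholder assumptions.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.Charset;

import org.apache.commons.codec.binary.Base64;

public class RichQueryClient {
    public static void main(String[] args) throws Exception {
        // Placeholder server location, credentials and query parameters.
        URL queryUrl = new URL("http://localhost:8080/business-central/rest/query/runtime/process"
                + "?processId_re=org.example.*");
        byte[] auth = Base64.encodeBase64("admin:admin".getBytes(Charset.forName("US-ASCII")));

        HttpURLConnection connection = (HttpURLConnection) queryUrl.openConnection();
        connection.setRequestMethod("GET");
        connection.setRequestProperty("Authorization", "Basic " + new String(auth));
        connection.setRequestProperty("Accept", "application/xml");

        // Print the raw response; a real client would unmarshal it into the
        // process instance info and variable objects described above.
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
    }
}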
17.1.5. Execute Operations
The recommended way to interact with the server from a Java client is the Remote Java API, which is provided by the org.kie.remote.client.api.RemoteRestRuntimeEngineFactory class and is shipped with JBoss BPM Suite. For performing runtime operations (such as starting a process instance with process variables, or completing a task with task variables) that involve passing a custom Java object used in the process, you can use the approach described in Section 17.4.3, “Custom Model Objects and Remote API”.
This section describes the execute operation. While the other REST calls accept only String or Integer values as parameters, the execute operation enables you to send complex Java objects to perform JBoss BPM Suite runtime operations.
The execute operations were created in order to support the Java Remote Runtime API. Use the execute operations only in exceptional circumstances, such as:
- When you need to pass complex objects as parameters
- When it is not possible to use the /runtime or /task endpoints
It is not recommended to use the execute operations when you are running any client other than Java.
The following example passes an instance of the custom class org.MyPOJO as a parameter to start a process:
package com.redhat.gss.jbpm;
import java.io.StringReader;
import java.io.StringWriter;
import java.net.URL;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import org.MyPOJO;
import org.apache.commons.codec.binary.Base64;
import org.jboss.resteasy.client.ClientRequest;
import org.jboss.resteasy.client.ClientRequestFactory;
import org.jboss.resteasy.client.ClientResponse;
import org.kie.api.command.Command;
import org.kie.remote.client.jaxb.JaxbCommandsRequest;
import org.kie.remote.client.jaxb.JaxbCommandsResponse;
import org.kie.remote.jaxb.gen.JaxbStringObjectPairArray;
import org.kie.remote.jaxb.gen.StartProcessCommand;
import org.kie.remote.jaxb.gen.util.JaxbStringObjectPair;
import org.kie.services.client.serialization.JaxbSerializationProvider;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandResponse;
public class StartProcessWithPOJO {
/*
* Set the parameters according to your installation
*/
private static final String DEPLOYMENT_ID = "org.kie.example:project1:3.0";
private static final String PROCESS_ID = "project1.proc_start";
private static final String PROCESS_PARAM_NAME = "myPOJO";
private static final String APP_URL = "http://localhost:8080/business-central/rest";
private static final String USER = "jesuino";
private static final String PASSWORD = "redhat2014!";
public static void main(String[] args) throws Exception {
// List of commands to be executed;
List<Command> commands = new ArrayList<Command>();
// a sample command to start a process:
StartProcessCommand startProcessCommand = new StartProcessCommand();
JaxbStringObjectPairArray params = new JaxbStringObjectPairArray();
params.getItems().add(new JaxbStringObjectPair(PROCESS_PARAM_NAME, new MyPOJO("My POJO TESTING")));
startProcessCommand.setProcessId(PROCESS_ID);
startProcessCommand.setParameter(params);
commands.add(startProcessCommand);
List<JaxbCommandResponse<?>> response = executeCommand(DEPLOYMENT_ID,
commands);
System.out.printf("Command %s executed.\n", response.toString());
System.out.println("commands1" + commands);
}
static List<JaxbCommandResponse<?>> executeCommand(String deploymentId,
List<Command> commands) throws Exception {
URL address = new URL(APP_URL + "/execute");
ClientRequest request = createRequest(address);
request.header(JaxbSerializationProvider.EXECUTE_DEPLOYMENT_ID_HEADER, DEPLOYMENT_ID);
JaxbCommandsRequest commandMessage = new JaxbCommandsRequest();
commandMessage.setCommands(commands);
commandMessage.setDeploymentId(DEPLOYMENT_ID);
String body = convertJaxbObjectToString(commandMessage);
request.body(MediaType.APPLICATION_XML, body);
ClientResponse<String> responseObj = request.post(String.class);
String strResponse = responseObj.getEntity();
System.out.println("RESPONSE FROM THE SERVER: \n" + strResponse);
JaxbCommandsResponse cmdsResp = convertStringToJaxbObject(strResponse);
return cmdsResp.getResponses();
}
static private ClientRequest createRequest(URL address) {
return new ClientRequestFactory().createRequest(
address.toExternalForm()).header("Authorization",
getAuthHeader());
}
static private String getAuthHeader() {
String auth = USER + ":" + PASSWORD;
byte[] encodedAuth = Base64.encodeBase64(auth.getBytes(Charset.forName("US-ASCII")));
return "Basic " + new String(encodedAuth);
}
static String convertJaxbObjectToString(Object object) throws JAXBException {
// TODO: Add your classes here
Class<?>[] classesToBeBound = { JaxbCommandsRequest.class, MyPOJO.class };
Marshaller marshaller = JAXBContext.newInstance(classesToBeBound)
.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
StringWriter stringWriter = new StringWriter();
marshaller.marshal(object, stringWriter);
String output = stringWriter.toString();
System.out.println("REQUEST CONTENT: \n" + output);
return output;
}
static JaxbCommandsResponse convertStringToJaxbObject(String str)
throws JAXBException {
Unmarshaller unmarshaller = JAXBContext.newInstance(
JaxbCommandsResponse.class).createUnmarshaller();
return (JaxbCommandsResponse) unmarshaller.unmarshal(new StringReader(
str));
}
}
The deployment ID is passed in the Kie-Deployment-Id request header, which is also available through the Java constant JaxbSerializationProvider.EXECUTE_DEPLOYMENT_ID_HEADER.
The /execute call takes the JaxbCommandsRequest object as its parameter. The JaxbCommandsRequest object contains a list of org.kie.api.command.Command objects. The commands stored in the JaxbCommandsRequest object are converted to a String representation and sent to the execute REST call. The JaxbCommandsRequest parameters are a deploymentId and a Command object.
To call /execute, you use Java code to convert the Command (which is a Java object) to a String (which is in the XML format). Once you generate the XML, you can use any Java or non-Java client to send it to the REST endpoint exposed by Business Central.
Make sure that the org.MyPOJO class is the same in your client code and on the server side. One way of achieving this is by sharing it through a Maven dependency. You can create the org.MyPOJO class using the data modeler tool of Business Central and, in your REST client, import the dependency of the Business Central project that includes this org.MyPOJO class. Here is an example of the pom.xml file with the Business Central project dependency (which contains the org.MyPOJO class) and other required dependencies:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.redhat.gss</groupId>
<artifactId>sample-rest-client</artifactId>
<version>1</version>
<name>Rest Client - Execute</name>
<properties>
<version.org.jboss.bom.eap>6.4.4.GA</version.org.jboss.bom.eap>
<!-- Define the version of brms/bpmsuite core artifacts -->
<version.org.jboss.bom.brms>6.2.1.GA-redhat-2</version.org.jboss.bom.brms>
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.bom.eap</groupId>
<artifactId>jboss-javaee-6.0-with-resteasy</artifactId>
<version>${version.org.jboss.bom.eap}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>org.jboss.bom.brms</groupId>
<artifactId>jboss-brms-bpmsuite-bom</artifactId>
<version>${version.org.jboss.bom.brms}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.kie.remote</groupId>
<artifactId>kie-remote-client</artifactId>
</dependency>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-core</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jaxrs</artifactId>
</dependency>
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.9</version>
</dependency>
<!-- STARTING MY BUSINESS CENTRAL PROJECT DEPENDENCY WHICH CONTAINS THE POJO -->
<dependency>
<artifactId>ExecuteProject</artifactId>
<groupId>org.redhat.gss</groupId>
<version>1.0</version>
</dependency>
<!-- ENDING MY BUSINESS CENTRAL PROJECT DEPENDENCY WHICH CONTAINS THE POJO -->
</dependencies>
</project>
Here, com.redhat.gss:remote-process-start-with-bean:1.0 is the kjar created by Business Central. This kjar includes the org.MyPOJO class. Therefore, by sharing the dependency, you ensure that the org.MyPOJO class on the server matches the one on the client.
17.1.5.1. Execute Operation Commands
The following tables list the commands that the execute operation accepts. See the constructor and set methods on the actual command classes for further information about which parameters these commands accept.
| AbortWorkItemCommand | SignalEventCommand |
| CompleteWorkItemCommand | StartCorrelatedProcessCommand |
| GetWorkItemCommand | StartProcessCommand |
| AbortProcessInstanceCommand | GetVariableCommand |
| GetProcessIdsCommand | GetFactCountCommand |
| GetProcessInstanceByCorrelationKeyCommand | GetGlobalCommand |
| GetProcessInstanceCommand | GetIdCommand |
| GetProcessInstancesCommand | FireAllRulesCommand |
| SetProcessInstanceVariablesCommand |
| ActivateTaskCommand | GetTaskAssignedAsPotentialOwnerCommand |
| AddTaskCommand | GetTaskByWorkItemIdCommand |
| CancelDeadlineCommand | GetTaskCommand |
| ClaimNextAvailableTaskCommand | GetTasksByProcessInstanceIdCommand |
| ClaimTaskCommand | GetTasksByStatusByProcessInstanceIdCommand |
| CompleteTaskCommand | GetTasksOwnedCommand |
| CompositeCommand | NominateTaskCommand |
| DelegateTaskCommand | ProcessSubTaskCommand |
| ExecuteTaskRulesCommand | ReleaseTaskCommand |
| ExitTaskCommand | ResumeTaskCommand |
| FailTaskCommand | SkipTaskCommand |
| ForwardTaskCommand | StartTaskCommand |
| GetAttachmentCommand | StopTaskCommand |
| GetContentCommand | SuspendTaskCommand |
| GetTaskAssignedAsBusinessAdminCommand |
| ClearHistoryLogsCommand | FindSubProcessInstancesCommand |
| FindActiveProcessInstancesCommand | FindSubProcessInstancesCommand |
| FindNodeInstancesCommand | FindVariableInstancesByNameCommand |
| FindProcessInstanceCommand | FindVariableInstancesCommand |
| FindProcessInstancesCommand |
17.1.6. REST summary
The base URL for the REST calls is http://server:port/business-central/rest.
Table 17.5. Knowledge Store REST calls
| URL Template | Type | Description |
|---|---|---|
| /jobs/{jobID} | GET | return the job status |
| /jobs/{jobID} | DELETE | remove the job |
| /organizationalunits | GET | return a list of organizational units |
| /organizationalunits | POST | create an organizational unit in the Knowledge Store described by the JSON OrganizationalUnit entity |
| /organizationalunits/{organizationalUnitName}/repositories/{repositoryName} | POST | add a repository to an organizational unit |
| /repositories/ | POST | add the repository to the organizational unit described by the JSON RepositoryRequest entity |
| /repositories | GET | return the repositories in the Knowledge Store |
| /repositories/{repositoryName} | DELETE | remove the repository from the Knowledge Store |
| /repositories/ | POST | create or clone the repository defined by the JSON RepositoryRequest entity |
| /repositories/{repositoryName}/projects/ | POST | create the project defined by the JSON entity in the repository |
| /repositories/{repositoryName}/projects/{projectName}/maven/compile/ | POST | compile the project |
| /repositories/{repositoryName}/projects/{projectName}/maven/install | POST | install the project |
| /repositories/{repositoryName}/projects/{projectName}/maven/test/ | POST | compile the project and run tests as part of compilation |
| /repositories/{repositoryName}/projects/{projectName}/maven/deploy/ | POST | deploy the project |
Table 17.6. runtime REST calls
| URL Template | Type | Description |
|---|---|---|
| /runtime/{deploymentId}/process/{procDefID}/start | POST | start a process instance based on the Process definition (accepts query map parameters) |
| /runtime/{deploymentId}/process/instance/{procInstanceID} | GET | return a process instance details |
| /runtime/{deploymentId}/process/instance/{procInstanceID}/abort | POST | abort the process instance |
| /runtime/{deploymentId}/process/instance/{procInstanceID}/signal | POST | send a signal event to process instance (accepts query map parameters) |
| /runtime/{deploymentId}/process/instance/{procInstanceID}/variable/{varId} | GET | return a variable from a process instance |
| /runtime/{deploymentId}/signal/{signalCode} | POST | send a signal event to deployment |
| /runtime/{deploymentId}/workitem/{workItemID}/complete | POST | complete a work item (accepts query map parameters) |
| /runtime/{deploymentId}/workitem/{workItemID}/abort | POST | abort a work item |
| /runtime/{deploymentId}/withvars/process/{procDefinitionID}/start | POST | start a process instance and return the process instance with its variables. Note that even if a passed variable is not defined in the underlying process definition, it is created and initialized with the passed value. |
| /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}/ | GET | return a process instance with its variables |
| /runtime/{deploymentId}/withvars/process/instance/{procInstanceID}/signal | POST | send a signal event to the process instance (accepts query map parameters) |
Table 17.7. task REST calls
| URL Template | Type | Description |
|---|---|---|
| /task/query | GET | return a TaskSummary list |
| /task/content/{contentID} | GET | return the content of a task |
| /task/{taskID} | GET | return the task |
| /task/{taskID}/activate | POST | activate the task |
| /task/{taskID}/claim | POST | claim the task |
| /task/{taskID}/claimnextavailable | POST | claim the next available task |
| /task/{taskID}/complete | POST | complete the task (accepts query map parameters) |
| /task/{taskID}/delegate | POST | delegate the task |
| /task/{taskID}/exit | POST | exit the task |
| /task/{taskID}/fail | POST | fail the task |
| /task/{taskID}/forward | POST | forward the task |
| /task/{taskID}/nominate | POST | nominate the task |
| /task/{taskID}/release | POST | release the task |
| /task/{taskID}/resume | POST | resume the task (after suspending) |
| /task/{taskID}/skip | POST | skip the task |
| /task/{taskID}/start | POST | start the task |
| /task/{taskID}/stop | POST | stop the task |
| /task/{taskID}/suspend | POST | suspend the task |
| /task/{taskID}/showTaskForm | GET | generate a URL to show the task form on a remote application as a JaxbTaskFormResponse instance |
| /task/{taskID}/content | GET | return the content of a task |
Table 17.8. history REST calls
| URL Template | Type | Description |
|---|---|---|
| /history/clear/ | POST | delete all process, node and history records |
| /history/instances | GET | return the list of all process instance history records |
| /history/instance/{procInstId} | GET | return a list of process instance history records for a process instance |
| /history/instance/{procInstId}/child | GET | return a list of process instance history records for the subprocesses of the process instance |
| /history/instance/{procInstId}/node | GET | return a list of node history records for a process instance |
| /history/instance/{procInstId}/node/{nodeId} | GET | return a list of node history records for a node in a process instance |
| /history/instance/{procInstId}/variable | GET | return a list of variable history records for a process instance |
| /history/instance/{procInstId}/variable/{variableId} | GET | return a list of variable history records for a variable in a process instance |
| /history/process/{procDefId} | GET | return a list of process instance history records for process instances using a given process definition |
| /history/variable/{varId} | GET | return a list of variable history records for a variable |
| /history/variable/{varId}/instances | GET | return a list of process instance history records for process instances that contain a variable with the given variable id |
| /history/variable/{varId}/value/{value} | GET | return a list of variable history records for variable(s) with the given variable id and given value |
| /history/variable/{varId}/value/{value}/instances | GET | return a list of process instance history records for process instances with the specified variable that contains the specified variable value |
Table 17.9. deployment REST calls
| URL Template | Type | Description |
|---|---|---|
| /deployment | GET | return a list of (deployed) deployments |
| /deployment/{deploymentId} | GET | return the status and information about the deployment |
| /deployment/{deploymentId}/deploy | POST | submit a request to deploy a deployment |
| /deployment/{deploymentId}/undeploy | POST | submit a request to undeploy a deployment |
| /deployment/{deploymentId}/deactivate | POST | deactivate a deployment |
| /deployment/{deploymentId}/activate | POST | activate a deployment |
Table 17.10. query REST calls
| URL Template | Type | Description |
|---|---|---|
| /query/runtime/process | GET | query process instances and process variables |
| /query/runtime/task | GET | query tasks and process variables |
| /query/task | POST | query tasks |
17.1.7. Control of the REST API
Access to the REST API is controlled by the roles defined in the web.xml file. The following table lists the available roles, their type, and their scope of access.
Table 17.11. Available roles, their type and scope of access
| Name | Type | Scope of access |
|---|---|---|
| rest-all | GET, POST, DELETE | All direct REST calls (excluding remote client) |
| rest-project | GET, POST, DELETE | Knowledge store REST calls |
| rest-deployment | GET, POST | Deployment unit REST calls |
| rest-process | GET, POST | Runtime and history REST calls |
| rest-process-read-only | GET | Runtime and history REST calls |
| rest-task | GET, POST | Task REST calls |
| rest-task-read-only | GET | Task REST calls |
| rest-query | GET | REST query API calls |
| rest-client | POST | Remote client calls |
17.2. JMS
17.2.1. JMS Queue Setup
The JMS API uses the following queues:
- jms/queue/KIE.SESSION
- jms/queue/KIE.TASK
- jms/queue/KIE.RESPONSE
The KIE.SESSION and KIE.TASK queues should be used to send request messages to the JMS API. Command response messages will then be placed on the KIE.RESPONSE queue. Command request messages that involve starting and managing business processes should be sent to the KIE.SESSION queue, and command request messages that involve managing human tasks should be sent to the KIE.TASK queue.
Although there are two input queues, KIE.SESSION and KIE.TASK, this is only in order to provide multiple input queues so as to optimize processing: command request messages are processed in the same manner regardless of which queue they are sent to. However, in some cases, users may send many more requests involving human tasks than requests involving business processes, but not want the processing of business process-related request messages to be delayed by the human task messages. By sending the appropriate command request messages to the appropriate queues, this problem can be avoided.
The request messages contain a serialized JaxbCommandsRequest object. At the moment, only XML serialization (as opposed to JSON or protobuf, for example) is supported.
17.2.2. Serialization issues
- A user-defined class must satisfy the following requirements in order to be properly serialized and deserialized by the JMS or REST API (a minimal example follows this list):
- The user-defined class must be correctly annotated with JAXB annotations, including the following:
  - The user-defined class must be annotated with a javax.xml.bind.annotation.XmlRootElement annotation with a non-empty name value.
  - All fields or getter/setter methods must be annotated with javax.xml.bind.annotation.XmlElement or javax.xml.bind.annotation.XmlAttribute annotations.
  Furthermore, the following usage of JAXB annotations is recommended:
  - Annotate the user-defined class with a javax.xml.bind.annotation.XmlAccessorType annotation specifying that fields should be used (javax.xml.bind.annotation.XmlAccessType.FIELD). This also means that you should annotate the fields (instead of the getter or setter methods) with @XmlElement or @XmlAttribute annotations.
  - Fields annotated with @XmlElement or @XmlAttribute annotations should also be annotated with javax.xml.bind.annotation.XmlSchemaType annotations specifying the type of the field, even if the fields contain primitive values.
  - Use objects to store primitive values. For example, use the java.lang.Integer class for storing an integer value, and not the int type. This way it will always be obvious whether the field is storing a value.
- The user-defined class definition must provide a no-arg constructor.
- Any fields in the user-defined class must either be object primitives (such as a Long or String) or otherwise be objects that satisfy the first 2 requirements in this list (correct usage of JAXB annotations and a no-arg constructor).
- The class definition must be included in the deployment JAR of the deployment that the JMS message content is meant for.
Note
If you create your class definitions from an XSD schema, you may end up creating classes that inconsistently (among classes) refer to a namespace. This inconsistent use of a namespace can prevent these class instances from being correctly deserialized when received as a parameter in a command on the server side. For example, you may create a class that is used in a BPMN2 process, and add an instance of this class as a parameter when starting the process. While sending the command/operation request (via the Remote (client) Java API) will succeed, the parameter will not be correctly deserialized on the server side, leading the process to eventually throw an exception about an unexpected type for the class.
- The sender must set a "deploymentId" string property on the JMS bytes message to the name of the deployment. This property is necessary in order to be able to load the proper classes from the deployment itself before deserializing the message on the server side.
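The following minimal sketch shows one way a user-defined class could satisfy these requirements. It is an illustration only, not taken from the product; the class and field names are assumptions.
import java.io.Serializable;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSchemaType;

// Hypothetical process variable class following the recommendations above.
@XmlRootElement(name = "my-pojo")
@XmlAccessorType(XmlAccessType.FIELD)
public class MyPOJO implements Serializable {

    private static final long serialVersionUID = 1L;

    // Fields (not getters/setters) are annotated because field access is used.
    @XmlElement
    @XmlSchemaType(name = "string")
    private String name;

    // An object wrapper (Integer) is used instead of the primitive int.
    @XmlElement
    @XmlSchemaType(name = "int")
    private Integer count;

    // A no-arg constructor is required for JAXB.
    public MyPOJO() {
    }

    public MyPOJO(String name, Integer count) {
        this.name = name;
        this.count = count;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public Integer getCount() { return count; }
    public void setCount(Integer count) { this.count = count; }
}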
17.2.3. Example JMS Usage
// normal java imports skipped
import org.drools.core.command.runtime.process.StartProcessCommand;
import org.jbpm.services.task.commands.GetTaskAssignedAsPotentialOwnerCommand;
import org.kie.api.command.Command;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.model.TaskSummary;
// 1
import org.kie.services.client.api.command.exception.RemoteCommunicationException;
import org.kie.services.client.serialization.JaxbSerializationProvider;
import org.kie.services.client.serialization.SerializationConstants;
import org.kie.services.client.serialization.SerializationException;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandResponse;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandsRequest;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandsResponse;
import org.kie.services.client.serialization.jaxb.rest.JaxbExceptionResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class DocumentationJmsExamples {
protected static final Logger logger = LoggerFactory.getLogger(DocumentationJmsExamples.class);
public void sendAndReceiveJmsMessage() {
String USER = "charlie";
String PASSWORD = "ch0c0licious";
String DEPLOYMENT_ID = "test-project";
String PROCESS_ID_1 = "oompa-processing";
URL serverUrl;
try {
serverUrl = new URL("http://localhost:8080/business-central/");
} catch (MalformedURLException murle) {
logger.error("Malformed URL for the server instance!", murle);
return;
}
// Create JaxbCommandsRequest instance and add commands
Command<?> cmd = new StartProcessCommand(PROCESS_ID_1);
int oompaProcessingResultIndex = 0;
//5
JaxbCommandsRequest req = new JaxbCommandsRequest(DEPLOYMENT_ID, cmd);
//2
req.getCommands().add(new GetTaskAssignedAsPotentialOwnerCommand(USER, "en-UK"));
int loompaMonitoringResultIndex = 1;
//5
// Get JNDI context from server
InitialContext context = getRemoteJbossInitialContext(serverUrl, USER, PASSWORD);
// Create JMS connection
ConnectionFactory connectionFactory;
try {
connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
} catch (NamingException ne) {
throw new RuntimeException("Unable to lookup JMS connection factory.", ne);
}
// Setup queues
Queue sendQueue, responseQueue;
try {
sendQueue = (Queue) context.lookup("jms/queue/KIE.SESSION");
responseQueue = (Queue) context.lookup("jms/queue/KIE.RESPONSE");
} catch (NamingException ne) {
throw new RuntimeException("Unable to lookup send or response queue", ne);
}
// Send command request
Long processInstanceId = null; // needed if you're doing an operation on a PER_PROCESS_INSTANCE deployment
String humanTaskUser = USER;
JaxbCommandsResponse cmdResponse = sendJmsCommands(
DEPLOYMENT_ID, processInstanceId, humanTaskUser, req,
connectionFactory, sendQueue, responseQueue,
USER, PASSWORD, 5);
// Retrieve results
ProcessInstance oompaProcInst = null;
List<TaskSummary> charliesTasks = null;
//6
for (JaxbCommandResponse<?> response : cmdResponse.getResponses()) {
if (response instanceof JaxbExceptionResponse) {
// something went wrong on the server side
JaxbExceptionResponse exceptionResponse = (JaxbExceptionResponse) response;
throw new RuntimeException(exceptionResponse.getMessage());
}
//5
if (response.getIndex() == oompaProcessingResultIndex) {
oompaProcInst = (ProcessInstance) response.getResult();
//6
} else if (response.getIndex() == loompaMonitoringResultIndex) {
//5
charliesTasks = (List<TaskSummary>) response.getResult();
//6
}
}
}
private JaxbCommandsResponse sendJmsCommands(String deploymentId, Long processInstanceId, String user, JaxbCommandsRequest req,
ConnectionFactory factory, Queue sendQueue, Queue responseQueue, String jmsUser, String jmsPassword, int timeout) {
req.setProcessInstanceId(processInstanceId);
req.setUser(user);
Connection connection = null;
Session session = null;
String corrId = UUID.randomUUID().toString();
String selector = "JMSCorrelationID = '" + corrId + "'";
JaxbCommandsResponse cmdResponses = null;
try {
// setup
MessageProducer producer;
MessageConsumer consumer;
try {
if (jmsPassword != null) {
connection = factory.createConnection(jmsUser, jmsPassword);
} else {
connection = factory.createConnection();
}
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
producer = session.createProducer(sendQueue);
consumer = session.createConsumer(responseQueue, selector);
connection.start();
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to setup a JMS connection.", jmse);
}
JaxbSerializationProvider serializationProvider = new JaxbSerializationProvider();
// if necessary, add user-created classes here:
// xmlSerializer.addJaxbClasses(MyType.class, AnotherJaxbAnnotatedType.class);
// Create msg
BytesMessage msg;
try {
msg = session.createBytesMessage();
//3
// set properties
msg.setJMSCorrelationID(corrId);
//3
msg.setIntProperty(SerializationConstants.SERIALIZATION_TYPE_PROPERTY_NAME, JaxbSerializationProvider.JMS_SERIALIZATION_TYPE);
//3
Collection<Class<?>> extraJaxbClasses = serializationProvider.getExtraJaxbClasses();
if (!extraJaxbClasses.isEmpty()) {
String extraJaxbClassesPropertyValue = JaxbSerializationProvider
.classSetToCommaSeperatedString(extraJaxbClasses);
msg.setStringProperty(SerializationConstants.EXTRA_JAXB_CLASSES_PROPERTY_NAME, extraJaxbClassesPropertyValue);
msg.setStringProperty(SerializationConstants.DEPLOYMENT_ID_PROPERTY_NAME, deploymentId);
}
// serialize request
String xmlStr = serializationProvider.serialize(req);
msg.writeUTF(xmlStr);
//3
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to create and fill a JMS message.", jmse);
} catch (SerializationException se) {
throw new RemoteCommunicationException("Unable to deserialze JMS message.", se.getCause());
}
// send
try {
producer.send(msg);
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to send a JMS message.", jmse);
}
// receive
Message response;
//4
try {
response = consumer.receive(timeout);
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to receive or retrieve the JMS response.", jmse);
}
if (response == null) {
logger.warn("Response is empty, leaving");
return null;
}
// extract response
assert response != null : "Response is empty.";
try {
String xmlStr = ((BytesMessage) response).readUTF();
cmdResponses = (JaxbCommandsResponse) serializationProvider.deserialize(xmlStr);
} catch (JMSException jmse) {
throw new RemoteCommunicationException("Unable to extract " + JaxbCommandsResponse.class.getSimpleName()
+ " instance from JMS response.", jmse);
} catch (SerializationException se) {
throw new RemoteCommunicationException("Unable to extract " + JaxbCommandsResponse.class.getSimpleName()
+ " instance from JMS response.", se.getCause());
}
assert cmdResponses != null : "Jaxb Cmd Response was null!";
} finally {
if (connection != null) {
try {
connection.close();
session.close();
} catch (JMSException jmse) {
logger.warn("Unable to close connection or session!", jmse);
}
}
}
return cmdResponses;
}
private InitialContext getRemoteJbossInitialContext(URL url, String user, String password) {
Properties initialProps = new Properties();
initialProps.setProperty(InitialContext.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
String jbossServerHostName = url.getHost();
initialProps.setProperty(InitialContext.PROVIDER_URL, "remote://"+ jbossServerHostName + ":4447");
initialProps.setProperty(InitialContext.SECURITY_PRINCIPAL, user);
initialProps.setProperty(InitialContext.SECURITY_CREDENTIALS, password);
for (Object keyObj : initialProps.keySet()) {
String key = (String) keyObj;
System.setProperty(key, (String) initialProps.get(key));
}
try {
return new InitialContext(initialProps);
} catch (NamingException e) {
throw new RemoteCommunicationException("Unable to create " + InitialContext.class.getSimpleName(), e);
}
}
}
This example requires the kie-services-client and the kie-services-jaxb JARs.
The JaxbCommandsRequest instance is the "holder" object in which you place all of the commands you want to execute in a particular request. By using the JaxbCommandsRequest.getCommands() method, you can retrieve the list of commands in order to add more commands to the request.
A command request message must meet the following requirements:
- It must be a JMS bytes message.
- It must have a filled JMS Correlation ID property.
- It must have an int property with the name "serialization" set to an acceptable value (only 0 at the moment).
- It must contain a serialized instance of a JaxbCommandsRequest, added to the message as a UTF string.
The index field of the returned JaxbCommandResponse instances matches the index of the initial command. Because not all commands return a result, it is possible to send 3 commands with a command request message and then receive a command response message that only includes one JaxbCommandResponse message with an index value of 1. That 1 then identifies it as the response to the second command.
Each result is wrapped in an object that implements the JaxbCommandResponse interface. The JaxbCommandResponse.getResult() method then returns the JAXB equivalent of the actual result, which conforms to the interface of the result.
For example, the StartProcessCommand returns a ProcessInstance. In order to return this object to the requester, the ProcessInstance is converted to a JaxbProcessInstanceResponse and then added as a JaxbCommandResponse to the command response message. The same applies to the List<TaskSummary> that is returned by the GetTaskAssignedAsPotentialOwnerCommand.
However, not every method of the ProcessInstance interface can be called on the JaxbProcessInstanceResponse, because the JaxbProcessInstanceResponse is simply a representation of a ProcessInstance object. This applies to various other command responses as well. In particular, methods that require an active (backing) KieSession, such as ProcessInstance.getProcess() or ProcessInstance.signalEvent(String type, Object event), will throw an UnsupportedOperationException.
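The following minimal sketch (illustrative only, not an excerpt from the product) shows how a client might read the StartProcessCommand result from the response message built in the example above; the variable names are assumptions.
// "cmdResponse" is the JaxbCommandsResponse received as in the example above.
JaxbCommandResponse<?> firstResponse = cmdResponse.getResponses().get(0);

// The result is a JaxbProcessInstanceResponse, which implements ProcessInstance.
ProcessInstance procInst = (ProcessInstance) firstResponse.getResult();
long processInstanceId = procInst.getId();   // simple data access works
String processId = procInst.getProcessId();  // also simple data access

// Methods that need a backing KieSession are not supported on this client-side
// representation and would throw an UnsupportedOperationException:
// procInst.signalEvent("my-signal", null);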
17.3. EJB Interface
The EJB interface allows you to use the KieSession and TaskService remotely. This allows for close transaction integration between the execution engine and remote customer applications.
There is no remote EJB RuleService at this time, but the ProcessService class exposes an execute method that allows you to use various rule-related commands, like InsertCommand and FireAllRulesCommand.
Deployment of EJB Client
The EJB remote client is provided in the jbpm-services-ejb-client-VERSION-redhat-MINOR.jar file.
17.3.1. EJB Interface Methods
- org.jbpm.services.ejb.api: the extension to the Services API for EJB needs.
- org.jbpm.services.ejb.impl: EJB wrappers on top of the core service implementation.
- org.jbpm.services.ejb.client: The EJB remote client implementation that works on JBoss EAP only.
- DefinitionService: Use this interface to gather information about processes (id, name and version), process variables (name and type), defined reusable subprocesses, domain-specific services, user tasks, and user task inputs and outputs.
- DeploymentService: Use this interface to initiate deployments and un-deployments. Methods include deploy, undeploy, getRuntimeManager, getDeployedUnits, isDeployed, activate, deactivate and getDeployedUnit. Calling the deploy method with an instance of DeploymentUnit deploys it into the runtime engine by building a RuntimeManager instance for the deployed unit. Upon successful deployment, an instance of DeployedUnit is created and cached for further usage. These methods only work if the artifact/project is already installed in a Maven repository.
- ProcessService: Use this interface to control the lifecycle of one or more processes and work items (see the sketch after this list).
- RuntimeDataService: Use this interface to retrieve data about the runtime: process instances, process definitions, node instance information and variable information. It includes several convenience methods for gathering task information based on owner, status and time.
- UserTaskService: Use this interface to control the lifecycle of a user task. Methods include all the usual ones: activate, start, stop and execute, amongst others.
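The following minimal sketch is an assumption based on the service interfaces above, not an excerpt from the product: it shows how a client could start and then abort a process instance through an injected ProcessService. The deployment ID, process ID, and variable names are placeholders.
import java.util.HashMap;
import java.util.Map;

// Assumes processService was injected as shown in the next section, for example:
// @EJB(lookup = "ejb:/sample-war-ejb-app/ProcessServiceEJBImpl!org.jbpm.services.ejb.api.ProcessServiceEJBRemote")
// private ProcessServiceEJBRemote processService;

Map<String, Object> params = new HashMap<String, Object>();
params.put("approver", "john");   // hypothetical process variable

// Start a process instance in the given deployment and abort it again.
Long processInstanceId = processService.startProcess("org.kie.example:project1:1.0", "project1.myProcess", params);
processService.abortProcessInstance(processInstanceId);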
17.3.2. Generating the EJB Services WAR
- Update the persistence.xml file in Business Central. Edit the hibernate.hbm2ddl.auto property and set its value to update (instead of create).
- Register the Human Task callback using a startup class:
@Singleton
@Startup
public class StartupBean {
  @PostConstruct
  public void init() {
    System.setProperty("org.jbpm.ht.callback", "jaas");
  }
}
- Generate the WAR file:
mvn assembly:assembly
- Deploy the generated WAR file (sample-war-ejb-app.war) in the JBoss EAP instance that JBoss BPM Suite 6.1 is running in.
Note
If deploying on a JBoss EAP container separate from the one where JBoss BPM Suite is running, you need to:
- configure your application/app server to invoke a remote EJB.
- configure your application/app server to propagate the security context.
Warning
When you deploy your EJB WAR on the same instance of JBoss EAP, avoid using the Singleton strategy for your runtime sessions. If you use the Singleton strategy, both applications will load the same ksession instance from the underlying file system and cause optimistic lock exceptions.
- To test, create a simple web application and inject the EJB Services:
@EJB(lookup = "ejb:/sample-war-ejb-app/ProcessServiceEJBImpl!org.jbpm.services.ejb.api.ProcessServiceEJBRemote")
private ProcessServiceEJBRemote processService;

@EJB(lookup = "ejb:/sample-war-ejb-app/UserTaskServiceEJBImpl!org.jbpm.services.ejb.api.UserTaskServiceEJBRemote")
private UserTaskServiceEJBRemote userTaskService;

@EJB(lookup = "ejb:/sample-war-ejb-app/RuntimeDataServiceEJBImpl!org.jbpm.services.ejb.api.RuntimeDataServiceEJBRemote")
private RuntimeDataServiceEJBRemote runtimeDataService;
17.4. Remote Java API
The Remote Java API provides KieSession, TaskService and AuditService interfaces to the JMS and REST APIs.
The client implementations of these interfaces take care of the underlying transport and serialization details, so you can interact with a remote Business Central instance through the familiar KieSession or TaskService interface.
Important
While the KieSession, TaskService and AuditService instances provided by the Remote Java API may "look" and "feel" like local instances of the same interfaces, make sure to remember that these instances are only wrappers around a REST or JMS client that interacts with a remote REST or JMS API.
If a requested operation fails on the server, the client instance throws a RuntimeException indicating that the REST call failed. This is different from the behavior of a "real" (or local) instance of a KieSession, TaskService or AuditService, because the exception a local instance throws relates to how the operation failed. Also, while local instances require different handling (such as having to dispose of a KieSession), client instances provided by the Remote Java API hold no state and thus do not require any special handling.
Operations that would otherwise throw a more specific exception (such as the TaskService.claim(taskId, userId) operation when called by a user who is not a potential owner) now throw a RuntimeException when the requested operation fails on the server.
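A minimal illustration of this behavior (hypothetical task ID and user name), assuming an engine built as described in the procedure below:
// "engine" is a RemoteRuntimeEngine client instance (see the procedure below).
TaskService taskService = engine.getTaskService();
try {
    // If "mary" is not a potential owner of task 42, the server rejects the call
    // and the client wrapper surfaces it as a RuntimeException.
    taskService.claim(42l, "mary");
} catch (RuntimeException e) {
    System.out.println("Remote operation failed: " + e.getMessage());
}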
Operations through the Remote Java API require a RemoteRuntimeEngine instance. The recommended way to create one is to use RemoteRestRuntimeEngineBuilder or RemoteJmsRuntimeEngineBuilder. There are a number of different methods for both the JMS and REST client builders that allow the configuration of parameters such as the base URL of the REST API, the JMS queue location, or the timeout while waiting for responses.
Procedure 17.1. Creating the RemoteRuntimeEngine Instance
- Instantiate the RemoteRestRuntimeEngineBuilder or RemoteJmsRuntimeEngineBuilder by calling either RemoteRuntimeEngineFactory.newRestBuilder() or RemoteRuntimeEngineFactory.newJmsBuilder().
- Set the required parameters.
- Finally, call the build() method.
Once the RemoteRuntimeEngine instance has been created, there are a couple of methods that can be used to instantiate the client classes you want to use:
Remote Java API Methods
- KieSession RemoteRuntimeEngine.getKieSession(): This method instantiates a new (client) KieSession instance.
- TaskService RemoteRuntimeEngine.getTaskService(): This method instantiates a new (client) TaskService instance.
- AuditService RemoteRuntimeEngine.getAuditService(): This method instantiates a new (client) AuditService instance.
To start your own project, it is important to specify the BPM Suite BOM in the project's pom.xml file. Also, make sure you add the kie-remote-client dependency. See the following example:
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.bom.brms</groupId>
      <artifactId>jboss-brms-bpmsuite-bom</artifactId>
      <version>6.2.0.GA-redhat-1</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.kie.remote</groupId>
    <artifactId>kie-remote-client</artifactId>
  </dependency>
</dependencies>
17.4.1. The REST Remote Java RuntimeEngine Factory
RemoteRuntimeEngineFactory class is the starting point for building and configuring a new RemoteRuntimeEngine instance that can interact with the remote API. This class creates an instance of a REST client builder using the newRestBuilder() method. This builder is then used to create a RemoteRuntimeEngine instance that acts as a client to the remote REST API. The RemoteRestRuntimeEngineBuilder exposes the following properties for configuration:
Table 17.12. RemoteRestRuntimeEngineBuilder Methods
| Method Name | Parameter Type | Description |
|---|---|---|
| Url | java.net.URL | URL of the deployed Business Central. For example: http://localhost:8080/business-central/. |
| UserName | java.lang.String | The user name to access the REST API. |
| Password | java.lang.String | The password to access the REST API. |
| DeploymentId | java.lang.String | The name (id) of the deployment the RuntimeEngine must interact with. This can be an empty String in case you are only interested in task operations. |
| Timeout | int | The maximum number of seconds the engine must wait for a response from the server. |
| ProcessInstanceId | long | The method that adds the process instance id, which may be necessary when interacting with deployments that employ the per process instance runtime strategy. |
| ExtraJaxbClasses | class | The method that adds extra classes to the classpath available to the serialization mechanisms. When passing instances of user-defined classes in a Remote Java API call, it is important to have added the classes using this method first so that the class instances can be serialized correctly. |
Once you have set the required properties, call build() to get access to the RemoteRuntimeEngine.
Important
The user executing the RemoteRuntimeEngine calls has to have the rest-client and rest-all roles assigned.
The following example illustrates how the Remote Java API can be used with the REST API.
import java.net.MalformedURLException;
import java.net.URL;
import java.util.List;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;
import org.kie.remote.client.api.RemoteRuntimeEngineFactory;
public void startProcessAndHandleTaskViaRestRemoteJavaAPI(
URL instanceUrl, String deploymentId, String user, String password) {
// The serverRestUrl should contain a URL similar to
// "http://localhost:8080/business-central/".
// Set up the factory class with the necessary information to communicate
// with the REST services.
RuntimeEngine engine = RemoteRuntimeEngineFactory
.newRestBuilder()
.addUrl(instanceUrl)
.addUserName(user)
.addPassword(password)
.addDeploymentId(deploymentId)
.build();
KieSession ksession = engine
.getKieSession();
TaskService taskService = engine
.getTaskService();
// Each operation on a KieSession, TaskService or AuditService (client)
// instance sends a request for the operation to the server side
// and waits for the response. If something goes wrong on the server side,
// the client will throw an exception.
ProcessInstance processInstance = ksession
.startProcess("project1.start_and_task_test");
String taskUserId = user;
long procId = processInstance
.getId();
taskService = engine
.getTaskService();
List<TaskSummary> tasks = taskService
.getTasksAssignedAsPotentialOwner(user, "en-UK");
long taskId = -1;
for (TaskSummary task : tasks) {
if (task.getProcessInstanceId() == procId) {
taskId = task.getId();
}
}
if (taskId == -1) {
throw new IllegalStateException(
"Unable to find task for "
+ user
+ " in process instance "
+ procId);
}
taskService.start(taskId, taskUserId);
}
Note
The getPotentialOwners() method of the TaskSummary class does not return the list of potential owners of a task.
To retrieve the potential owners, use the Task interface from the org.kie.api.task.model package instead, as the following example shows. Also notice that the getTaskById() method uses the task ID as a parameter.
import org.kie.api.task.model.OrganizationalEntity;
import org.kie.api.task.model.Task;
Task task = taskService.getTaskById(TASK_ID);
List<OrganizationalEntity> org = task.getPeopleAssignments().getPotentialOwners();
for (OrganizationalEntity ent : org) {
System.out.println("org: " + ent.getId());
}
The same applies to the getActualOwnerId() and getCreatedById() methods.
17.4.2. Calling Tasks Without Deployment ID
The addDeploymentId() method called on the RemoteRestRuntimeEngineBuilder requires the calling application to pass the deploymentId parameter to connect to Business Central. The deploymentId is the ID of the deployment with which the RuntimeEngine interacts. However, there may be applications that require working with human tasks and dealing with processes across multiple deployments. In such cases, where providing deploymentId parameters for multiple deployments to connect to Business Central is not feasible, it is possible to skip the parameter when using the fluent API of the RemoteRestRuntimeEngineBuilder.
Task operations can then be performed without configuring the deploymentId parameter. If a request requires the deploymentId parameter but does not have it configured, an exception is thrown, as the following example shows.
RuntimeEngine engine = RemoteRuntimeEngineFactory
.newRestBuilder()
.addUrl(instanceUrl)
.addUserName(user)
.addPassword(password)
.build();
// This call does not require the deployment ID and ends successfully:
engine.getTaskService().claim(23l, "user");
// This code throws a "MissingRequiredInfoException" because the
// deployment ID is required:
engine.getKieSession().startProcess("org.test.process");
17.4.3. Custom Model Objects and Remote API
Procedure 17.2. Accessing custom model objects using the Remote API
- Ensure that the custom model objects have been installed into the local Maven repository of the project that they are a part of. To achieve that, the project has to be built successfully.
- If your client application is a Maven-based project, include the custom model objects project as a Maven dependency in the pom.xml configuration file of the client application:
<dependency>
  <groupId>${groupid}</groupId>
  <artifactId>${artifactid}</artifactId>
  <version>${version}</version>
</dependency>
The values of these fields can be found in the Project Editor within Business Central.
- If the client application is not a Maven-based project, download the JBoss BPM Suite project, which includes the model classes, from Business Central. Add the JAR file of the project to the build path of your client application so that the model object classes can be found and used.
- You can now use the custom model objects within your client application and invoke methods on them using the Remote API. The following listing shows an example of this, where Person is a custom model object.
RuntimeEngine engine = RemoteRuntimeEngineFactory
  .newRestBuilder()
  .addUrl(instanceUrl)
  .addUserName(user)
  .addPassword(password)
  .addExtraJaxbClasses(Person.class)
  .addDeploymentId(deploymentId)
  .build();

KieSession kSession = engine.getKieSession();
Map<String, Object> params = new HashMap<>();
Person person = new Person();
person.setName("anton");
params.put("pVar", person);
ProcessInstance pi = kSession.startProcess(PROCESS2_ID, params);
System.out.println("Process Started: " + pi.getId());
Ensure that your client application has imported the correct JBoss BPM Suite libraries for the example to work.
To make a custom class available to the remote services, annotate it with the @org.kie.api.remote.Remotable annotation. The @org.kie.api.remote.Remotable annotation makes the entity available for use with JBoss BPM Suite remote services such as REST, JMS, and WS. You can add the annotation in any of the following ways:
- On the Drools & jBPM screen of the data object in Business Central, select the Remotable check box.

Figure 17.1. Remotable check box on the Drools & jBPM screen in Business Central
You can also add the annotation manually. On the right side of the Data Object editor screen in Business Central, choose the Advanced tab and click add annotation. In the Add new annotation dialog window, define the annotation class name as org.kie.api.remote.Remotable and click the search button.
package org.bpms.helloworld; @org.kie.api.remote.Remotable public class Person implements java.io.Serializable { ...
17.4.4. The JMS Remote Java RuntimeEngine Factory
The JMS variant of the RemoteRuntimeEngineFactory works similarly to the REST variation in that it is a starting point for building and configuring a new RemoteRuntimeEngine instance that can interact with the remote JMS API. The main use of this class is to create a JMS builder instance using the newJmsBuilder() method. This builder is then used to create a RemoteRuntimeEngine instance that acts as a client to the remote JMS API. The table below lists the methods available on the RemoteJmsRuntimeEngineBuilder:
Table 17.13. RemoteJmsRuntimeEngineBuilder Methods
| Method Name | Parameter Type | Description |
|---|---|---|
| addDeploymentId | java.lang.String | Name (ID) of the deployment the RuntimeEngine should interact with. |
| addProcessInstanceId | long | ID of the process instance the RuntimeEngine should interact with. |
| addUserName | java.lang.String | User name needed to access the JMS queues (in your application server configuration). |
| addPassword | java.lang.String | Password needed to access the JMS queues (in your application server configuration). |
| addTimeout | int | Maximum number of seconds allowed when waiting for a response from the server. |
| addExtraJaxbClasses | Class | Adds extra classes to the classpath available to serialization mechanisms. |
| addRemoteInitialContext | javax.naming.InitialContext | Remote InitialContext instance (created using JNDI) from the server. |
| addConnectionFactory | javax.jms.ConnectionFactory | ConnectionFactory instance used to connect to the ksessionQueue or taskQueue. |
| addKieSessionQueue | javax.jms.Queue | Instance of the Queue for requests related to a process instance. |
| addTaskServiceQueue | javax.jms.Queue | Instance of the Queue for requests related to the task service usage. |
| addResponseQueue | javax.jms.Queue | Instance of the Queue used for receiving responses. |
| addJbossServerUrl | java.net.URL | URL of the JBoss or WebSphere server. |
| addJbossServerHostName | java.lang.String | Host name of the JBoss server. |
| addHostName | java.lang.String | Host name of the JMS queues. |
| addJmsConnectorPort | int | Port for the JMS connector. |
| addKeystorePassword | java.lang.String | JMS Keystore password. |
| addKeystoreLocation | java.lang.String | JMS Keystore location. |
| addTruststorePassword | java.lang.String | JMS Truststore password. |
| addTruststoreLocation | java.lang.String | JMS Truststore location. |
| useKeystoreAsTruststore | - | Should be used if the Keystore and Truststore are both located in the same file. Configures the client to use that file for both the Keystore and the Truststore. |
| useSsl | boolean | Sets whether this client instance uses a secured connection. |
| disableTaskSecurity | - | Suitable only if you do not want to use SSL while communicating with Business Central. |
Example Usage
import java.util.List;

import javax.naming.InitialContext;

import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;
// The factory classes below are provided by the kie-remote-client module;
// their package may differ between versions.
import org.kie.services.client.api.RemoteJmsRuntimeEngineFactory;
import org.kie.services.client.api.RemoteRuntimeEngineFactory;
import org.kie.services.client.api.command.RemoteRuntimeEngine;
// The remoteInitialContext parameter can be retrieved from the remote server as shown
// in the getRemoteJbossInitialContext() method in the next listing.
public void javaRemoteApiJmsExample(String deploymentId, Long processInstanceId,
    String user, String password, InitialContext remoteInitialContext) {
// Create a factory class with all the values:
RemoteJmsRuntimeEngineFactory jmsRuntimeFactory = RemoteRuntimeEngineFactory
.newJmsBuilder()
.addDeploymentId(deploymentId)
.addProcessInstanceId(processInstanceId)
.addUserName(user)
.addPassword(password)
.addRemoteInitialContext(remoteInitialContext)
.addTimeout(3)
.addExtraJaxbClasses(MyType.class)
.useSsl(false)
.build();
RemoteRuntimeEngine engine = jmsRuntimeFactory.newRuntimeEngine();
// Create KieSession and TaskService instances and use them:
KieSession ksession = engine.getKieSession();
TaskService taskService = engine.getTaskService();
// Each operation on a KieSession, TaskService or AuditService (client) instance
// sends a request for the operation to the server side and waits for the response.
// If something goes wrong on the server side, the client will throw an exception.
ProcessInstance processInstance
= ksession.startProcess("com.burns.reactor.maintenance.cycle");
long procId = processInstance.getId();
String taskUserId = user;
taskService = engine.getTaskService();
List<TaskSummary> tasks = taskService
.getTasksAssignedAsPotentialOwner(user, "en-UK");
long taskId = -1;
for (TaskSummary task : tasks) {
if (task.getProcessInstanceId() == procId) {
taskId = task.getId();
}
}
if (taskId == -1) {
throw new IllegalStateException
("Unable to find task for "
+ user
+ " in process instance "
+ procId);
}
taskService.start(taskId, taskUserId);
}

Configuration using an InitialContext instance
When creating a RemoteJmsRuntimeEngineFactory with an InitialContext instance as a parameter for Red Hat JBoss EAP 6, it is necessary to first retrieve the (remote) InitialContext instance from the remote server. The following code illustrates how to do this.
private InitialContext getRemoteJbossInitialContext(URL url, String user, String password) {
Properties initialProps = new Properties();
initialProps.setProperty
(InitialContext.INITIAL_CONTEXT_FACTORY,
"org.jboss.naming.remote.client.InitialContextFactory");
String jbossServerHostName = url.getHost();
initialProps.setProperty
(InitialContext.PROVIDER_URL, "remote://"+ jbossServerHostName + ":4447");
initialProps.setProperty(InitialContext.SECURITY_PRINCIPAL, user);
initialProps.setProperty(InitialContext.SECURITY_CREDENTIALS, password);
for (Object keyObj : initialProps.keySet()) {
String key = (String) keyObj;
System.setProperty(key, (String) initialProps.get(key));
}
try {
return new InitialContext(initialProps);
} catch (NamingException e) {
throw new RemoteCommunicationException
("Unable to create " + InitialContext.class.getSimpleName(), e);
}
}

It is also possible to use the JMS queues directly, without the RemoteRuntimeEngine. For more information, see the How to Use JMS Queues Without the RemoteRuntimeEngine in Red Hat JBoss BPMS article. However, this approach is not the recommended way to use the provided JMS interface.
17.4.5. Supported Methods
The Remote API exposes only a subset of the methods of the RuntimeEngine, KieSession, TaskService, and AuditService interfaces. This means that while many of the methods in those interfaces are available, some are not. The following tables list the available methods. Methods not listed in the tables below throw an UnsupportedOperationException explaining that the called method is not available.
Table 17.14. Available process-related KieSession methods
| Returns | Method signature | Description |
|---|---|---|
| void | abortProcessInstance(long processInstanceId) | Abort the process instance |
| ProcessInstance | getProcessInstance(long processInstanceId) | Return the process instance |
| ProcessInstance | getProcessInstance(long processInstanceId, boolean readonly) | Return the process instance |
| Collection<ProcessInstance> | getProcessInstances() | Return all (active) process instances |
| void | signalEvent(String type, Object event) | Signal all (active) process instances |
| void | signalEvent(String type, Object event, long processInstanceId) | Signal the process instance |
| ProcessInstance | startProcess(String processId) | Start a new process and return the process instance (if the process instance has not immediately completed) |
| ProcessInstance | startProcess(String processId, Map<String, Object> parameters) | Start a new process and return the process instance (if the process instance has not immediately completed) |
Table 17.15. Available rules-related KieSession methods
| Returns | Method signature | Description |
|---|---|---|
| Long | getFactCount() | Return the total fact count |
| Object | getGlobal(String identifier) | Return a global fact |
| void | setGlobal(String identifier, Object value) | Set a global fact |
Table 17.16. Available WorkItemManager methods
| Returns | Method signature | Description |
|---|---|---|
| void | abortWorkItem(long id) | Abort the work item |
| void | completeWorkItem(long id, Map<String, Object> results) | Complete the work item |
| void | registerWorkItemHandler(String workItemName, WorkItemHandler handler) | Register a work item handler |
| WorkItem | getWorkItem(long workItemId) | Return the work item |
Table 17.17. Available task operation TaskService methods
| Returns | Method signature | Description |
|---|---|---|
| Long | addTask(Task task, Map<String, Object> params) | Add a new task |
| void | activate(long taskId, String userId) | Activate a task |
| void | claim(long taskId, String userId) | Claim a task |
| void | claimNextAvailable(String userId, String language) | Claim the next available task for a user |
| void | complete(long taskId, String userId, Map<String, Object> data) | Complete a task |
| void | delegate(long taskId, String userId, String targetUserId) | Delegate a task |
| void | exit(long taskId, String userId) | Exit a task |
| void | fail(long taskId, String userId, Map<String, Object> faultData) | Fail a task |
| void | forward(long taskId, String userId, String targetEntityId) | Forward a task |
| void | nominate(long taskId, String userId, List<OrganizationalEntity> potentialOwners) | Nominate a task |
| void | release(long taskId, String userId) | Release a task |
| void | resume(long taskId, String userId) | Resume a task |
| void | skip(long taskId, String userId) | Skip a task |
| void | start(long taskId, String userId) | Start a task |
| void | stop(long taskId, String userId) | Stop a task |
| void | suspend(long taskId, String userId) | Suspend a task |
Table 17.18. Available task retrieval and query TaskService methods
| Returns | Method signature |
|---|---|
| Task | getTaskByWorkItemId(long workItemId) |
| Task | getTaskById(long taskId) |
| List<TaskSummary> | getTasksAssignedAsBusinessAdministrator(String userId, String language) |
| List<TaskSummary> | getTasksAssignedAsPotentialOwner(String userId, String language) |
| List<TaskSummary> | getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, String language) |
| List<TaskSummary> | getTasksOwned(String userId, String language) |
| List<TaskSummary> | getTasksOwnedByStatus(String userId, List<Status> status, String language) |
| List<TaskSummary> | getTasksByStatusByProcessInstanceId(long processInstanceId, List<Status> status, String language) |
| List<TaskSummary> | getTasksByProcessInstanceId(long processInstanceId) |
| Content | getContentById(long contentId) |
| Attachment | getAttachmentById(long attachId) |
Note
The language parameter is no longer used for the task retrieval and query TaskService methods. However, the method signatures still contain it to maintain backward compatibility. This parameter will be removed in a future release.
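The task operation and task query methods listed above are typically combined into a claim, start, and complete sequence. The following is a minimal sketch of such a flow over the Remote API. It assumes a RemoteRuntimeEngine has already been built as shown in the earlier REST or JMS examples; the "approved" output variable is a hypothetical placeholder.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;

public class TaskLifecycleExample {

    // Minimal sketch: claim, start, and complete the first task found for the user.
    // The "engine" parameter is assumed to be a RemoteRuntimeEngine built earlier.
    public void completeFirstTask(RuntimeEngine engine, String userId) {
        TaskService taskService = engine.getTaskService();

        // Retrieve the tasks the user can work on (the language parameter is ignored).
        List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner(userId, "en-UK");
        if (tasks.isEmpty()) {
            return;
        }
        long taskId = tasks.get(0).getId();

        // Claim and start the task, then complete it with output data.
        taskService.claim(taskId, userId);
        taskService.start(taskId, userId);

        Map<String, Object> results = new HashMap<>();
        results.put("approved", Boolean.TRUE); // hypothetical task output variable
        taskService.complete(taskId, userId, results);
    }
}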
Table 17.19. Available AuditService methods
| Returns | Method signature |
|---|---|
| List<ProcessInstanceLog> | findProcessInstances() |
| List<ProcessInstanceLog> | findProcessInstances(String processId) |
| List<ProcessInstanceLog> | findActiveProcessInstances(String processId) |
| ProcessInstanceLog | findProcessInstance(long processInstanceId) |
| List<ProcessInstanceLog> | findSubProcessInstances(long processInstanceId) |
| List<NodeInstanceLog> | findNodeInstances(long processInstanceId) |
| List<NodeInstanceLog> | findNodeInstances(long processInstanceId, String nodeId) |
| List<VariableInstanceLog> | findVariableInstances(long processInstanceId) |
| List<VariableInstanceLog> | findVariableInstances(long processInstanceId, String variableId) |
| List<VariableInstanceLog> | findVariableInstancesByName(String variableId, boolean onlyActiveProcesses) |
| List<VariableInstanceLog> | findVariableInstancesByNameAndValue(String variableId, String value, boolean onlyActiveProcesses) |
| void | clear() |
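The audit methods listed above can be used to inspect the history of process execution on the server. The following minimal sketch assumes a previously built RemoteRuntimeEngine and reuses the hypothetical process ID org.test.process from the earlier examples:

import java.util.List;

import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.audit.AuditService;
import org.kie.api.runtime.manager.audit.ProcessInstanceLog;

public class AuditExample {

    // Minimal sketch: print basic information about past instances of a process.
    // The "engine" parameter is assumed to be a RemoteRuntimeEngine built earlier.
    public void printProcessHistory(RuntimeEngine engine) {
        AuditService auditService = engine.getAuditService();

        // "org.test.process" is a hypothetical process ID used for illustration.
        List<? extends ProcessInstanceLog> logs =
            auditService.findProcessInstances("org.test.process");
        for (ProcessInstanceLog log : logs) {
            System.out.println("Instance " + log.getProcessInstanceId()
                + " of process " + log.getProcessId()
                + " has status " + log.getStatus());
        }
    }
}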
Chapter 18. CDI Integration
18.1. JBoss BPM Suite with CDI Integration
The jbpm-services-cdi module provides CDI wrappers on top of the core JBoss BPM Suite services so that they can be consumed in CDI containers. The module exposes the following services:
- DeploymentService
- ProcessService
- UserTaskService
- RuntimeDataService
- DefinitionService
18.2. Deployment Service
The DeploymentService service is responsible for deploying and undeploying deployment units into the runtime environment. Deployment units include resources such as rules, processes, and forms. The DeploymentService can be used to retrieve the following (see the sketch after this list):
- a RuntimeManager instance for given deployment id
- a deployed unit that represents complete deployment process for given deployment id
- list of all deployed units known to the deployment
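The following is a minimal sketch of using an injected DeploymentService to deploy a kJAR and retrieve its RuntimeManager. The Maven coordinates (org.example:my-project:1.0) are hypothetical, and the package names of the @Kjar qualifier and the KModuleDeploymentUnit class are assumptions that may vary between versions; verify them for your release.

import javax.inject.Inject;

import org.jbpm.kie.services.impl.KModuleDeploymentUnit; // package assumed
import org.jbpm.services.api.DeploymentService;          // package assumed
import org.jbpm.services.cdi.Kjar;                        // package assumed
import org.kie.api.runtime.manager.RuntimeManager;

public class DeploymentExample {

    // Inject the kJAR-based DeploymentService (see the available deployment services below).
    @Inject
    @Kjar
    private DeploymentService deploymentService;

    public RuntimeManager deployAndGetManager() {
        // Hypothetical Maven coordinates of a project built and installed from Business Central.
        KModuleDeploymentUnit unit = new KModuleDeploymentUnit("org.example", "my-project", "1.0");

        // Deploy the unit and retrieve the RuntimeManager created for it.
        deploymentService.deploy(unit);
        return deploymentService.getRuntimeManager(unit.getIdentifier());
    }
}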
The DeploymentService service fires CDI events on deployment and undeployment of deployment units. This allows application components to react to these events in real time and store or remove deployment details from memory. The deployment event with the @Deploy qualifier is fired on deployment, and the event with the @Undeploy qualifier is fired on undeployment. You can use the CDI observer mechanism to get notified of these events, as shown in the following example.
18.2.1. Saving and Removing Deployments from Database
public void saveDeployment(@Observes @Deploy DeploymentEvent event) {
// store deployed unit info for further needs
DeployedUnit deployedUnit = event.getDeployedUnit();
}
public void removeDeployment(@Observes @Undeploy DeploymentEvent event) {
// remove deployment with id event.getDeploymentId()
}
Note
18.2.2. Available Deployment Services
Two DeploymentService implementations are available out of the box:
- @Kjar: The KModule deployment service is tailored to work with KModuleDeploymentUnit, a small descriptor on top of a kJAR.
- @Vfs: The VFS deployment service allows you to deploy assets directly from a VFS (Virtual File System).
18.2.3. FormProviderService Service
FormProviderService service provides access to form representations for the user and process forms. It is built on the concept of isolated FormProviders.
Each FormProvider implementation must define a priority, because priority is the main driver for the FormProviderService service when asking a given provider for the content of a form. The FormProviderService service collects all available providers and iterates over them, asking for the form content in the order of the specified priority. The lower the priority number, the higher the priority during evaluation. For example, a provider with priority 5 is evaluated before a provider with priority 10. The FormProviderService service iterates over the available providers until one of them delivers content. In the worst-case scenario, it returns simple text-based forms.
The FormProvider interface shown below describes the contract for implementations:
public interface FormProvider {
int getPriority();
String render(String name, ProcessDesc process, Map<String, Object> renderContext);
String render(String name, Task task, ProcessDesc process, Map<String, Object> renderContext);
}
The following FormProvider implementations are available out of the box; a sketch of a custom provider follows this list:
- An additional FormProvider available with the form modeler. The priority number of this FormProvider is 2.
- A Freemarker-based implementation to support process and task forms. The priority number of this FormProvider is 3.
- The default forms provider. This has the lowest priority and is considered a last resort if none of the other providers deliver content. This provider produces the simplest possible forms.
18.2.4. RuntimeDataService Service
The RuntimeDataService service provides access to the actual data available at runtime, such as:
- Available processes to be executed
- Active process instances
- Process instance history
- Process instance variables
- Active and completed nodes of process instance
RuntimeDataService service observes deployment events and indexes all deployed processes to expose them to the calling components.
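The following is a minimal sketch of querying an injected RuntimeDataService. The exact query method names and their parameters (for example, whether they take a QueryContext argument) differ between versions, so treat the calls below as assumptions to be checked against the RuntimeDataService Javadoc for your release.

import java.util.Collection;

import javax.inject.Inject;

import org.jbpm.services.api.RuntimeDataService;            // package assumed
import org.jbpm.services.api.model.ProcessDefinition;       // package assumed
import org.jbpm.services.api.model.ProcessInstanceDesc;     // package assumed

public class RuntimeDataExample {

    @Inject
    private RuntimeDataService runtimeDataService;

    public void printRuntimeData() {
        // List the process definitions available for execution
        // (method name assumed; some versions require a QueryContext argument).
        Collection<ProcessDefinition> definitions = runtimeDataService.getProcesses();
        for (ProcessDefinition def : definitions) {
            System.out.println("Available process: " + def.getId());
        }

        // List the currently active process instances (method name assumed).
        Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances();
        for (ProcessInstanceDesc instance : instances) {
            System.out.println("Active instance: " + instance.getId());
        }
    }
}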
18.2.5. DefinitionService Service
The DefinitionService service provides access to process details stored as part of the BPMN2 XML. Before using any method that returns information, you must invoke the buildProcessDefinition method to populate the repository with process information taken from the BPMN2 content. The service can provide the following (see the sketch after this list):
- Overall description of process for given process definition
- Collection of all user tasks found in the process definition
- Information about defined inputs for user task node
- Information about defined outputs for user task node
- IDs of reusable processes (call activity) defined within the given process definition
- Information about process variables defined within given process definition
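The following is a minimal sketch of using an injected DefinitionService. The method names and parameters shown (buildProcessDefinition, getTasksDefinitions, getProcessVariables) are assumptions based on the list above; verify them against the DefinitionService Javadoc for your version.

import java.util.Collection;
import java.util.Map;

import javax.inject.Inject;

import org.jbpm.services.api.DefinitionService;             // package assumed
import org.jbpm.services.api.model.UserTaskDefinition;      // package assumed

public class DefinitionExample {

    @Inject
    private DefinitionService definitionService;

    // The deploymentId, processId, and bpmn2Content values are hypothetical.
    public void inspectProcess(String deploymentId, String processId, String bpmn2Content) {
        // Populate the repository with information from the BPMN2 content first
        // (signature assumed; some versions take a ClassLoader or a cache flag).
        definitionService.buildProcessDefinition(deploymentId, bpmn2Content, null, true);

        // Retrieve the user tasks defined in the process (method name assumed).
        Collection<UserTaskDefinition> tasks =
            definitionService.getTasksDefinitions(deploymentId, processId);
        for (UserTaskDefinition task : tasks) {
            System.out.println("User task: " + task.getName());
        }

        // Retrieve the process variables defined in the process (method name assumed).
        Map<String, String> variables =
            definitionService.getProcessVariables(deploymentId, processId);
        System.out.println("Process variables: " + variables.keySet());
    }
}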
18.3. Configuring CDI Integration
To use the jbpm-services-cdi API in your system, you need to provide some beans for the out-of-the-box services to satisfy all of their dependencies, such as:
- Entity manager and entity manager factory
- User group callback for human tasks
- Identity provider to pass authenticated user information to the services
The following is an example producer class that satisfies these dependencies when using the jbpm-services-cdi API in a Java EE environment such as JBoss Application Server:
public class EnvironmentProducer {
@PersistenceUnit(unitName = "org.jbpm.domain")
private EntityManagerFactory emf;
@Inject
@Selectable
private UserGroupInfoProducer userGroupInfoProducer;
@Inject
@Kjar
private DeploymentService deploymentService;
@Produces
public EntityManagerFactory getEntityManagerFactory() {
return this.emf;
}
@Produces
public org.kie.api.task.UserGroupCallback produceSelectedUserGroupCalback() {
return userGroupInfoProducer.produceCallback();
}
@Produces
public UserInfo produceUserInfo() {
return userGroupInfoProducer.produceUserInfo();
}
@Produces
@Named("Logs")
public TaskLifeCycleEventListener produceTaskAuditListener() {
return new JPATaskLifeCycleEventListener(true);
}
@Produces
public DeploymentService getDeploymentService() {
return this.deploymentService;
}
@Produces
public IdentityProvider produceIdentityProvider() {
return new IdentityProvider() {
// implement IdentityProvider
};
}
}
Selected alternative implementations must be enabled in the beans.xml file. For example, the org.jbpm.kie.services.cdi.producer.JAASUserGroupInfoProducer class allows JBoss Application Server to reuse the security settings configured on the application server, regardless of what they actually are (such as LDAP or a database):
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://docs.jboss.org/cdi/beans_1_0.xsd">
  <alternatives>
    <class>org.jbpm.kie.services.cdi.producer.JAASUserGroupInfoProducer</class>
  </alternatives>
</beans>
You can also provide custom WorkItemHandler instances and Process, Agenda, and WorkingMemory event listeners to the runtime. To provide these components, you need to implement the following interfaces:
/**
 * Allows custom implementations to deliver WorkItem name and WorkItemHandler instance pairs
 * for the runtime.
 * <br/>
 * It will be invoked by the RegisterableItemsFactory implementation (especially
 * InjectableRegisterableItemsFactory in the CDI world) for every KieSession. The recommendation
 * is to always produce new instances to avoid unexpected results.
 */
public interface WorkItemHandlerProducer {

    /**
     * Returns a map of (key = work item name, value = work item handler instance) of work items
     * to be registered on the KieSession.
     * <br/>
     * Parameters that might be given are as follows:
     * <ul>
     *   <li>ksession</li>
     *   <li>taskService</li>
     *   <li>runtimeManager</li>
     * </ul>
     *
     * @param identifier identifier of the owner - usually the RuntimeManager - that allows the producer
     *                   to filter out and provide valid instances for the given owner
     * @param params     parameters the owner might provide, usually the KieSession, TaskService, and
     *                   RuntimeManager instances
     * @return map of work item handler instances (the recommendation is to always return new instances
     *         when this method is invoked)
     */
    Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}

/**
 * Allows you to define custom producers for known EventListeners. The intention is that there might
 * be several implementations that provide different listener instances based on the context they are
 * executed in.
 * <br/>
 * It will be invoked by the RegisterableItemsFactory implementation (especially
 * InjectableRegisterableItemsFactory in the CDI world) for every KieSession. The recommendation
 * is to always produce new instances to avoid unexpected results.
 *
 * @param <T> type of the event listener - ProcessEventListener, AgendaEventListener, WorkingMemoryEventListener
 */
public interface EventListenerProducer<T> {

    /**
     * Returns a list of instances for the given (T) type of listeners.
     * <br/>
     * Parameters that might be given are as follows:
     * <ul>
     *   <li>ksession</li>
     *   <li>taskService</li>
     *   <li>runtimeManager</li>
     * </ul>
     *
     * @param identifier identifier of the owner - usually the RuntimeManager - that allows the producer
     *                   to filter out and provide valid instances for the given owner
     * @param params     parameters the owner might provide, usually the KieSession, TaskService, and
     *                   RuntimeManager instances
     * @return list of listener instances (the recommendation is to always return new instances
     *         when this method is invoked)
     */
    List<T> getEventListeners(String identifier, Map<String, Object> params);
}
Beans implementing these interfaces are collected at runtime and consulted when a KieSession is built by the RuntimeManager.
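The following is a minimal sketch of a WorkItemHandlerProducer implementation based on the interface shown above. The work item name ("Notification") and the inline handler logic are hypothetical placeholders for your own handlers, and the import of the WorkItemHandlerProducer interface itself is omitted because its package may vary between versions.

import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

// Minimal sketch: registers a single hypothetical "Notification" work item handler.
// New handler instances are produced on every call, as the interface recommends.
public class NotificationWorkItemHandlerProducer implements WorkItemHandlerProducer {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params) {
        Map<String, WorkItemHandler> handlers = new HashMap<>();
        handlers.put("Notification", new WorkItemHandler() {

            @Override
            public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
                // Placeholder logic: complete the work item immediately with no results.
                manager.completeWorkItem(workItem.getId(), null);
            }

            @Override
            public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
                // Nothing to clean up in this sketch.
            }
        });
        return handlers;
    }
}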
18.4. RuntimeManager as CDI Bean
You can inject the RuntimeManager as a CDI bean into any other CDI bean within your application. The RuntimeManager comes with the following predefined strategies, and each of them has a CDI qualifier:
- @Singleton
- @PerRequest
- @PerProcessInstance
Note
Even though you can inject the RuntimeManager directly as a CDI bean, it is recommended to use the JBoss BPM Suite services when frameworks like CDI, EJB, or Spring are used. JBoss BPM Suite services provide a significant number of features that encapsulate best practices for using the RuntimeManager.
To be able to inject the RuntimeManager, you must first provide a producer for its RuntimeEnvironment:
public class EnvironmentProducer {
//add same producers as for services
@Produces
@Singleton
@PerRequest
@PerProcessInstance
public RuntimeEnvironment produceEnvironment(EntityManagerFactory emf) {
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultBuilder()
.entityManagerFactory(emf)
.userGroupCallback(getUserGroupCallback())
.registerableItemsFactory(InjectableRegisterableItemsFactory.getFactory(beanManager, null))
.addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
.addAsset(ResourceFactory.newClassPathResource("BPMN2-UserTask.bpmn2"), ResourceType.BPMN2)
.get();
return environment;
}
}
This example provides the RuntimeEnvironment for all strategies of the RuntimeManager by specifying all qualifiers on the method level. Once a complete producer is available, you can inject the RuntimeManager into the application's CDI bean as shown below:
public class ProcessEngine {
@Inject
@Singleton
private RuntimeManager singletonManager;
public void startProcess() {
RuntimeEngine runtime = singletonManager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = runtime.getKieSession();
ProcessInstance processInstance = ksession.startProcess("UserTask");
singletonManager.disposeRuntimeEngine(runtime);
}
}

Note
This approach is suitable for a single RuntimeManager in the application, but it is recommended to make use of the DeploymentService whenever you need to have many RuntimeManagers active within your application.
As an alternative to the DeploymentService, the application can inject a RuntimeManagerFactory and then create the RuntimeManager instance manually. In such cases, the EnvironmentProducer remains the same as for the DeploymentService. Here is an example of a simple ProcessEngine bean:
public class ProcessEngine {
@Inject
private RuntimeManagerFactory managerFactory;
@Inject
private EntityManagerFactory emf;
@Inject
private BeanManager beanManager;
public void startProcess() {
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultBuilder()
.entityManagerFactory(emf)
.addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
.addAsset(ResourceFactory.newClassPathResource("BPMN2-UserTask.bpmn2"), ResourceType.BPMN2)
.registerableItemsFactory(InjectableRegisterableItemsFactory.getFactory(beanManager, null))
.get();
RuntimeManager manager = managerFactory.newSingletonRuntimeManager(environment);
RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = runtime.getKieSession();
ProcessInstance processInstance = ksession.startProcess("UserTask");
manager.disposeRuntimeEngine(runtime);
manager.close();
}
}

Chapter 19. SOAP Interface
19.1. SOAP API
19.2. Client-Side Java Webservice Client
The kie-remote-client module functions as a client-side interface for SOAP. The CommandWebService client class referenced in the code below is generated from the Web Service Description Language (WSDL) file in the kie-remote-client JAR.
import org.kie.remote.client.api.RemoteRuntimeEngineFactory;
import org.kie.remote.client.jaxb.JaxbCommandsRequest;
import org.kie.remote.client.jaxb.JaxbCommandsResponse;
import org.kie.remote.jaxb.gen.StartProcessCommand;
import org.kie.remote.services.ws.command.generated.CommandWebService;
import org.kie.services.client.serialization.jaxb.impl.JaxbCommandResponse;
public void runCommandWebService(String user, String password, String processId, String deploymentId, String applicationUrl) throws Exception {
CommandWebService client = RemoteRuntimeEngineFactory.newCommandWebServiceClientBuilder()
.addDeploymentId(deploymentId)
.addUserName(user)
.addPassword(password)
.addServerUrl(applicationUrl)
.buildBasicAuthClient();
// Get a response from the WebService
StartProcessCommand cmd = new StartProcessCommand();
cmd.setProcessId(processId);
JaxbCommandsRequest req = new JaxbCommandsRequest(deploymentId, cmd);
final JaxbCommandsResponse response = client.execute(req);
JaxbCommandResponse<?> cmdResp = response.getResponses().get(0);
JaxbProcessInstanceResponse procInstResp = (JaxbProcessInstanceResponse) cmdResp;
long procInstId = procInstResp.getId();
}

The generated CommandWebService interface exposes the execute operation, as depicted below:
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;
import javax.xml.bind.annotation.XmlSeeAlso;
import javax.xml.ws.RequestWrapper;
import javax.xml.ws.ResponseWrapper;
import org.kie.remote.client.jaxb.JaxbCommandsRequest;
import org.kie.remote.client.jaxb.JaxbCommandsResponse;
@WebService(name = "CommandServicePortType", targetNamespace = "http://services.remote.kie.org/6.3.0.1/command")
public interface CommandWebService {
/**
*
* @param request
* @return
* returns org.kie.remote.client.jaxb.JaxbCommandsResponse
* @throws CommandWebServiceException
*/
@WebMethod
@WebResult(targetNamespace = "")
@RequestWrapper(localName = "execute", targetNamespace = "http://services.remote.kie.org/6.3.0.1/command", className = "org.kie.remote.services.ws.command.generated.Execute")
@ResponseWrapper(localName = "executeResponse", targetNamespace = "http://services.remote.kie.org/6.3.0.1/command", className = "org.kie.remote.services.ws.command.generated.ExecuteResponse")
public JaxbCommandsResponse execute(@WebParam(name = "request", targetNamespace = "") JaxbCommandsRequest request) throws CommandWebServiceException;
}

Appendix A. Revision History
| Revision | Date |
|---|---|
| Revision 6.2.0-4 | Thu Apr 28 2016 |
| Revision 6.2.0-3 | Tue Mar 29 2016 |
| Revision 6.2.0-2 | Mon Nov 30 2015 |
| Revision 6.2.0-1 | Mon Nov 30 2015 |