Appendix A. Process Elements
This chapter contains an introduction to BPMN elements and their semantics. For details about BPMN, see the Business Process Model and Notation, Version 2.0. The BPMN 2.0 specification is an Object Management Group (OMG) specification that defines how to graphically represent a business process, defines execution semantics for the elements, and provides an XML format for process definition sources.
Note that Red Hat JBoss BPM Suite focuses exclusively on executable processes and supports a significant subset of the BPMN elements, including the most common types used inside executable processes.
A process element is a node of the process definition. The term covers nodes with execution semantics as well as those without.
Elements with execution semantics define the execution workflow of the process.
Elements without execution semantics, such as artifacts, allow users to provide notes and further information on the process or any of its elements, supporting collaboration among multiple users with different roles, such as business analysts, business managers, or process designers.
All elements with execution semantics define their generic properties.
Generic Process Element Properties
- ID
- The ID defined as a String, unique in the parent knowledge base.
- Name
- The display name of the element.
A.1. Process
A process is a named element defined in a process definition. It exists in a knowledge base and is identified by its ID.
A process represents a namespace and serves as a container for a set of modeling elements. It contains elements that specify the execution workflow of a business process or its parts using flow objects and flows. Every process must contain at least one start event and one end event.
A process is accompanied by its BPMN diagram, which is also part of the process definition and defines the visualization of the process execution workflow, for example in the Process Designer.
Apart from the execution workflow and process attributes, a process can define process variables, which store process data during runtime. For more information on process variables, see Section 4.9, “Variables”.
Runtime
During runtime, a process serves as a blueprint for a process instance, similarly to a class and its objects. A process instance is managed by a session, which may contain multiple process instances. This enables the instances to share data, for example, using global variables. Global variables are stored in the session instance, not in the process instance, which enables communication across process instances. Every process instance has its own context and ID.
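Global variables are registered on the session from application code. The following is a minimal sketch; the global name sharedData and the way the KieSession is obtained are illustrative, and the global must also be declared in the process definitions that use it:

// Assumption: ksession is a KieSession obtained from your runtime environment.
// The global name "sharedData" is chosen for this example only.
ksession.setGlobal("sharedData", new java.util.ArrayList<String>());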
The Knowledge Runtime context, called kcontext, holds all the process runtime data. Use it to retrieve or modify the runtime data, for example in Action Scripts:
Getting the currently executed element instance. You can then query further element data, such as its name and type, or cancel the element instance.
NodeInstance element = kcontext.getNodeInstance(); String name = element.getNodeName();
Getting the currently executed process instance. You can then query further process instance data, such as its name and ID. You can also abort the process instance or send it an event, such as a Signal Event.
ProcessInstance proc = kcontext.getProcessInstance(); proc.signalEvent(type, eventObject);
Getting and setting the values of variables.
kcontext.setVariable("myVariableName", "myVariableValue");

Executing calls on the Knowledge runtime, for example to start process instances, insert facts, and similar.
kcontext.getKnowledgeRuntime().signalEvent(eventType, data, kcontext.getProcessInstance().getId());
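The value of a variable can be read back from the same context; a minimal sketch (the variable name is illustrative):

// Returns null if the variable has not been set in any reachable scope.
Object value = kcontext.getVariable("myVariableName");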
A process instance has the following lifecycle phases:
- CREATED
- When you call the createProcessInstance method on a process, a new process instance is created. The process variables are initialized and the status of the process instance is CREATED.
- PENDING
- When a process instance is created, but not yet started.
- ACTIVE
- When you call the start() method on a process instance, its execution is triggered and its status is ACTIVE. If the process is instantiated using an event, such as a Signal, Message, or Error Event, the flow starts on the respective type of start event. Otherwise, the flow starts on the None Start Event.
- COMPLETED
- Once there is no token in the flow, the process instance is finished and its status is COMPLETED. Tokens in the flow are consumed by End Events and destroyed by Terminating Events.
- ABORTED
- If you call the abortProcessInstance method, the process instance is interrupted and its status is ABORTED.
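The lifecycle can also be driven from application code. The following is a minimal sketch; the process ID, the variable name, and the way the KieSession is obtained are illustrative:

// CREATED and ACTIVE: startProcess() creates a process instance and starts it immediately.
Map<String, Object> params = new HashMap<String, Object>();
params.put("myVar", "initial value");
ProcessInstance instance = ksession.startProcess("org.jboss.exampleProcess", params);

// Alternatively, create the instance first (CREATED) and start it later (ACTIVE):
ProcessInstance created = ksession.createProcessInstance("org.jboss.exampleProcess", params);
ksession.startProcessInstance(created.getId());

// ABORTED: interrupt a running instance before it completes on its own.
ksession.abortProcessInstance(instance.getId());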
The runtime state of a process instance can be made persistent, for example, in a database. This enables you to restore the state of execution in case of environment failure, or to temporarily remove running instances from memory and restore them later. By default, process instances are not made persistent. For more information on persistence, see chapter Persistence of the Red Hat JBoss BPM Suite Administration and Configuration Guide.
Properties
- ID
- Process ID defined as a String, unique in the parent knowledge base. Example value: org.jboss.exampleProcess. It is recommended to use the ID form <PACKAGENAME>.<PROCESSNAME>.<VERSION>.
- Process Name
- Process display name.
- Version
- Process version.
- Package
- Parent package to which the process belongs (that is, the process namespace). The package attribute contains the location of the modeled process in the form of a String value.
- Target Namespace
- The location of the XML schema definition of the BPMN2 standard.
- Executable
- Enables or disables instantiation of the process. Set to false to disable process instantiation. Possible values: true, false.
- Imports
- Comma-separated values of imported processes.
- Documentation
- Contains the element description; has no impact on runtime.
- AdHoc
- Boolean property defining whether a process is an ad hoc process. If set to true, the flow of the process execution is controlled exclusively by a human user.
- Globals
- Set of global variables visible to other processes to allow data sharing.
- Variable Definitions
- Enables you to define variables available for the process.
- Process Instance Description
- Contains a description of the process; has no impact on runtime.
- TypeLanguage
- Identifies the type system used for the process.
- Base Currency
- Identifies the currency in simulation scenarios. Uses the ISO 4217 standard, for example EUR, GBP, or USD.
A.2. Events mechanism
During process execution, the Process Engine ensures that all the relevant tasks are executed according to the process definition, the underlying work items, and other resources. However, a process instance often needs to react to an event it did not directly request. Such events can be created and caught by the Intermediate Event elements. See Section C.6, “Throwing Intermediate Events” for further information. Using these events in a process enables you to specify how to handle a particular event.
An event must specify the type of event it should handle. It can also define the name of a variable that will store the data associated with the event. This enables subsequent elements in the process to access and react to the data.
An event can be signaled to a running instance of a process in a number of ways:
Internal event
Any action inside a process, for example the action of an action node or an on-entry action of a node, can signal the occurrence of an internal event to the process instance.
kcontext.getProcessInstance().signalEvent(type, eventData);
External event
A process instance can be notified of an event from the outside.
processInstance.signalEvent(type, eventData);
External event using event correlation
You can notify the entire session and use the event correlation to notify particular processes. Event correlation is determined based on the event type. A process instance that contains an event element listening to external events is notified whenever such an event occurs. To signal such an event to the process engine:
ksession.signalEvent(type, eventData);
You can also use events to start a process. When a Message Start Event defines an event trigger, a new process instance starts every time the event is signalled to the process engine.
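For example, a process whose Message Start Event listens for a particular message reference can be instantiated by signalling the session. This is a minimal sketch; the message reference HelloMessage and the payload are illustrative:

// Every process whose Message Start Event listens for "HelloMessage"
// is instantiated when this event is signalled to the engine.
ksession.signalEvent("Message-HelloMessage", "payload data");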
This mechanism is used for implementation of the Intermediate Events, and can be used to define custom events.
A.3. Collaboration mechanisms
Elements with execution semantics use collaboration mechanisms. Different elements use the collaboration mechanism differently. For example, if you use signalling, the Throw Signal Intermediate Event element sends a signal, and the Catch Signal Intermediate Event element receives the signal. That means Red Hat JBoss BPM Suite provides you with two elements with execution semantics that make use of the same signal mechanism in a collaborative way.
Collaboration mechanism includes the following:
- Signals
- General, mainly inter-process instance communication.
- Messages
Messages are used to communicate within the process and between process instances. Messages are implemented as signals, which makes them scoped only for a given KIE session instance.
For external system interaction, use Send and Receive Tasks with proper handler implementation.
- Escalations
- Used as signalling between processes to trigger escalation handling.
- Errors
- Used as inter-process signalling of errors to trigger error handling.
All the events are managed by the signalling mechanism. To distinguish the objects of the individual mechanisms, the signals use different signal codes or names.
A.3.1. Signals
Signals in Red Hat JBoss BPM Suite correspond to the Signal Event in the BPMN 2.0 specification, and are the most flexible of the listed mechanisms. Signals can be consumed by an arbitrary number of elements both within their process instance and outside of it. Signals can also be consumed by any element in any session within or across the current deployment, depending on the scope of the event that throws the signal.
A.3.1.1. Triggering Signals
The following Throw Events trigger signals:
- Intermediate Throw Event
- End Throw Event
Every signal defines its signal reference, that is the SignalRef property, which is unique in the respective session.
A signal can have one of the following scopes, which restricts its propagation to the selected elements:
- Default (ksession)
Signal only propagates to elements within the given KIE session. The behavior varies depending on what runtime strategy is used:
- Singleton: All instances available for the KIE session are signalled.
- Per Request: Signal propagates within the currently processed process instance and process instances with Start Signal Events.
- Per Process Instance: Same as Per Request.
- Process Instance
- The narrowest possible scope, restricting the propagation of the signal to the given process instance only. No catch events outside that process instance will be able to consume the signal.
- Project
- Signals all active process instances of the given deployment and all Start Signal Events, regardless of the strategy.
- External
- Allows you to signal elements both within the project and across deployments. The external scope requires further setup.
To select the scope in the Process Designer, click Signal Scope under Core Properties of a Signal Throw Event.
Figure A.1. Selecting Signal Scope (Default)

Signalling External Deployments
When creating an external signal event, you need to specify the work item handler for the External Send Task manually. Use the org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler work item handler, which is shipped with Red Hat JBoss BPM Suite. It is not registered by default because each supported application server handles JMS differently, mainly due to different JNDI names for queues and connection factories.
Procedure: Registering External Send Task Handler
- In Business Central, open your project in the Project Editor and click Project Settings: Project General Settings → Deployment descriptor.
- Find the list of Work Item handlers and click Add.
Provide these values:
- Name: External Send Task
- Value: new org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler()
- Resolver type: mvel
Figure A.2. Registered External Send Task Handler

This will generate a corresponding entry in the kie-deployment-descriptor.xml file.
The JMSSendTaskWorkItemHandler handler has five different constructors. The parameterless constructor used in the procedure above has two default values:
- Connection factory: java:/JmsXA
- Destination queue: queue/KIE.SIGNAL
You can specify custom values using one of the following constructors instead:
- new org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler("CONNECTION_FACTORY_NAME", "DESTINATION_NAME")
- new org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler("CONNECTION_FACTORY_NAME", "DESTINATION_NAME", TRANSACTED), where TRANSACTED is true or false. The argument affects the relevant JMS session. See the Interface Connection Javadoc for more information.
Both cross-project signalling and process instance signalling within a project are supported. To do so, specify the following data inputs in the DataInputAssociations property of the signal event in the Process Designer. See Section A.3.1.2, “Catching and Processing Signals” for more information.
- Signal: The name of the signal which will be thrown. This value should match the SignalRef property in the signal definition.
- SignalWorkItemId: The ID of a Work Item which will be completed. These two data inputs (Signal and SignalWorkItemId) are mutually exclusive.
- SignalProcessInstanceId: The target process instance ID. This parameter is optional.
- SignalDeploymentId: The target deployment ID.
Figure A.3. Specifying SignalDeploymentId Data Input

The data inputs provide information about the signal, target deployment, and target process instance. For external signalling, the deployment ID is required, because an unrestricted broadcast would negatively impact the performance in large environments.
To send signals and messages in asynchronous processes, you need to configure a receiver of the signals, that is, to limit the number of sessions for a given endpoint. By default, the receiver message-driven bean (org.jbpm.process.workitem.jms.JMSSignalReceiver) does not limit concurrent processing.
Open the EAP_HOME/standalone/deployments/business-central.war/WEB-INF/ejb-jar.xml file and add the following activation specification property to the JMSSignalReceiver message-driven bean:
<activation-config-property>
  <activation-config-property-name>maxSession</activation-config-property-name>
  <activation-config-property-value>1</activation-config-property-value>
</activation-config-property>
The message-driven bean should look like the following:
<message-driven>
<ejb-name>JMSSignalReceiver</ejb-name>
<ejb-class>org.jbpm.process.workitem.jms.JMSSignalReceiver</ejb-class>
<transaction-type>Bean</transaction-type>
<activation-config>
<activation-config-property>
<activation-config-property-name>destinationType</activation-config-property-name>
<activation-config-property-value>javax.jms.Queue</activation-config-property-value>
</activation-config-property>
<activation-config-property>
<activation-config-property-name>destination</activation-config-property-name>
<activation-config-property-value>java:/queue/KIE.SIGNAL</activation-config-property-value>
</activation-config-property>
<activation-config-property>
<activation-config-property-name>maxSession</activation-config-property-name>
<activation-config-property-value>1</activation-config-property-value>
</activation-config-property>
</activation-config>
</message-driven>

This setting ensures that all messages, even the ones that were sent concurrently, will be processed serially and that notifications sent to the parent process instance will be delivered and will not cause any conflicts.
A.3.1.2. Catching and Processing Signals
Signals are caught by the following catch event types:
- Start Catch Event
- Intermediate Catch Event
- Boundary Catch Event
To catch and process a signal, create an appropriate catching signal event in the Process Designer, and set the following properties:
- SignalRef
The signal’s reference.
Value: The same as the Throwing Signal Event’s SignalRef.
- DataOutputAssociations
The variables used to store the output of the received signal, if applicable.
To assign a data output:
- Select the appropriate catch event type in the Process Designer.
- Click the Properties icon to open the Properties tab.
- Click the drop-down menu next to the DataOutputAssociations property, and click Add.
- In the new row, enter a name for the association.
- Select the expected data type from the drop-down menu. Selecting Custom… enables you to type in any class name.
- Select the target process variable, where the output will be stored.
Click Save to save the association.
For more information about setting process variables, see Section 4.9, “Variables”.
A.3.1.3. Triggering Signals Using API
To signal a process instance directly, which is equivalent to the Process Instance scope, use the following API function:
ksession.signalEvent(eventType, data, processInstanceId)
Here, the parameters used are as follows:
- eventType
- The signal's reference, SignalRef in the Process Designer. Value: a String. You can also reference a process variable using the string #{myVar} for a process variable myVar.
- data
- The signal's data. Value: an instance of a data type accepted by the corresponding Catching Signal Event, typically an arbitrary Object.
- processInstanceId
- The process instance ID of the signalled process.
You can use a more general version of the above function, which does not specify the processInstanceId parameter. This results in signalling all process instances in the given ksession, which is equivalent to the Default scope:
ksession.signalEvent(eventType, data);
The usage of the arguments eventType and data is the same as above.
To trigger a Signal from a script, that is a Script Task, or using on-entry or on-exit actions of a node, use the following API function:
kcontext.getKieRuntime().signalEvent(
eventType, data, kcontext.getProcessInstance().getId());
The usage of the arguments eventType and data is the same as above.
A.3.2. Messages
A Message represents the content of a communication between two Participants. In BPMN 2.0, a Message is a graphical decorator (it was a supporting element in BPMN 1.2). An ItemDefinition is used to specify the Message structure.[1]
Messages are similar to Signals; the main difference is that when you throw a message, you must uniquely identify its recipient. In Red Hat JBoss BPM Suite, this is achieved by specifying both the element ID and the Process Instance ID. For this reason, Messages do not benefit from the scope feature of Signals.
A.3.2.1. Sending Messages
Like signals, messages are sent by throw events of one of the following types:
- Intermediate Throw Event
- End Throw Event
- Send Task
When creating the appropriate throw event, register a custom handler for the Send Task Work Item. Red Hat JBoss BPM Suite provides only a dummy implementation by default. It is recommended to use the JMS-based org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler.
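In an embedded environment, such a handler can be registered through the API. The following is a minimal sketch; it assumes the Send Task maps to the default Send Task work item name:

// Registers the recommended JMS-based handler for the Send Task work item.
ksession.getWorkItemManager().registerWorkItemHandler(
    "Send Task", new org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler());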
If necessary, you can emulate the message-sending mechanism using signals and their scopes so that only one element can receive the given signal.
A.3.2.2. Catching Messages
The process for catching messages does not differ from receiving signals, with the exception of using the MessageRef element property instead of SignalRef. See Section A.3.1.2, “Catching and Processing Signals” for further information.
When catching messages through the API, the MessageRef property of the catching event is not the same as the eventType parameter of the API call. See Section A.3.2.3, “Sending Messages Using API” for further information.
A.3.2.3. Sending Messages Using API
To send a message using the API, use the following method:
ksession.signalEvent(eventType, data, processInstanceId);
Here, the parameters used are as follows:
- eventType
- A String that starts with Message- and contains the message's reference (MessageRef). You can also reference a process variable using the string #{myVar} for a process variable myVar. Examples:
  - Message-SampleMessage1 for MessageRef SampleMessage1.
  - #{myVar} for process variable myVar. The value of myVar must be a String starting with Message-.
- data
- The message's data. Value: an arbitrary Object.
- processInstanceId
- The process instance ID of the process being messaged.
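Putting the parameters together, a call with the illustrative message reference SampleMessage1 might look as follows; processInstanceId identifies the instance expected to catch the message:

// Sends the message "SampleMessage1" to one particular process instance.
ksession.signalEvent("Message-SampleMessage1", "payload data", processInstanceId);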
To send a message from a Script Task or using on-entry or on-exit actions of a node, use the following method:
kcontext.getKieRuntime().signalEvent(
eventType, data, kcontext.getProcessInstance().getId());
The usage of the arguments eventType and data is the same as above.
A.3.3. Escalation
"An Escalation identifies a business situation that a Process might need to react to." [2]
The escalation mechanism is intended for the handling of events that need the attention of someone of higher rank, or require additional handling.
Escalation is represented by an escalation object that is propagated across the process instances. It is produced by the Escalation Intermediate Throw Event or Escalation End Event, and can be consumed by exactly one Escalation Start Event or Escalation Intermediate Catch Event. Once produced, it is propagated within the current context and then further up the contexts until caught by an Escalation Start Event or Escalation Intermediate Catch Event, which is waiting for an Escalation with the particular Escalation Code. If an escalation remains uncaught, the process instance is ABORTED.
Attributes
Mandatory Attributes
- Escalation Code
- A String with the escalation code.
A.4. Transaction Mechanisms
A.4.1. Errors
An error represents a critical problem in process execution and is indicated by the Error End Event. When a process finishes with an Error End Event, the event produces an error object with a particular error code that identifies the error event. The Error End Event represents an unsuccessful execution of the given process or activity. Once generated, the error object is propagated within the current context and then further up the contexts until caught by the respective catching Error Intermediate Event or Error Start Event, which is waiting for an error with the particular error code. If the error is not caught and is propagated to the upper-most process context, the process instance becomes ABORTED.
Every Error defines its error code, which is unique in the respective process.
Attributes
- Error Code
- Error code defined as a String unique within the process.
A.4.2. Compensation
Compensation is a mechanism that allows you to handle business exceptions that might occur in a process or sub-process, that is, in a business transaction. Its purpose is to compensate for a failed transaction, where the transaction is represented by the process or sub-process, and then continue the execution using the regular flow path. Note that compensation is triggered only after the execution of the transaction has finished, either with a Compensation End Event or with a Cancel End Event.
Consider implementing handling of business exceptions in the following cases:
- When an interaction with an external party or third-party system may fail or be faulty.
- When you cannot fully check the input data received by your process, for example a client’s address information.
When there are parts of your process that are dependent on one of the following:
- Company policy or policy governing certain in-house procedures.
- Laws governing the business process, such as age requirements.
If a business transaction finishes with a Compensation End Event, the event produces a request for compensation handling. The compensation request is identified by an ID and can be consumed only by the respective Compensation Intermediate Event placed on the boundary of the transaction's elements, or by a Compensation Start Event. The Compensation Intermediate Event is connected with an Association Flow to the activity that defines the compensation, such as a sub-process or task. The execution flow either waits for the compensation activity to finish or resumes, depending on the Wait for completion property set on the Compensation End Event of the business transaction that is being compensated.
If a business transaction contains an event sub-process that starts with a Compensation Start Event, the Event Sub-Process is run as well if compensation is triggered.
The activity to which the Compensation Intermediate Event points may be a sub-process. Note that the sub-process must start with the Compensation Start Event.
When compensation runs over a multi-instance sub-process, the compensation mechanisms of the individual instances do not influence each other.
A.5. Timing
Timing is a mechanism for scheduling actions and is used by Timer Intermediate and Timer Start events. It enables you to delay further execution of a process or task.
A timer event can be triggered only after the transaction is committed, while the timer countdown starts right after entering the node, that is, the attached node in the case of a boundary event. In other words, a timer event is only designed for use cases where there is a wait state, such as a User Task. If you want to be notified of the timeout of a synchronous operation without a wait state, a boundary timer event is not suitable.
The timing strategy is defined by the following timer properties:
- Time Duration
- Defines the period for which the execution of the event is put on hold. The execution continues after the defined period has elapsed. The timer is applied only once.
- Time Cycle
- This defines the time between subsequent timer activations. If the period is 0, the timer is triggered only once.
The value for these properties can be provided either as Cron or as an expression, depending on the Time Cycle Language property:
- Cron
- [#d][#h][#m][#s][#[ms]]
Example A.1. Timer Period With Literal Values
1d 2h 3m 4s 5ms
The element will be executed after 1 day, 2 hours, 3 minutes, 4 seconds, and 5 milliseconds.
Any valid ISO-8601 date format that supports both one-shot timers and repeatable timers can be used. Timers can be defined as a date and time representation, a time duration, or repeating intervals. For example:
- Date
- 2013-12-24T20:00:00.000+02:00 - fires exactly on Christmas Eve at 8 PM
- Duration
- PT2S - fires once after 2 seconds
- Repeatable Intervals
- R/PT1S - fires every second with no limit; alternatively, R5/PT1S fires five times at one-second intervals
- None
- #{expression}
Example A.2. Timer period with expression
myVariable.getValue()
The element will be executed after the time period returned by the call myVariable.getValue().
A.6. Event Types
Events are triggers that impact a business process. Events are classified as:
Start events
Indicate the beginning of a business process.
End events
Indicate the completion of a business process.
Intermediate events
Drive the flow of a business process.
Every event has an event ID and a name. You can implement triggers for each of these event types to identify the conditions under which an event is triggered. If the conditions of the triggers are not met, the events are not initialized, and the process flow does not complete.
A.6.1. Start Event
Every process must have at least one start event with no incoming and exactly one outgoing flow.
Multiple start event types are supported:
- None Start Event
- Signal Start Event
- Timer Start Event
- Conditional Start Event
- Message Start Event
- Compensation Start Event
- Error Start Event
- Escalation Start Event
All start events, except for the None Start Event, define a trigger. When you start a process, the trigger needs to be fulfilled. If no start event can be triggered, the process is never instantiated.
A.6.1.1. Start Event types
A.6.1.1.1. None Start Event
The None Start Event is a start event without a trigger condition. A process or a sub-process can contain at most one None Start Event, which is triggered on process or sub-process start by default, and the outgoing flow is taken immediately.
When used in a sub-process, the execution is transferred from the parent process into the sub-process and the None Start Event is triggered. That means that the token is taken from the parent sub-process activity and the None Start Event of the sub-process generates a token.
A.6.1.1.2. Message Start Event
A process or an event sub-process can contain multiple Message Start Events, which are triggered by a particular message. The process instance with a Message Start Event only starts its execution from this event after it has received the respective message. After the message is received, the process is instantiated and its Message Start Event is executed immediately (its outgoing Flow is taken).
As a message can be consumed by an arbitrary number of processes and process elements, including no elements, one message can trigger multiple Message Start Events and therefore instantiate multiple Processes.
Attributes
- MessageRef
- ID of the expected Message object
A.6.1.1.3. Timer Start Event
The Timer Start Event is a Start Event with a timing mechanism. For more information about timing, see Section A.5, “Timing”.
A process can contain multiple Timer Start Events, which are triggered at the start of the process, after which the timing mechanism is applied.
When used in a sub-process, the execution is transferred from the parent process into the sub-process and the Timer Start Event is triggered. The token is taken from the parent sub-process activity and the Timer Start Event of the sub-process is triggered and waits for the timer to trigger. Once the time defined by the timing definition has been reached, the outgoing flow is taken.
Attributes
- Time Cycle
- Repeatedly triggers the timer after a specific time period. If the period is 0, the timer is triggered only once.
- Time Cycle Language
- Set to None for the default interval, or Cron for the following Time Cycle property format: [#d][#h][#m][#s][#[ms]]
- Time Duration
- Marks the timer as a one-time expiration timer. It is the delay after which the timer fires. Possible values are a String interval, a process variable, or the ISO-8601 date format.
- Time Date
- Starts the process at the specified date and time in the ISO-8601 date format.
A.6.1.1.4. Escalation Start Event
The Escalation Start Event is a start event that is triggered by an escalation with a particular escalation code. For further information, see Section A.3.3, “Escalation”.
Process can contain multiple Escalation Start Events. The process instance with an Escalation Start Event starts its execution when it receives the defined escalation object. The process is instantiated and the Escalation Start Event is executed immediately, which means its outgoing flow is taken.
Attributes
- Escalation Code
- Expected escalation Code.
A.6.1.1.5. Conditional Start Event
The Conditional Start Event is a start event with a Boolean condition definition. The execution is triggered when the condition first evaluates to false and then to true. The process execution starts only if the condition is evaluated to true after the start event has been instantiated.
A process can contain multiple Conditional Start Events.
Attributes
- Expression
- A Boolean condition that starts the process execution when evaluated to true.
- Language
- The language of the Expression attribute.
A.6.1.1.6. Error Start Event
A process or sub-process can contain multiple Error Start Events, which are triggered when an error object with a particular ErrorRef property is received. The error object can be produced by an Error End Event, and it signals an incorrect process ending. The process instance with the Error Start Event starts execution after it has received the respective error object. The Error Start Event is executed immediately upon receiving the error object, which means its outgoing flow is taken.
Attributes
- ErrorRef
- A code of the expected error object.
A.6.1.1.7. Compensation Start Event
A Compensation Start Event is used to start a Compensation Event sub-process when using a sub-process as the target activity of a Compensation Intermediate Event.
A.6.1.1.8. Signal Start Event
The Signal Start Event is triggered by a signal with a particular signal code. For further information, see Section A.3.1, “Signals”.
A process can contain multiple Signal Start Events. The Signal Start Event only starts its execution within the Process instance after the instance has received the respective Signal. Then, the Signal Start Event is executed, which means its outgoing flow is taken.
Attributes
- SignalRef
- The expected Signal Code.
A.6.2. Intermediate Events
A.6.2.1. Intermediate Events
“... the Intermediate Event indicates where something happens (an Event) somewhere between the start and end of a Process. It will affect the flow of the Process, but will not start or (directly) terminate the Process.[3]”
An intermediate event handles a particular situation that occurs during process execution. The situation is a trigger for an intermediate event.
In a process, intermediate events can be placed as follows:
- On an activity boundary with one outgoing flow
If the event occurs while the activity is being executed, the event is triggered and its outgoing flow is taken. One activity may have multiple boundary intermediate events. Note that depending on the behavior you require from the activity with the boundary intermediate event, you can use either of the following intermediate event types:
- Interrupting: The activity execution is interrupted and the execution of the intermediate event is triggered.
- Non-interrupting: The intermediate event is triggered and the activity execution continues.
Based on the type of the event trigger, the following Intermediate Events are distinguished:
- Timer Intermediate Event
- Delays the execution of the outgoing flow.
- Conditional Intermediate Event
- Is triggered when its condition evaluates to true.
- Error Intermediate Event
- Is triggered by an error object with the given error code.
- Escalation Intermediate Event
Has two subtypes:
- Catching Escalation Intermediate Event, which is triggered by an escalation event.
- Throwing Escalation Intermediate Event, which produces an escalation event when executed.
- Signal Intermediate Event
Has two subtypes:
- Catching Signal Intermediate Event, which is triggered by a signal.
- Throwing Signal Intermediate Event, which produces a signal when executed.
- Message Intermediate Event
Has two subtypes:
- Catching Message Intermediate Event, which is triggered by a message object.
- Throwing Message Intermediate Event, which produces a message object when executed.
- Compensation Intermediate Event
Has two subtypes:
- Catching Compensation Intermediate Event, which is triggered by a compensation object.
- Throwing Compensation Intermediate Event, which produces a compensation object when executed.
A.6.2.2. Intermediate Event types
A.6.2.2.1. Timer Intermediate Event
A timer intermediate event allows you to delay workflow execution or to trigger the workflow execution periodically. It represents a timer that can trigger one or multiple times after a given period of time. When triggered, the timer condition, that is the defined time, is checked and the outgoing flow is taken. For more information about timing, see Section A.5, “Timing”.
When placed in the process workflow, a timer intermediate event has one incoming flow and one outgoing flow. Its execution starts when the incoming flow transfers to the event. When placed on an activity boundary, the execution is triggered at the same time as the activity execution.
The timer is canceled if the timer element is canceled, for example by completing or aborting the enclosing process instance.
Attributes
- Time Cycle
- Repeatedly triggers the timer after a specific time period. If the period is 0, the timer is triggered only once.
- Time Cycle Language
- Set to None for the default interval, or Cron for the following Time Cycle property format: [#d][#h][#m][#s][#[ms]]
- Time Duration
- Marks the timer as a one-time expiration timer. It is the delay after which the timer fires. Possible values are a String interval, a process variable, or the ISO-8601 date format.
- Time Date
- Triggers the timer at the specified date and time in the ISO-8601 date format.
A.6.2.2.2. Conditional Intermediate Event
A Conditional Intermediate Event is an intermediate event with a boolean condition as its trigger. The event triggers further workflow execution when the condition evaluates to true and its outgoing flow is taken.
The event must define the Expression property. When placed in the process workflow, a Conditional Intermediate Event has one incoming flow, one outgoing flow, and its execution starts when the incoming flow transfers to the event. When placed on an activity boundary, the execution is triggered at the same time as the activity execution. Note that if the event is non-interrupting, the event triggers continuously while the condition is true.
Attributes
- Expression
- A Boolean condition that triggers the execution when evaluated to true.
- Language
- The language of the Expression attribute.
A.6.2.2.3. Compensation Intermediate Event
A compensation intermediate event is a boundary event attached to an activity in a transaction sub-process. It can finish with a compensation end event or a cancel end event. The compensation intermediate event must be associated with a flow, which is connected to the compensation activity.
The activity associated with the boundary compensation intermediate event is executed if the transaction sub-process finishes with the compensation end event. The execution continues with the respective flow.
A.6.2.2.4. Message Intermediate Event
A Message Intermediate Event is an intermediate event that allows you to manage a message object. Use one of the following events:
- Throwing Message Intermediate Event produces a message object based on the defined properties.
- Catching Message Intermediate Event listens for a message object with the defined properties.
Throwing Message Intermediate Event
When reached during execution, a Throwing Message Intermediate Event produces a message object and the execution continues to its outgoing Flow.
Attributes
- MessageRef
- ID of the produced Message object.
Catching Message Intermediate Event
When reached during execution, a Catching Message Intermediate Event awaits a message object defined in its properties. Once the message object is received, the event triggers execution of its outgoing flow.
Attributes
- MessageRef
- ID of the expected Message object.
- CancelActivity
- If the event is placed on the boundary of an activity and the CancelActivity property is set to true, the activity execution is canceled when the event receives its message object.
A.6.2.2.5. Escalation Intermediate Event
An Escalation Intermediate Event is an intermediate event that allows you to produce or consume an escalation object. Depending on the action the event element should perform, you need to use either of the following:
- Throwing Escalation Intermediate Event produces an escalation object based on the defined properties.
- Catching Escalation Intermediate Event listens for an escalation object with the defined properties.
Throwing Escalation Intermediate Event
When reached during execution, a Throwing Escalation Intermediate Event produces an escalation object and the execution continues to its outgoing flow.
Attributes
- EscalationCode
- ID of the produced escalation object.
Catching Escalation Intermediate Event
When reached during execution, a Catching Escalation Intermediate Event awaits an escalation object defined in its properties. When the object is received, the event triggers execution of its outgoing Flow.
Attributes
- EscalationCode
- Code of the expected Escalation object.
- CancelActivity
- If the event is placed on the boundary of an activity and the CancelActivity property is set to true, the activity execution is canceled when the event receives its escalation object.
A.6.2.2.6. Error Intermediate Event
An Error Intermediate Event is an intermediate event that can be used only on an activity boundary. It allows the process to react to an Error End Event in the respective activity. The activity must not be atomic. When the activity finishes with an Error End Event that produces an error object with the respective ErrorCode property, the Error Intermediate Event catches the error object and execution continues to its outgoing flow.
A.6.2.2.6.1. Catching Error Intermediate Event
When reached during execution, a Catching Error Intermediate Event awaits an error object defined in its properties. Once the object is received, the event triggers execution of its outgoing Flow.
Attributes
- ErrorRef
- The reference number of the expected error object.
A.6.2.2.7. Signal Intermediate Event
A Signal Intermediate Event enables you to produce or consume a signal object. Use either of the following:
- Throwing Signal Intermediate Event produces a signal object based on the defined properties.
- Catching Signal Intermediate Event listens for a signal object with the defined properties.
Throwing Signal Intermediate Event
When reached on execution, a Throwing Signal Intermediate Event produces a signal object and the execution continues to its outgoing flow.
Attributes
- SignalRef
- The signal code that will be sent.
- Signal Scope
You can choose one of the following scopes:
- Process Instance: Catch events in the same process instance can catch this signal.
- Default: Catch events in a given KIE session can catch this signal. The behavior varies depending on the KIE session strategy:
  - Singleton: Signal reaches all the process instances available to the KIE session.
  - Per request: Signal reaches only the current process instance and starts processes with a Signal Start Event.
  - Per process instance: Same as Per request.
- Project: Signal reaches only active process instances of a given deployment and starts processes with a Signal Start Event.
- External: Enables the signal to reach the same process instances as with the Project scope, as well as process instances across deployments. To send the signal to a process instance across deployments, create a SignalDeploymentId process variable that provides information about which deployment or project should be the target of the signal. Broadcasting the signal would have a negative impact on performance in larger environments.
A.6.2.2.7.1. Catching Signal Intermediate Event
When reached during execution, a Catching Signal Intermediate Event awaits a signal object defined in its properties. Once the object is received, the event triggers execution of its outgoing flow.
Attributes
- SignalRef
- Reference code of the expected signal object.
- CancelActivity
- If the event is placed on the boundary of an activity and the CancelActivity property is set to true, the activity execution is canceled when the event receives its signal object.
A.6.3. End Events
An end event is a node that ends a particular workflow. It has one or more incoming sequence flows and no outgoing flow.
A process must contain at least one end event.
During runtime, an end event finishes the process workflow. The end event can finish only the workflow that reached it, or all workflows in the process instance, depending on the end event type.
A.6.3.1. End Event types
A.6.3.1.1. Simple End Event
The Simple End Event finishes the incoming workflow, that means it consumes the incoming token. Any other running workflows in the process or sub-process remain uninfluenced.
In Red Hat JBoss BPM Suite, the Simple End Event has the Terminate property in its Property tab. This is a Boolean property that turns a Simple End Event into a Terminate End Event when set to true.
A.6.3.1.2. Message End Event
When a flow enters a Message End Event, the flow finishes and the end event produces a message as defined in its properties.
A.6.3.1.3. Escalation End Event
The Escalation End Event finishes the incoming workflow, that means it consumes the incoming token, and produces an escalation signal as defined in its properties, triggering the escalation process.
A.6.3.1.4. Terminate End Event
The Terminate End Event finishes all execution flows in the given process instance. Activities being executed are canceled. If a Terminate End Event is reached in a sub-process, the entire process instance is terminated.
A.6.3.1.5. Throwing Error End Event
The Throwing Error End Event finishes the incoming workflow, that means it consumes the incoming token, and produces an error object. Any other running workflows in the process or sub-process remain uninfluenced.
Attributes
- ErrorRef
- The reference code of the produced error object.
A.6.3.1.6. Cancel End Event
The Cancel End Event triggers compensation events defined for the namespace, and the process or sub-process finishes as CANCELED.
A.6.3.1.7. Compensation End Event
A Compensation End Event is used to finish a transaction sub-process and trigger the compensation defined by the Compensation Intermediate Event attached to the boundary of the sub-process activities.
A.6.3.1.8. Signal End Event
A throwing Signal End Event is used to finish a process or sub-process flow. When the execution flow enters the element, the execution flow finishes and produces a signal identified by its SignalRef property.
A.6.4. Scope of Events
An event can send signals globally or be limited to a single process instance. You can use the scope attribute of events to define whether a signal is considered internal (only for one process instance) or external (for all process instances that are waiting). The scope attribute, called Signal Scope on the Properties panel of the Process Designer, allows you to change the scope of signal Throw Intermediate or End Events.
The Scope data input is an optional property implemented to provide the following scope of throw events:
- Process Instance: Only catch events in the same process instance will be able to catch this signal.
- Default: Catch events in a given KIE session will be able to catch this signal. The behavior varies depending on the KIE session strategy:
  - Singleton: Signal reaches all process instances available to the KIE session.
  - Per request: Signal reaches only the current process instance and starts processes with a Signal Start Event.
  - Per process instance: Same as Per request.
- Project: Signal reaches all active process instances of a given deployment and starts processes with a Signal Start Event.
- External: Enables the signal to reach the same process instances as with the Project scope, as well as process instances across deployments. To send the signal to a process instance across deployments, create a SignalDeploymentId process variable that provides information about which deployment or project should be the target of the signal. Broadcasting the signal would have a negative impact on performance in larger environments.
A.7. Gateways
A.7.1. Gateways
“Gateways are used to control how Sequence Flows interact as they converge and diverge within a Process.[4]”
Gateways are used to create or synchronize branches in the workflow using a set of conditions, which is called the gating mechanism. Gateways are of two types:
- Converging, that is merging multiple flows into one flow.
- Diverging, that is splitting one Flow into multiple flows.
A single gateway cannot have both multiple incoming and multiple outgoing flows.
You can use the following types of gateways:
Parallel (AND)
- Converging AND gateway waits for all incoming flows before continuing to the outgoing flow.
- Diverging AND gateway starts all outgoing flows simultaneously.
Inclusive (OR)
- Converging OR gateway waits for all incoming flows whose condition evaluates to true.
- Diverging OR gateway starts all outgoing flows whose condition evaluates to true.
Exclusive (XOR)
- Converging XOR gateway waits for the first incoming flow whose condition evaluates to true.
- Diverging XOR gateway starts only one outgoing flow.
- Data-based exclusive gateways, which can be both diverging and converging, and are used to make decisions based on available data. For further information, see Section A.7.2.4, “Data-based Exclusive Gateway”.
- Event-based gateways, which can only be diverging, and are used for reacting to events. For further information, see Section A.7.2.1, “Event-based Gateway”.
A.7.2. Gateway types
A.7.2.1. Event-based Gateway
“The Event-Based Gateway has pass-through semantics for a set of incoming branches (merging behavior). Exactly one of the outgoing branches is activated afterwards (branching behavior), depending on which of Events of the Gateway configuration is first triggered. [5]”
The gateway is only diverging and allows you to react to possible events as opposed to the Data-based Exclusive Gateway, which reacts to the process data. The outgoing flow is taken based on the event that occurs. Only one outgoing flow is taken at a time.

The gateway might act as a start event, where the process is instantiated only if one of the intermediate events connected to the Event-Based Gateway occurs.
A.7.2.2. Parallel Gateway
“A Parallel Gateway is used to synchronize (combine) parallel flows and to create parallel flows.[6]”
- Diverging
- Once the incoming flow is taken, all outgoing flows are taken simultaneously.
- Converging
- The gateway waits until all incoming flows have entered and only then triggers the outgoing flow.
A.7.2.3. Inclusive Gateway
- Diverging
Once the incoming flow is taken, all outgoing flows that evaluate to true are taken. Connections with lower priority numbers are triggered before those with higher priority numbers. Priorities are evaluated, but the BPMN2 specification does not guarantee the priority order. It is recommended that you do not depend on the priority attribute in your workflow.
Important: Ensure that at least one of the outgoing flows evaluates to true at runtime. Otherwise, the process instance terminates with a runtime exception.
- Converging
- The gateway merges all incoming Flows previously created by a diverging Inclusive Gateway; that is, it serves as a synchronizing entry point for the Inclusive Gateway branches.
Attributes
- Default gate
- The outgoing flow taken by default if no other flow can be taken.
A.7.2.4. Data-based Exclusive Gateway
- Diverging
The gateway triggers exactly one outgoing flow. The flow with its constraint evaluated to true and the lowest priority number is taken.
Important: Ensure that at least one of the outgoing flows evaluates to true at runtime. Otherwise, the process instance terminates with a runtime exception.
- Converging
- The gateway allows a workflow branch to continue to its outgoing flow as soon as it reaches the gateway. When one of the incoming flows triggers the gateway, the workflow continues to the outgoing flow of the gateway. If it is triggered from more than one incoming flow, it triggers the next node for each trigger.
Attributes
- Default gate
- The outgoing flow taken by default if no other flow can be taken.
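The constraints on the outgoing flows are Boolean expressions evaluated against process data. The following is a hedged sketch of a Java-dialect constraint on one outgoing flow; it assumes a process variable named amount of type Integer that is resolvable by name inside the constraint:

// The constraint must return a boolean; this flow is taken when it returns true.
return amount != null && amount.intValue() > 1000;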
A.8. Activities, Tasks and Sub-Processes
A.8.1. Activity
"An Activity is work that is performed within a Business Process." [7]
This is opposed to the execution semantics of other elements, which define the process logic.
An activity can be:
- A sub-process: compound; it can be broken down into multiple process elements.
- A task: atomic; it represents a single unit of work.
An activity in Red Hat JBoss BPM Suite expects one incoming and one outgoing flow. If you want to design an activity with multiple incoming and multiple outgoing flows, set the system property jbpm.enable.multi.con to true. For more information about system properties, see chapter System Properties of the Red Hat JBoss BPM Suite Administration and Configuration Guide.
Activities share the properties ID and Name. Note that activities, that is all tasks and sub-processes, have additional properties specific to the given activity or task type.
A.8.2. Activity Mechanisms
A.8.2.1. Multiple Instances
You can run activities in multiple instances during execution. Individual instances are executed in a sequence. The instances are run based on a collection of elements. For every element in the collection, a new activity instance is created.
Every multiple-instance activity has the Collection Expression attribute that maps the input collection of elements to a single element. The multiple-instance activity then iterates through all the elements of the collection.
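The collection is typically supplied as a process variable when the process is started. The following is a minimal sketch; the process ID and the variable name items are illustrative and must match the Collection Expression configured on the multiple-instance activity:

// Each element of "items" produces one instance of the multiple-instance activity.
List<String> items = Arrays.asList("first", "second", "third");

Map<String, Object> params = new HashMap<String, Object>();
params.put("items", items);
ksession.startProcess("org.jboss.exampleProcess", params);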
A.8.2.2. Activity Types
A.8.2.2.1. Call Activity
“A Call Activity identifies a point in the Process where a global Process or a Global Task is used. The Call Activity acts as a 'wrapper' for the invocation of a global Process or Global Task within the execution. The activation of a call Activity results in the transfer of control to the called global Process or Global Task. [8]”
A call activity, that is a Reusable sub-process, represents an invocation of a process from within a process. The activity must have one incoming and one outgoing flow.
When the execution flow reaches the activity, the activity creates an instance of a process with the defined ID.
Attributes
- Called Element
- The ID of the process to be called and instantiated by the activity.
A.8.3. Tasks
A task is the smallest unit of work in a process flow. Red Hat JBoss BPM Suite uses the BPMN guidelines to separate tasks based on the types of inherent behavior that the tasks represent. This section defines all task types available in Red Hat JBoss BPM Suite except for the User Task. For more information about the User Task, see Section A.8.5, “User Task”.
A.8.3.1. None Task
"Abstract Task: Upon activation, the Abstract Task completes. This is a conceptual model only; an Abstract Task is never actually executed by an IT system." [9]
A.8.3.2. Send Task
"Send Task: Upon activation, the data in the associated Message is assigned from the data in the Data Input of the Send Task. The Message is sent and the Send Task completes." [10]
Attributes
- MessageRef
- The ID of the generated message object.
In Red Hat JBoss BPM Suite 6.x, the Send Task is not supported out of the box. A custom WorkItemHandler implementation is needed to use the Send Task.
A.8.3.3. Receive Task
"Upon activation, the Receive Task begins waiting for the associated Message. When the Message arrives, the data in the Data Output of the Receive Task is assigned from the data in the Message, and Receive Task completes." [11]
Attributes
- MessageRef
- ID of the associated message object.
A.8.3.4. Manual Task
"Upon activation, the Manual Task is distributed to the assigned person or group of people. When the work has been done, the Manual Task completes. This is a conceptual model only; a Manual Task is never actually executed by an IT system." [12]
A.8.3.5. Service Task
Use a Service Task to invoke web services and Java methods.
Table A.1. Service Task Attributes
| Attribute | Description |
|---|---|
| Service Implementation | The underlying technology used for implementing the task. Possible values are Java and WebService. |
| Service Operation | Specifies the operation that is invoked by the task: typically a particular method of a Java class or a web service method. |
A.8.3.5.1. Using Service Task to Invoke Web Service
The preferred way of invoking web services is to use a WS Task, as opposed to a generic Service Task. For more information, see Section B.1, “WS Task”.
The default implementation of a Service Task in the BPMN2 specification is a web service. The web service support is based on the Apache CXF dynamic client, which provides a dedicated Service Task handler that implements the WorkItemHandler interface:
org.jbpm.process.workitem.bpmn2.ServiceTaskHandler
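If you run processes in an embedded engine rather than in Business Central, the handler usually has to be registered manually. The following is a minimal sketch; it assumes the default Service Task work item name:

// Registers the CXF-based handler for Service Tasks with the Web Service implementation.
ksession.getWorkItemManager().registerWorkItemHandler(
    "Service Task", new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler());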
As a part of the process definition, you must first configure the web service:
- Open the process in Process Editor.
- Open the Properties panel on the right and click the Value field next to the Imports property. Click the arrow that appears on the right to open the Editor for Imports window.
Click Add Import to import the required WSDL (Web Services Description Language) values. For example:
- Import Type: wsdl
- WSDL Location: http://localhost:8080/sample-ws-1/SimpleService?wsdl
  The WSDL location points to the WSDL file of your service.
- WSDL Namespace: http://bpmn2.workitem.process.jbpm.org/
  The WSDL namespace must match targetNamespace from your WSDL file.
- Drag a Service Task (Tasks → Service) from the Object Library into the canvas.
Click the task, and in the Properties panel on the right, set the following:
- Service Implementation: Webservice
- Service Interface: SimpleService
- Service Operation: hello
In the Core Properties section, click the Value field next to the Assignments property. Click the arrow that appears on the right to open the Data I/O window and do the following:
- Provide a data input named Parameter.
- Optionally, provide a data output named Result.
For an example setting in the Service Task Data I/O window, see the image below:
![](https://access.redhat.com/webassets/avalon/d/Red_Hat_JBoss_BPM_Suite-6.4-Development_Guide-en-US/images/ff2ab14fb2c5e2f42561c1d10c4e0c1a/service-task-data-io.png)
To use a request or a response object of the service as a process variable, the object must implement the java.io.Serializable interface so that persistence works correctly. To add the interface while generating classes from the WSDL, configure JAXB:
Create an XML binding file with the following contents.
<?xml version="1.0" encoding="UTF-8"?> <bindings xmlns="http://java.sun.com/xml/ns/jaxb" xmlns:xsi="http://www.w3.org/2000/10/XMLSchema-instance" xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc" xsi:schemaLocation="http://java.sun.com/xml/ns/jaxb http://java.sun.com/xml/ns/jaxb/bindingschema_2_0.xsd" version="2.1"> <globalBindings> <serializable uid="1" /> </globalBindings> </bindings>Add the Apache CXF Maven plug-in (
cxf-codegen-plugin) to thepom.xmlfile of the project:<build> <plugins> <plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-codegen-plugin</artifactId> <version>CXF_VERSION</version> ... </plugin> </plugins> <build>
A.8.3.5.2. Using Service Task to Invoke Java Method
You can use a Service Task to invoke a method of a particular Java class. The method must have exactly one parameter and return a single value. If the invoked Java class is not a part of the project, add all the required dependencies to the pom.xml file of the project.
The following procedures use an example class WeatherService with a method int getTemperature(String location). The method has one parameter (String location) and returns a single value (int temperature).
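The WeatherService class itself is not part of the product. A minimal illustrative version, with a hard-coded value standing in for a real lookup, could look as follows:

package org.jboss.weather;

// Illustrative service class used by the Service Task examples in this section.
public class WeatherService {

    // Exactly one parameter in, one value out, as the Service Task requires.
    public int getTemperature(String location) {
        // A real implementation would query a weather provider here.
        return "Brno".equals(location) ? 25 : 20;
    }
}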
Invoking Java Method in Red Hat JBoss Developer Studio
- In Red Hat JBoss Developer Studio, open the business process that you want to add a Service Task to, or create a new process with a start and an end event.
- Select Window → Show View → Properties, and click Interfaces in the lower-right corner of the Properties panel.
Click the Import icon to open the Browse for a Java type to Import window. To find the Java type, start typing WeatherService in the Type field. In the Available Methods list box below, select the int getTemperature(String) method. Click OK.
Note that it is also possible to select the Create Process Variables check box to automatically import process variables with generated names. In this procedure, the process variables are created manually.
In the Properties panel, click Data Items. Click the Add icon to create a local process variable:
Enter the process variable details:
- Name: location
- Data Type: java.lang.String
Create a second process variable:
- Name: temperature
- Data Type: java.lang.Integer
Add a Service Task to the process:
- Drag a Service Task (Tasks → Service Task) from the Palette panel on the right to the canvas.
Double-click the Service Task on the canvas to open the Edit Service Task window. Click Service Task and set the following properties:
- Implementation: Java
- Operation: WeatherService/getTemperature
- Source: location
- Target: temperature
- Click OK and save the process.
The Java application that starts the business process must be available. If you created a new business process and do not have the application, create a new jBPM project with an example application:
- Click File → New → Other → jBPM → jBPM project. Click Next.
- Select the second option and click Next to create a project and populate it with some example files to help you get started quickly.
- Enter a project name and select the Maven radio button. Click Finish.
Register work item handlers. In the src/main/resources/META-INF/ directory, create a file named kie-deployment-descriptor.xml with the following contents:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd"
                       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <persistence-unit>org.jbpm.domain</persistence-unit>
  <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
  <audit-mode>JPA</audit-mode>
  <persistence-mode>JPA</persistence-mode>
  <runtime-strategy>SINGLETON</runtime-strategy>
  <marshalling-strategies/>
  <event-listeners/>
  <task-event-listeners/>
  <globals/>
  <work-item-handlers>
    <work-item-handler>
      <resolver>mvel</resolver>
      <identifier>new org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler()</identifier>
      <parameters/>
      <name>Log</name>
    </work-item-handler>
    <work-item-handler>
      <resolver>mvel</resolver>
      <identifier>new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader)</identifier>
      <parameters/>
      <name>Service Task</name>
    </work-item-handler>
    <work-item-handler>
      <resolver>mvel</resolver>
      <identifier>new org.jbpm.process.workitem.webservice.WebServiceWorkItemHandler(ksession, classLoader)</identifier>
      <parameters/>
      <name>WebService</name>
    </work-item-handler>
    <work-item-handler>
      <resolver>mvel</resolver>
      <identifier>new org.jbpm.process.workitem.rest.RESTWorkItemHandler(classLoader)</identifier>
      <parameters/>
      <name>Rest</name>
    </work-item-handler>
  </work-item-handlers>
  <environment-entries/>
  <configurations/>
  <required-roles/>
  <remoteable-classes/>
  <limit-serialization-classes>true</limit-serialization-classes>
</deployment-descriptor>

Open the ProcessMain.java file that is located in the src/main/java directory, and modify the code of the application that starts the business process:

Initialize the process variables:

Map<String, Object> arguments = new HashMap<>();
arguments.put("location", "Brno");
arguments.put("temperature", -1);

Start the process:
ksession.startProcess("demo-package.demo-service-task", arguments);
Invoking Java Method in Business Central
The invoked Java class must be available either on the class path or in the dependencies of the project. To add the class to the dependencies of the project:
- In Business Central, click Authoring → Artifact Repository.
- Click Upload to open the Artifact upload window.
- Choose the .jar file, and click the upload icon.
- Click Authoring → Project Authoring, and find or create the project you want to use.
- Click Open Project Editor and then Project Settings: Project General Settings → Dependencies.
- Click Add from repository, locate the uploaded .jar file, and click Select.
- Save the project.
- Open or create the business process to which you want to add a Service Task.
In Process Editor, open the Properties panel on the right and click the Value field next to the Imports property. Click the arrow that appears to open the Editor for Imports window. In the window:
Click Add Import and specify the following values:
- Import Type: default
- Custom Class Name: the fully qualified name of the invoked Java class, for example org.jboss.weather.WeatherService
- Click Ok.
Create process variables:
- In the Properties panel, click the Value field next to the Variable Definitions property. Click the arrow that appears to open the Editor for Variable Definitions window.
Click Add Variable to add the following two process variables:
- Name: temperature, Defined Types: Integer (or Custom Type: java.lang.Integer)
- Name: location, Defined Types: String (or Custom Type: java.lang.String)
- Click Ok.
- To add a Service Task into the process, drag and drop a Service Task (Tasks → Service) from the Object Library panel on the left into the canvas.
Click the Service Task on the canvas to open its properties on the right, and set the following properties:
- Service Interface: org.jboss.weather.WeatherService
- Service Operation: getTemperature
Click the Value field next to the Assignments property. Click the arrow that appears to open the Data I/O window and do the following:
Click Add next to Data Inputs and Assignments and add the following:
- Name: Parameter, Data Type: String, Source: location
- Name: ParameterType, Data Type: String, Source: java.lang.String (to add this value, click Constant … and type it manually)
Click Add next to Data Outputs and Assignments and add the following:
- Name: Result, Data Type: Integer, Target: temperature
- Click Save.
A.8.3.6. Business Rule Task
“A Business Rule Task provides a mechanism for the Process to provide input to a Business Rules Engine and to get the output of calculations that the Business Rules Engine might provide. [13]”
The task defines a set of rules that need to be evaluated and fired on task execution. Any rule defined as part of the ruleflow group in a rule resource is fired.
When a Business Rule Task is reached in the process, the engine starts executing the rules that belong to the defined ruleflow group. When there are no more active rules in the ruleflow group, the execution continues to the next element. While the ruleflow group is being executed, new activations belonging to it can still be added to the agenda, because other rules may modify the facts. Note that the process continues to the next element immediately if the ruleflow group contains no active rules.
If the ruleflow group was already active, it remains active and the execution continues once all active rules of the ruleflow group have been completed.
Attributes
- Ruleflow Group
- The name of the ruleflow group that includes the set of rules to be evaluated by the task. This attribute refers to the ruleflow-group keyword in your DRL file.
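In an embedded API scenario, the rules activated by a Business Rule Task are fired by the same code that drives the session. The following is a minimal sketch, assuming a KieSession that contains both the process definition and the rules, and an illustrative process ID of com.sample.evaluation:

import org.kie.api.runtime.KieSession;

public class BusinessRuleTaskExample {

    public static void run(KieSession ksession) {
        // Reaching the Business Rule Task activates the rules whose
        // ruleflow-group matches the task's Ruleflow Group attribute.
        ksession.startProcess("com.sample.evaluation");
        // Fire the activated rules; when no active rules remain in the
        // ruleflow group, the process continues to the next element.
        ksession.fireAllRules();
    }
}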
A.8.3.7. Script Task
A Script Task represents a script to be executed during the process execution.
The associated Script can access process variables and global variables. When using a Script Task:
- Avoid low-level implementation details in the process. A Script Task could be used to manipulate variables, but consider using a Service Task when modelling more complex operations.
- The script should be executed immediately. If there is the possibility that the execution could take some time, use an asynchronous Service Task.
- Avoid contacting external services through a Script Task. It would be interacting with external services without notifying the engine, which can be problematic. Model communication with an external service using a Service Task.
- Scripts should not throw exceptions. Runtime exceptions should be caught and managed, for example, inside the script or transformed into signals or errors that can then be handled inside the process.
When a Script Task is reached during execution, the script is executed and the outgoing flow is taken.
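Following the guidelines above, a script body should stay short and must not let runtime exceptions escape. The following is a hedged sketch of a Java-dialect script body; the variable and signal names are illustrative:

// Java-dialect script body of a Script Task; keep it short and fast.
try {
    String location = (String) kcontext.getVariable("location");
    kcontext.setVariable("greeting", "Hello from " + location);
} catch (RuntimeException e) {
    // Do not let the exception escape the script: turn it into a signal
    // that an event inside the process can handle.
    kcontext.getProcessInstance().signalEvent("Script-Error", e.getMessage());
}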
Attributes
- Script
- The script to be executed.
- Script Language
- The language in which the script is written.
From Red Hat JBoss BPM Suite 6.2 onwards, JavaScript is supported as a dialect in Script Tasks. To define a Script Task in Business Central and JBoss Developer Studio using the process design tool:
- Select a Script Task object from the Object Library menu on the left hand side and add it to the process design tool.
- In the Properties panel on the right hand side, open the Script property.
- Write the script to be executed in the Expression Editor window and click Ok.
Example A.3. Script Task in Business Central using JavaScript

A.8.4. Sub-Process
“A Sub-Process is an Activity whose internal details have been modeled using Activities, Gateways, Events, and Sequence Flows. A Sub-Process is a graphical object within a Process, but it also can be ‘opened up’ to show a lower-level Process. [14]”
You can understand a sub-process as a compound activity or a process in a process. When reached during execution, the element context is instantiated and the encapsulated process triggered. Note that, if you use a Terminating End Event inside a sub-process, the entire process instance that contains the sub-process is terminated, not just the sub-process. A sub-process ends when there are no more active elements in it.
The following sub-process types are supported:
- Ad-Hoc sub-process, which has no strict element execution order.
- Embedded sub-process, which is a part of the parent process execution and shares its data.
- Reusable sub-process, which is independent from its parent process.
- Event sub-process, which is only triggered on a start event or a timer.
Note that any sub-process type can be a multi-instance sub-process.
A.8.4.1. Embedded Sub-Process
An embedded sub-process encapsulates a part of the process.
It must contain a start event and at least one end event. Note that the element allows you to define local sub-process variables that are accessible to all elements inside this container.
A.8.4.2. AdHoc Sub-Process
“An Ad-Hoc Sub-Process is a specialized type of Sub-Process that is a group of Activities that have no REQUIRED sequence relationships. A set of Activities can be defined for the Process, but the sequence and number of performances for the Activities is determined by the performers of the Activities. [15]”
“An Ad-Hoc Sub-Process or Process contains a number of embedded inner Activities and is intended to be executed with a more flexible ordering compared to the typical routing of Processes. Unlike regular Processes, it does not contain a complete, structured BPMN diagram description, i.e., from Start Event to End Event. Instead the Ad-Hoc Sub-Process contains only Activities, Sequence Flows, Gateways, and Intermediate Events. An Ad-Hoc Sub-Process MAY also contain Data Objects and Data Associations. The Activities within the Ad-Hoc Sub-Process are not REQUIRED to have incoming and outgoing Sequence Flows. However, it is possible to specify Sequence Flows between some of the contained Activities. When used, Sequence Flows will provide the same ordering constraints as in a regular Process. To have any meaning, Intermediate Events will have outgoing Sequence Flows and they can be triggered multiple times while the Ad-Hoc Sub-Process is active. [16]”
Attributes
- AdHocCompletionCondition
- When this condition evaluates to true, the execution finishes.
- AdHocOrdering
- Enables you to choose parallel or sequential execution of the elements inside the sub-process.
- Variable Definitions
- Enables you to define process variables available only for elements of the sub-process.
A.8.4.3. Multi-instance Sub-Process
A Multiple Instances Sub-Process is instantiated multiple times when its execution is triggered. The instances are created in a sequential manner, that is, a new sub-process instance is created only after the previous instance has finished.
A Multiple Instances Sub-Process has one incoming connection and one outgoing connection.
Attributes
- MI collection input
- A collection to be iterated through. It is used to create the individual instances of the activity: the sub-process is run once for each element of this collection.
- MI collection output
- A collection of the sub-process execution results.
- MI completion condition
- An MVEL expression evaluated at the end of every instance. When it evaluates to true, the sub-process is considered finished and its outgoing flow is taken. Any remaining sub-process instances are cancelled.
- MI data input
- A variable name for each element from the collection that will be used in the process.
- MI data output
- An optional variable name for the collection of the results.
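For example, the MI collection input is typically supplied as a java.util.Collection process variable when the process is started, and each element is exposed to the sub-process under the MI data input name. The following is a minimal sketch with illustrative names (process ID com.sample.multiinstance, collection variable items):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.kie.api.runtime.KieSession;

public class MultiInstanceExample {

    public static void run(KieSession ksession) {
        // The collection bound to "MI collection input"; the sub-process
        // runs once per element, one instance after another.
        List<String> items = Arrays.asList("order-1", "order-2", "order-3");

        Map<String, Object> params = new HashMap<>();
        params.put("items", items);

        ksession.startProcess("com.sample.multiinstance", params);
    }
}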
A.8.4.4. Event Sub-Process
An event sub-process becomes active when its start event gets triggered. It can interrupt the parent process context or run in parallel to it.
With no outgoing or incoming connections, only an event or a timer can trigger the sub-process. The sub-process is not part of the regular control flow. Although self-contained, it is executed in the context of the bounding sub-process.
Use an event sub-process within a process flow to handle events that happen outside of the main process flow. For example, while booking a flight, two events may occur:
- Cancel booking (interrupting).
- Check booking status (non-interrupting).
Both these events can be modeled using the event sub-process.
A.8.5. User Task
"A User Task is a typical 'workflow' Task where a human performer performs the Task with the assistance of a software application and is scheduled through a task list manager of some sort." [17]
The User Task cannot be performed automatically by the system and therefore requires the intervention of a human user, the actor. The User Task is atomic.
On execution, the User Task element is instantiated as a task that appears in the list of tasks of one or multiple actors.
If a User Task element defines the Groups attribute, it is displayed in task lists of all users that are members of the group. Any of the users can claim the task. Once claimed, the task disappears from the task list of the other users.
Note that User Task is implemented as a domain-specific task and serves as a base for your custom tasks. For further information, see Section 4.14.1, “Work Item Definition”.
Attributes
- Actors
- A comma-separated list of users who can perform the generated task.
- Content
- The data associated with this task. This attribute does not affect TaskService behavior.
- CreatedBy
- The name of the user or ID of the process that created the task.
- Groups
- A comma-separated list of groups who can perform the generated task.
- Locale
- The locale for which the element is defined. This property is not used by the Red Hat JBoss BPM Suite engine at the moment.
- Notifications
- A definition of notification applied to the User Task. For further information, see Section A.8.5.3, “Notification”.
- Priority
- An integer value defining the User Task priority. The value influences the ordering of the User Task in the user’s task list and the simulation outcome.
- Reassignment
- The definition of escalation applied to the User Task. For further information, see Section A.8.5.2, “Reassignment”.
- ScriptLanguage
- The language of the script. Choose from Java, MVEL, or JavaScript.
- Skippable
- A Boolean value that defines whether the User Task can be skipped. If true, the actor of the User Task can decide not to complete it and the User Task is then never executed.
- Task Name
- Name of the User Task generated during runtime. It is displayed in the task list in Business Central.
Note that any other displayed attributes are used by features not restricted to the User Task element and are described in the chapters dealing with the particular mechanism.
A.8.5.1. User Task Lifecycle
When a User Task element is triggered during process execution, a User Task instance is created. The User Task instance execution is performed by the User Task service of the Task Execution Engine. For further information about the Task Execution Engine, see the Red Hat JBoss BPM Suite Administration and Configuration Guide. The process instance continues its execution only when the associated User Task has been completed or aborted.
The User Task lifecycle is as follows:
- When the process instance enters the User Task element, the User Task is in the Created state.
- This is usually a transient state and the User Task enters the Ready state immediately. The task appears in the task list of all the actors that are allowed to execute the task.
- When one of the actors claims the User Task, the User Task becomes Reserved. If a User Task has only one potential actor, it is automatically assigned to that actor upon creation.
- When the user who has claimed the User Task starts the execution, the User Task status changes to InProgress.
- On completion, the status changes to Completed or Failed, depending on the execution outcome.
Note that the User Task lifecycle can include other statuses if the User Task is reassigned (delegated or escalated), revoked, suspended, stopped, or skipped. For further details on the User Task lifecycle, see the Web Services Human Task specification.
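The same lifecycle can be driven programmatically through the TaskService API of the Task Execution Engine. The following is a sketch only, assuming a RuntimeEngine obtained from a runtime manager and a user john who is a potential owner of the task:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;

public class UserTaskLifecycleExample {

    public static void completeFirstTask(RuntimeEngine engine) {
        TaskService taskService = engine.getTaskService();

        // Ready: the task appears in the potential owner's task list.
        List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
        long taskId = tasks.get(0).getId();

        // Reserved: the user claims the task.
        taskService.claim(taskId, "john");
        // InProgress: the user starts working on the task.
        taskService.start(taskId, "john");

        // Completed: the user finishes the task, optionally passing output data.
        Map<String, Object> results = new HashMap<>();
        results.put("approved", Boolean.TRUE);
        taskService.complete(taskId, "john", results);
    }
}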
A.8.5.2. Reassignment
The reassignment mechanism implements the escalation and delegation capabilities for User Tasks, that is, automatic reassignment of a User Task to another actor or group after a User Task has remained inactive for a certain amount of time.
A reassignment can start if a User Task is in one of the following states for a defined amount of time:
- When not started: READY or RESERVED.
- When not completed: IN_PROGRESS.
When the conditions defined in the reassignment are met, the User Task is reassigned to the users or groups defined in the reassignment. If the actual owner is included in the new users or groups definition, the User Task is set to the READY state.
Reassignment is defined in the Reassignment property of User Task elements. The property can take an arbitrary number of reassignment definitions with the following parameters:
- Users: A comma-separated list of user IDs that the task is reassigned to on escalation. It can be a String or an expression, such as #{user-id}.
- Groups: A comma-separated list of group IDs that the task is reassigned to on escalation. It can be a String or an expression, such as #{group-id}.
- Expires At: A time definition of when the escalation is triggered. It can be a String or an expression, such as #{expiresAt}. For further information about the time format, see Section A.5, “Timing”.
- Type: The state in which the task needs to be at the given Expires At time so that the escalation is triggered.
A.8.5.3. Notification
The notification mechanism provides the capability to send an e-mail notification if a User Task is in one of the following states for the specified time:
- When not started: READY or RESERVED.
- When not completed: IN_PROGRESS.
A notification is defined in the Notification property of User Task elements. The property accepts an arbitrary number of notification definitions with the following parameters:
- Type: The state in which the User Task needs to be at the given Expires At time so that the notification is triggered.
- Expires At: A time definition of when the notification is triggered. It can be a String value or an expression, such as #{expiresAt}. For information about the time format, see Section A.5, “Timing”.
- From: The user or group ID used in the From field of the email notification message. It can be a String or an expression.
- To Users: A comma-separated list of user IDs to which the notification is sent. It can be a String or an expression, such as #{user-id}.
- To Groups: A comma-separated list of group IDs to which the notification is sent. It can be a String or an expression, such as #{group-id}.
- Reply To: A user or group ID that receives any replies to the notification. It can be a String or an expression, such as #{group-id}.
- Subject: The subject of the email notification. It can be a String or an expression.
- Body: The body of the email notification. It can be a String or an expression.
Available variables
A notification can reference process variables by using the #{processVariable} syntax. Similarly, task variables use the ${taskVariable} syntax.
In addition to custom task variables, the notification mechanism can use the following local task variables:
- taskId: The internal ID of the User Task instance.
- processInstanceId: The internal ID of the task’s parent process instance.
- workItemId: The internal ID of the work item that created the User Task.
- processSessionId: The knowledge session ID of the parent process instance.
- owners: A list of users and groups that are potential owners of the User Task.
- doc: A map that contains the task variables.
Example A.4. Body of notification with variables
<html>
<body>
<b>${owners[0].id} you have been assigned to a task (task-id ${taskId})</b><br>
You can access it in your task
<a href="http://localhost:8080/jbpm-console/app.html#errai_ToolSet_Tasks;Group_Tasks.3">inbox</a><br/>
Important technical information that can be of use when working on it<br/>
- process instance id - ${processInstanceId}<br/>
- work item id - ${workItemId}<br/>
<hr/>
Here are some task variables available
<ul>
<li>ActorId = ${doc['ActorId']}</li>
<li>GroupId = ${doc['GroupId']}</li>
<li>Comment = ${doc['Comment']}</li>
</ul>
<hr/>
Here are all potential owners for this task
<ul>
$foreach{orgEntity : owners}
<li>Potential owner = ${orgEntity.id}</li>
$end{}
</ul>
<i>Regards from jBPM team</i>
</body>
</html>

A.9. Connecting Objects
A.9.1. Connecting Objects
Connecting objects connect two elements. There are two main types of Connecting Objects:
- Sequence Flows, which connect flow elements of a process and define the flow of the execution (they transport the token from one element to another).
- Association Flows, which can connect any process elements but have no execution semantics.
A.9.2. Connecting Object Types
A.9.2.1. Sequence Flow
A sequence flow represents the transition between two flow elements. It establishes an oriented relationship between activities, events, and gateways, and defines their execution order.
- Condition Expression
- When this condition evaluates to true, the workflow takes the sequence flow.
If a sequence flow has a gateway element as its source, you need to define a conditional expression that is evaluated before the sequence flow is taken. If evaluated to false, the workflow attempts to switch to another sequence flow. If evaluated to true, the sequence flow is taken.
When defining the condition in Java, make sure to return a boolean value:
return <expression resolving to boolean>;
- Condition Expression Language
- You can use Java, JavaScript, MVEL, or Drools to define the condition expression.
When defining a Condition Expression, you can reference process and global variables. You can also use the kcontext variable, which holds the process instance information.
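For example, a Java condition expression on a sequence flow leaving a gateway could read a process variable through kcontext; the variable name and the threshold below are illustrative:

// Java condition expression; it must return a boolean value.
return ((Integer) kcontext.getVariable("temperature")) > 20;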
A.10. Swimlanes
Swimlanes visually group tasks related to one group or user. For example, you can create a marketing task swimlane to group all User Tasks related to marketing activities into one Lane.
A.10.1. Lanes
"A Lane is a sub-partition within a Process (often within a Pool)… " [18]
A Lane allows you to group some of the process elements and define their common parameters. Note that a lane may contain another lane.
To add a new Lane:
- Click the Swimlanes menu item in the Object Library.
- Drag and drop the Lane artifact to your process model.
This artifact is a box into which you can add your User Tasks.
Lanes should be given unique names and background colors to fully separate them into functional groups. You can do so in the properties panel of a lane.
During runtime, lanes auto-claim or assign tasks to a user who has completed a different task in that lane within the same process instance. This user must be eligible to claim the task, that is, the user must be a potential owner. If a User Task does not have an actor or group assigned, the task has no potential owners and the process stops its execution at runtime.
For example, suppose there are two User Tasks, UT1 and UT2, located in the same lane, and both have their group field set to the analyst value. When the process is started and UT1 is claimed, started, or completed by an analyst user, UT2 is claimed by and assigned to the user who completed UT1. If only UT1 has the analyst group assigned and UT2 has no user or group assignments, the process stops after UT1 has been completed.
A.11. Artifacts
A.11.1. Artifacts
Any object in the BPMN diagram that is not a part of the process workflow is an artifact. Artifacts have no incoming or outgoing flow objects. The purpose of artifacts is to provide additional information needed to understand the diagram.
A.11.2. Data Objects
Data objects are visualizations of process or sub-process variables. Note that not every process or sub-process variable must be depicted as a data object in the BPMN diagram. Data Objects have separate visualization properties and variable properties.
