WSO2 EI consists of two conceptual components. The ESB profile handles short-running integration flows while the business process profile handles long running stateful business processes. Following are definitions of some of the concepts and terminology associated with the ESB and business process profiles.
[ Transports ] [ Message builders and formatters ] [ Endpoints ] [ Inbound endpoints ] [ Proxy services ] [ REST APIs ] [ Topics ] [ Mediators ] [ Sequences ] [ Tasks ] [ Quality of Service (QoS) component ] [ Registry ] [ Management and configuration GUI ] [ Connectors ] [ Analytics ] [ Message routing ] [ Message filtering ] [ Message transformation ] [ Protocol switching ] [ Service chaining ] [ Store and forward ] [ Maven Multi Module (MMM) project ] [ Composite Application project ] [ Data integration ] [ Business Processes ] [ Microservices Framework ]
The ESB profile shipped with WSO2 EI comprises the following high-level components:
A transport is responsible for carrying messages that are in a specific format. The ESB profile supports all the widely used transports, including HTTP/S, JMS, and VFS, as well as domain-specific transports like FIX. You can easily add a new transport using the Axis2 transport framework and plug it into the ESB profile. Each transport provides a receiver, which the ESB profile uses to receive messages, and a sender, which it uses to send messages. The transport receivers and senders are independent of the ESB profile core.
For more information, see Working with Transports.
Message builders and formatters
When a message comes into the ESB profile, the receiving transport selects a message builder based on the message's content type. It uses that builder to process the message's raw payload data and convert it into common XML, which the ESB profile mediation engine can then read and understand. The ESB profile includes message builders for text-based and binary content.
Conversely, before a transport sends a message out from the ESB profile, a message formatter is used to build the outgoing stream from the message back into its original format. As with message builders, the message formatter is selected based on the message's content type.
You can implement new message builders and formatters using the Axis2 framework.
For more information, see Working with Message Builders and Formatters.
An endpoint defines an external destination for a message. An endpoint can connect to any external service once it is configured with the attributes or semantics needed for communicating with that service. For example, an endpoint could represent a URL, a mailbox, a JMS queue, or a TCP socket, along with the settings needed to connect to it.
You can specify an endpoint as an address endpoint, a WSDL endpoint, and more. An endpoint is defined independently of transports, allowing you to use the same endpoint with multiple transports. When you configure a message mediation sequence or a proxy service to handle the incoming message, you specify which transport to use and the endpoint where the message will be sent.
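As an illustrative sketch (the endpoint name and backend URL are hypothetical), a simple named address endpoint might be defined in the Synapse configuration language as follows:

```xml
<!-- A named address endpoint; the name and URL are illustrative -->
<endpoint name="StockQuoteEndpoint" xmlns="http://ws.apache.org/ns/synapse">
  <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
</endpoint>
```

Because the endpoint is named, it can be referenced from any sequence or proxy service, regardless of the transport the message arrived on.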
For more information, see Working with Endpoints.
An inbound endpoint is a message source that can be configured dynamically. Of the existing Axis2-based transports in the ESB profile, only the HTTP transport works in multi-tenant mode; inbound endpoints allow all transports to work in multi-tenant mode.
For more information, see Working with Inbound Endpoints.
Proxy services are virtual services that receive messages and optionally process them before forwarding them to a service at a given endpoint. This approach allows you to perform necessary transformations and introduce additional functionality without changing your existing service. Any available transport can be used to receive and send messages from the proxy services. A proxy service is externally visible and can be accessed using a URL similar to a normal web service address.
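A minimal proxy service sketch (the service and backend names are illustrative) that logs incoming messages before forwarding them to a backend endpoint could look like this:

```xml
<proxy name="StockQuoteProxy" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
  <target>
    <inSequence>
      <!-- transformations or additional mediation can be added here -->
      <log level="simple"/>
    </inSequence>
    <endpoint>
      <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
    </endpoint>
    <outSequence>
      <!-- return the backend response to the client -->
      <send/>
    </outSequence>
  </target>
</proxy>
```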
For more information, see Working with Proxy Services.
A REST API in the ESB profile is analogous to a web application deployed in the ESB profile. Each API is anchored at a user-defined URL context, much like how a web application deployed in a servlet container is anchored at a fixed URL context. An API will only process requests that fall under its URL context. A REST API defines one or more resources, which are logical components of an API that can be accessed by making a particular type of HTTP call.
A REST API resource is used by the ESB profile mediation engine to mediate incoming requests, forward them to a specified endpoint, mediate the responses from the endpoint, and send the responses back to the client that originally requested them. You can create an API resource to process defined HTTP request method(s) that are sent to the backend service. The In sequence handles incoming requests and sends them to the back-end service, and the Out sequence handles the responses from the back-end service and sends them back to the requesting client.
REST APIs allow you to send messages directly into the ESB profile using REST.
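The following sketch (the context, resource template, and backend URL are illustrative) shows an API with a single GET resource whose In sequence forwards the request to a backend and whose Out sequence returns the response:

```xml
<api name="OrderAPI" context="/order" xmlns="http://ws.apache.org/ns/synapse">
  <resource methods="GET" uri-template="/{orderId}">
    <inSequence>
      <send>
        <endpoint>
          <address uri="http://localhost:8280/services/OrderService"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <!-- send the backend response back to the client -->
      <send/>
    </outSequence>
  </resource>
</api>
```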
For more information, see Working with APIs.
A topic allows services to receive messages when a specific type of event occurs by subscribing to messages that have been published to a specific topic.
For more information, see Working with Topics and Events.
Mediators are individual processing units that perform a specific function, such as sending, transforming, or filtering messages. The ESB profile includes a comprehensive mediator library that provides functionality for implementing widely used Enterprise Integration Patterns (EIPs). You can also easily write a custom mediator to provide additional functionality using various technologies such as Java, scripting, and Spring.
For more information, see Mediators.
A sequence is a set of mediators organized into a logical flow, allowing you to implement pipes and filter patterns. You can add sequences to proxy services and REST APIs.
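For example, a named sequence (the name and endpoint are illustrative) that logs a message and then sends it onward might be defined as:

```xml
<sequence name="LogAndForwardSequence" xmlns="http://ws.apache.org/ns/synapse">
  <!-- log the full message, then forward it -->
  <log level="full"/>
  <send>
    <endpoint>
      <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
    </endpoint>
  </send>
</sequence>
```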
For more information, see Mediation Sequences.
A task allows you to run a piece of code triggered by a timer. The ESB profile provides a default task implementation, which you can use to inject a message to the ESB profile at a scheduled interval. You can also write your own custom tasks by implementing a Java interface.
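As a sketch, the default MessageInjector task could be scheduled as follows (the task name, interval, and target sequence are illustrative assumptions):

```xml
<task name="SampleInjectTask"
      class="org.apache.synapse.startup.tasks.MessageInjector"
      group="synapse.simple.quartz"
      xmlns="http://ws.apache.org/ns/synapse">
  <!-- run every 5 seconds -->
  <trigger interval="5"/>
  <property name="injectTo" value="sequence"/>
  <property name="sequenceName" value="SampleSequence"/>
</task>
```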
For more information, see Working with Scheduled Tasks.
Quality of Service (QoS) component
The Quality of Service (QoS) component implements security features, such as applying security policies to services.
For more information, see Applying Security to a Proxy Service.
A registry is a content store and metadata repository. WSO2 Enterprise Integrator provides a registry with a built-in repository that stores the configuration and configuration metadata that define your messaging architecture. You can also use an external registry/repository for resources such as WSDLs, schemas, scripts, XSLT and XQuery transformations, etc. You can hide or merge one or more remote registries behind a local registry interface, and you can configure the Enterprise Integrator to poll these registries to update its current configurations.
For more information, see Working with the Registry in WSO2 Admin Guide.
Management and configuration GUI
The Management Console provides a graphical user interface (GUI) that allows you to easily configure the components mentioned above.
A connector is a collection of templates that define operations that can be called from the ESB profile; connectors are used to connect the Enterprise Integrator to external third-party APIs. The ESB profile provides a variety of connectors via the WSO2 Connector Store.
For information on using a connector in your EI configuration, see the tutorial.
Monitoring and analytics are provided by the EI-Analytics profile and this can be started as a separate profile. This component provides an analytics dashboard with a host of statistical graphs, message flow diagrams and mediator properties. If required, the analytics can be extended with customizations. For more information, see WSO2 Enterprise Integrator Analytics.
For information on using the Analytics Dashboard, see the tutorial.
When a message comes into the ESB profile, it determines the recipient and routes the message accordingly. Routing can also be based on the content of the message. This is known as content-based routing and is implemented using the Switch mediator.
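For example, a Switch mediator sketch (the XPath expression and endpoint URLs are illustrative) that routes orders by stock symbol:

```xml
<switch source="//order/symbol" xmlns="http://ws.apache.org/ns/synapse">
  <case regex="IBM">
    <send>
      <endpoint>
        <address uri="http://localhost:9000/services/IBMStockQuoteService"/>
      </endpoint>
    </send>
  </case>
  <default>
    <!-- anything else goes to the generic backend -->
    <send>
      <endpoint>
        <address uri="http://localhost:9000/services/GenericStockQuoteService"/>
      </endpoint>
    </send>
  </default>
</switch>
```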
For information on implementing content-based routing, see the tutorial.
The ESB profile can filter out messages based on the message content using the Filter mediator. This allows you to implement complex logic, where messages are filtered and directed into different mediation flows.
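A Filter mediator sketch (the regex and sequence name are illustrative) that passes matching messages to one flow and drops the rest:

```xml
<filter source="get-property('To')" regex=".*/StockQuote.*" xmlns="http://ws.apache.org/ns/synapse">
  <then>
    <!-- matching messages continue in this flow -->
    <sequence key="StockQuoteHandlingSequence"/>
  </then>
  <else>
    <drop/>
  </else>
</filter>
```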
When the sender and receiver do not use the same data format, the ESB profile can translate messages between them. For information on how the ESB profile can be used for this, see the Message Translator pattern in the EIP Guide. The PayloadFactory mediator and Data Mapper mediator can be used to implement this. You can manipulate messages by adding and removing content, converting them to a completely different message format, and even validating messages based on the available validation mechanisms of the message format.
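As a sketch, the PayloadFactory mediator can rebuild a payload from parts of the incoming message (the target format, namespace, and XPath are illustrative):

```xml
<payloadFactory media-type="xml" xmlns="http://ws.apache.org/ns/synapse">
  <format>
    <m:getQuote xmlns:m="http://services.samples">
      <m:symbol>$1</m:symbol>
    </m:getQuote>
  </format>
  <args>
    <!-- $1 is filled from this expression against the incoming message -->
    <arg evaluator="xml" expression="//symbol"/>
  </args>
</payloadFactory>
```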
For an example of how you can implement message transformation using the Data Mapper mediator, see the tutorial.
The ESB profile can take messages that come in over one protocol and send them out over a completely different protocol (e.g., HTTP to JMS). Its protocol bridging technology takes the business content of a message arriving over one protocol and sends that content out in a different format and protocol.
Service chaining is a popular use case in the ESB profile, where several services are exposed as a single, aggregated service. The ESB profile integrates and sequentially calls these services so that the expected response can be provided to the client.
For information on implementing a simple service chaining scenario, see the tutorial.
Store and forward
The store-and-forward messaging pattern is used in asynchronous messaging. It is useful when integrating with systems that only accept message traffic at a given rate, and for handling failover scenarios. In this pattern, messages are sent to a Message Store, where they are temporarily stored before being delivered to their destination by a Message Processor. The ESB profile ships with several message store implementations and also allows you to implement a custom message store.
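A minimal in-memory sketch (the names, interval, and target endpoint are illustrative): messages placed in the store by a Store mediator are forwarded by a scheduled message processor:

```xml
<!-- in-memory message store -->
<messageStore name="InMemoryStore" xmlns="http://ws.apache.org/ns/synapse"/>

<!-- processor that periodically forwards stored messages -->
<messageProcessor name="ForwardingProcessor"
    class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
    messageStore="InMemoryStore"
    targetEndpoint="BackendEndpoint"
    xmlns="http://ws.apache.org/ns/synapse">
  <parameter name="interval">1000</parameter>
</messageProcessor>
```

In a mediation flow, `<store messageStore="InMemoryStore"/>` places the current message into the store.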
For information on implementing a store and forward pattern using the in-memory store of the ESB profile, see the tutorial.
For information on implementing guaranteed delivery in the ESB profile, see the example.
Maven Multi Module project
WSO2 Enterprise Integrator tooling creates separate projects and a separate Maven pom.xml file for most deployable artifacts. In Maven-centric development, however, you have a parent project and some child modules, including a separate distribution module that is a child module of the parent project. To achieve this model, you can create a Maven Multi Module project in your workspace, create your artifact projects nested within it, and then create the Composite Application project for the distribution module.
Building all deployable artifacts within a Maven Multi Module project allows you to build the deployable artifacts using Continuous Integration (CI) tools.
Composite Application project
To deploy the artifacts to the ESB profile, we must first package the artifact project/s into a Composite Application (C-App) project. A C-App also allows you to easily port your artifacts from one environment to another. For detailed information on C-Apps, see Introduction to Composite Applications in WSO2 Admin Guide.
Data services hosted within the ESB profile can be easily used for an integration solution. You can expose any type of datasource as a data service, which will simply decouple the datasource from its infrastructure and expose the capabilities of the datasource as a service.
Data services and resources provide a service-and-resource-interface to some data stored in a relational database. In a service interface, you must indicate how service requests map to queries against collections of tables in a relational database and how query results are mapped to service responses. In a resource interface, you must indicate how a set of resources map to queries and how query responses are returned as resource representations (or reports of resource creation or deletion, depending on the HTTP verb in use).
The following topics describe the data services configuration language and the key elements used when composing a data service, such as queries, databases, operations etc. along with example syntax.
[ Data services and resource language ] [ Configuring the datasource ] [ Defining queries ] [ Defining service operations ] [ Defining resources ] [ Defining event trigger ] [ Security configuration ] [ Sample data service configuration ]
Data services and resource language
Data services and resources are defined using the Data Services and Resource Language (DSRL), where a <data> element describes a data service or a resource. The common attributes of a <data> element are given in the following example:
|baseURI||a REQUIRED URI indicating the base URI for the operations and resources defined within the <data> element.|
|name||a REQUIRED name of the data service.|
|enableBatchRequest||an OPTIONAL boolean to enable the batch request feature.|
|enableBoxcarring||an OPTIONAL boolean to enable the boxcarring feature.|
|txManagerJNDIName||an OPTIONAL JNDI name for overriding the standard JNDI location for locating the JTA transaction manager|
|serviceNamespace||an OPTIONAL URI to uniquely identify the web service.|
|serviceGroup||an OPTIONAL name that is used to categorize data-services in different groups.|
|serviceStatus||an OPTIONAL string to enable WIP support (specifies whether the data service is deployed or a work in progress).|
|transports||an OPTIONAL string to enable the transports required for the data service. The possible values are "http", "https", "JMS" and "local".|
The following sample config gives the common elements used to connect to a datasource:
- config/@id: an OPTIONAL XML ID identifying the config element. If the configuration file has multiple <config> elements, then this attribute is required.
The actual set of properties is defined by each type of database connection (e.g., JDBC will have its own standard set).
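For example, a JDBC config sketch (the property names follow the JDBC datasource convention; the connection details are illustrative):

```xml
<config id="default">
  <property name="driverClassName">com.mysql.jdbc.Driver</property>
  <property name="url">jdbc:mysql://localhost:3306/employees</property>
  <property name="username">dbuser</property>
  <property name="password">secret</property>
</config>
```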
A query consists of parameters that describe how to map the result to an XML element. It is similar to a function that maps some parameters to an XML element. A query definition does not indicate how the parameters are acquired. Instead, it just lists the parameters that are needed, assuming that the parameters will be provided. If the query is at the top level (i.e., a direct child of <data>), then either an operation definition or a resource definition provides the context for the parameters. If the query is nested within a <result> element, then the parameter names refer to column names of the result table described in that <result> element.
The following sample config shows the common attributes of a <query> element:
|id||an OPTIONAL XML ID identifying the query.|
|useConfig||a REQUIRED reference to the datasource that is to be used for the query.|
|returnGeneratedKeys||an OPTIONAL boolean parameter to enable returning generated keys. Set this attribute to true only in INSERT queries, where the query inserts into a table that has an auto-incrementing key column. In such a case, the auto-incremented key value is added to the result set. Also see Returning Generated Keys.|
|param||a declaration of a parameter of the query|
|name||a REQUIRED name of the parameter.|
|sqlType||an OPTIONAL string containing a legal SQL type which defines the type of the parameter. If none is specified then defaults to string.|
|paramType||an OPTIONAL parameter type. If none is specified, it defaults to SCALAR.|
|ordinal||an integer, REQUIRED only for stored procedures, which maps the parameter position to the query.|
|defaultValue||an OPTIONAL default value of the input parameter.|
|validateCustom||class||a REQUIRED custom validation class to validate the input parameter.|
|validateLength||minimum||a REQUIRED integer when specifying the minimum length of the parameter.|
|maximum||a REQUIRED integer when specifying the maximum length of the parameter.|
|validatePattern||pattern||a REQUIRED string pattern to validate the string input parameter.|
|sql||a REQUIRED string containing the SQL query or SQL function to execute. See Calling an SQL Function in a Query.|
|dialect||an OPTIONAL string containing the JDBC driver prefix, used when SQL dialects are needed.|
|sparql||a REQUIRED string containing the SPARQL query to execute when using RDF as the datasource.|
|properties||an OPTIONAL XML to define advanced query properties. Each property is defined as a child element of this.|
|name||a REQUIRED name of the property.|
|result||a REQUIRED element describing how the table resulting from executing the query will be converted to an XML element. If any <column> or <query> child elements are present, then ONLY those are transferred as child elements of the result element (or elements, depending on whether result/@rowName is given). The order of the nested <column> or <query> elements defines the order of elements in the result element.|
|element||a REQUIRED QName which is the name of the element which will hold the results.|
|rowName||an OPTIONAL QName which is the name of the element wrapping each row of the result table if more than one element from the table is to be returned. If this attribute is not given, then only the first row is returned and hence no second level wrapper element is needed.|
|defaultNamespace||an OPTIONAL URI being the default namespace to use for the namespace name of elements and attributes that result columns are mapped to. Defaults to "" (meaning no namespace).|
|call-query||an OPTIONAL element (which may occur any number of times) which is used to execute a further query and produce an element which will be present in the parent element as a child. This is used primarily to use a value of a column as key to select data from a new table.|
|element||an OPTIONAL element (which may occur any number of times) indicating how a specific column in the result table is to be mapped into an element|
|element/@name||a REQUIRED QName giving the name of the element to put the column data into|
|element/@column||an OPTIONAL string giving the name of the column whose value is to be copied into the element.|
|element/@requiredRoles||an OPTIONAL string giving the names of roles that have permission to see the result element. By default, it is set to all users.|
|element/@export||an OPTIONAL name given to the element to be exported outside the query. This feature is used with boxcarring support.|
|element/@exportType||REQUIRED when using the export option. Specifies whether the exported element type is scalar or array.|
|element/@xsdType||an OPTIONAL indication of the XML Schema type of the element. If none is given defaults to the mapping of the SQL type of the result column named by @column to an XML Schema type as per [SQL XML Mapping]|
|attribute||an OPTIONAL element (which may occur any number of times) indicating how a specific column in the result table is to be mapped into an attribute of the element representing the current row|
|attribute/@name||a REQUIRED QName giving the name of the attribute to put the column data into|
|attribute/@column||an OPTIONAL string giving the name of the column whose value is to be copied into the attribute. Either @column or @param is required.|
|attribute/@param||an OPTIONAL string giving the name of the param whose value is to be copied into the attribute. Either @column or @param is required.|
|attribute/@requiredRoles||an OPTIONAL string giving the names of roles that have permission to see the result attribute. By default, it is set to all users.|
|attribute/@export||an OPTIONAL name given to the attribute to be exported outside the query. This feature is used with boxcarring support.|
|attribute/@exportType||REQUIRED when using the export option. Specifies whether the exported attribute type is scalar or array.|
|attribute/@xsdType||an OPTIONAL indication of the XML Schema type of the attribute. If none is given defaults to the mapping of the SQL type of the result column named by @column to an XML Schema type as per [SQL XML Mapping]|
|call-query||an OPTIONAL element (which may occur any number of times) indicating how a specific column in the result table is to be mapped into a query result.|
|with-param/@name||a REQUIRED name of the query to put the column data into|
|with-param/@query-param||an OPTIONAL string giving the name of the column whose value is to be copied into the element.|
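Putting the attributes above together, a query sketch (the table and column names are illustrative) that maps a result table to XML:

```xml
<query id="employeesByNumber" useConfig="default">
  <sql>SELECT first_name, last_name FROM employees WHERE employee_id = ?</sql>
  <param name="employee_id" sqlType="INTEGER" paramType="SCALAR"/>
  <result element="Employees" rowName="Employee">
    <element name="FirstName" column="first_name"/>
    <element name="LastName" column="last_name"/>
  </result>
</query>
```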
Defining service operations
Operation refers to a Web service operation defined by a query. The operation is defined as an invocation of a query indicating how the parameters of the query are computed or derived. The syntax is as follows:
- operation/@name: is the REQUIRED name of the operation.
- operation/@disableStreaming: an OPTIONAL boolean used to disable streaming. By default, streaming is enabled.
- operation/@description: an OPTIONAL string used to describe the operation.
- operation/call-query: describes how a query is to be invoked with the data received in the operation invocation.
- call-query/@href: an OPTIONAL reference to the query that is to be invoked. If this is missing then a query must be nested within this element.
- call-query/with-param: a description of a parameter binding for the query invocation: says how a named parameter's value is computed.
- with-param/@name: a REQUIRED NMTOKEN identifying the parameter whose value is being specified.
- with-param/@query-param: an OPTIONAL attribute indicating the name of the URI query parameter (from operation/@path) whose value is the value of this parameter.
- with-param/@column: an OPTIONAL attribute naming a column of the immediate parent <result> element. That is, this applies only for nested queries and serves the purpose of being able to use a query result as input to a nested query.
- with-param/@param: an OPTIONAL attribute naming a <param> of the parent <query>. That is, this applies only for nested queries and serves the purpose of being able to use a parameter of the parent query as input to a nested query.
- call-query/query: an OPTIONAL <query> being the anonymous query to be invoked as the implementation of this operation with the parameters identified above.
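The elements above can be combined into an operation sketch (the names are illustrative) that binds an incoming parameter to a query:

```xml
<operation name="getEmployeeByNumber">
  <call-query href="employeesByNumber">
    <!-- bind the operation's input to the query parameter -->
    <with-param name="employee_id" query-param="employee_id"/>
  </call-query>
</operation>
```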
This defines the resource identified by "new URI (/data/@baseURI, /data/resource/@path)" and indicates how the request is mapped to a query invocation.
Defining event trigger
- event-trigger/@id: a REQUIRED id used to identify the event trigger, used in data service queries.
- event-trigger/language: REQUIRED; currently only XPath is supported as the event trigger language.
- target-topic: the REQUIRED topic to which the event notifications will be published.
- subscriptions: REQUIRED; can be any WS-Eventing compliant endpoint. For example, an SMTP transport can be used to send a message to a mail inbox, where an email address is given as the subscription. Many subscriptions can be defined for a given topic.
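An event trigger sketch (the XPath expression, topic, and subscription address are illustrative assumptions):

```xml
<event-trigger id="employee_added_trigger" language="XPath">
  <expression>boolean(//employee_id)</expression>
  <target-topic>employee/additions</target-topic>
  <subscriptions>
    <subscription>mailto:admin@example.com</subscription>
  </subscriptions>
</event-trigger>
```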
When a data service receives messages, it expects to receive a signed and encrypted message as specified by the security policy stored in the registry of your server. Therefore, as shown below, you can embed the security configurations directly in the .dbs file of the data service by adding the path to the relevant security policy. Please see Apache Rampart and Axis2 documentation on the format of the policy file stored in the registry. You can also use the 'enableSec' element to ensure that Apache Rampart is engaged for the data service.
Sample data service configuration
Given below is a sample data service configuration with queries, resources etc. for your reference:
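A sketch of such a configuration (the datasource details, table, and names are illustrative), showing how the config, query, operation, and resource elements fit together:

```xml
<data name="EmployeeDataService" serviceNamespace="http://example.com/employees">
  <config id="default">
    <property name="driverClassName">com.mysql.jdbc.Driver</property>
    <property name="url">jdbc:mysql://localhost:3306/employees</property>
    <property name="username">dbuser</property>
    <property name="password">secret</property>
  </config>
  <query id="allEmployees" useConfig="default">
    <sql>SELECT employee_id, first_name FROM employees</sql>
    <result element="Employees" rowName="Employee">
      <element name="Id" column="employee_id"/>
      <element name="FirstName" column="first_name"/>
    </result>
  </query>
  <!-- SOAP-style operation -->
  <operation name="getAllEmployees">
    <call-query href="allEmployees"/>
  </operation>
  <!-- REST-style resource -->
  <resource path="employees" method="GET">
    <call-query href="allEmployees"/>
  </resource>
</data>
```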
WSO2 Enterprise Integrator has the capability to run short running stateless business processes as well as long running stateful business processes. However, for long running stateful business processes, you need to use the EI-Business-Process profile that is shipped with the Enterprise Integrator. Business processes can be written using Web Services Business Process Execution Language (WS-BPEL) and Business Process Model and Notation (BPMN) standards. It also has the capability for human task integration.
For a description of the concepts and terminology on business process management, click on the Business Process Management tab.
WSO2 MSF4J (Microservices Framework for Java) is a separate profile that is shipped with WSO2 Enterprise Integrator. This allows developers to quickly get started with developing and running Java microservices. You simply need to annotate your service and deploy it using a single line of code.
For more information on MSF4j services, see the following:
- Developing your first MSF4J service
- Developing MSF4J services using the Spring framework
- Analytics for MSF4J services
For more information on WSO2 MSF4J in GitHub, see the full documentation.
The EI-Business-Process profile shipped with WSO2 EI executes business processes written using either the BPMN 2.0 standard or the WS-BPEL 2.0 standard.
BPMN 2.0 supports business process management for both technical users and business users by providing a notation that is intuitive to business users, yet able to represent complex semantics. With BPMN 2.0, you can model and execute workflows to accomplish your process automation tasks within your SOA.
WS-BPEL is the de-facto standard for composing multiple synchronous and asynchronous web services into collaborative and transactional process flows, which increase the flexibility and agility of your SOA.
A few basic concepts of business process management are briefly described below.
[ Business processes ] [ Process instance ] [ Business Process Execution Language (BPEL) ] [ Orchestration vs. choreography ] [ Abstract and executable processes ] [ Asynchronous and synchronous communication ]
A business process is typically a collection of related and structured activities or tasks that depicts a business use case and produces a specific service or output. A process may have zero or more well-defined inputs and an output. During execution, the business process runs its sub-processes synchronously or asynchronously to produce the final output, and it may interact with both humans and applications.
For example, a banking customer requesting a bank loan is a simple process. The following diagram depicts this process.
Taking the above process as an example, let's look at some of the key workflow components of a typical business process.
Process Initiator: In the 'Bank Loan Request' process, a banking customer is the client who initiates a loan request.
Well-Defined Input: Banking customer provides the inputs required for the initialization of the process. It may contain the personal details of the customer, his financial information, account details, etc.
Request Processing: This is typically a sub process that produces an output internally during the execution of the business process. It analyzes the input data, verifies loan eligibility of the client through the execution of several logical expressions etc.
Human Task: This is where a human interaction is involved in the business process. In this particular example, a bank employee sends an acknowledgement to the bank customer regarding his loan request approval.
Final Output: Sends acknowledgement. This is the final output, which is sent back to the client who initiated the business process.
An instance of a process is a specific example of a process workflow. For example, if a particular process defines a banking customer requesting a bank loan, then an example instance of this process is Chris requesting a loan of USD 50,000 and getting approval for it. Every time a banking customer makes a request for a loan, that request triggers a new process instance in the EI-Business-Process profile, which flows through the elements of the process workflow according to its design.
Business Process Execution Language (BPEL)
BPEL is the industry standard for business process orchestration. It is an XML-based language used for the definition and execution of business, as well as scientific workflows using Web services. In other words, BPEL is used to write business processes by composing Web services together with orchestration. The outcome is a composite Web service.
Although these business processes may interact with humans, the WS-BPEL standard does not specify human interactions. As a result, a business process defined by WS-BPEL alone cannot include human interactions, only interactions with Web services. WSO2 BPS facilitates defining and using human tasks in business processes.
Orchestration vs. choreography
Web services can be composed using two approaches: orchestration and choreography. In orchestration, there is a central director to coordinate the services. In contrast, choreography contains no central director and each contributing service should have an understanding of participant services.
For composing Web services for a business process, orchestration is a better option for reasons such as simpler process management, loose coupling between web services, ease in error handling, standardization, etc.
Abstract and executable processes
Based on the definition of the actual behavior required by a business process, it can be designed in two ways using WS-BPEL: abstract and executable. Abstract processes are intended to hide some operational details of the process. As a result, they do not include executable details like process flows. Executable business processes model the actual implementation of the business process.
An abstract process is denoted under the http://docs.oasis-open.org/wsbpel/2.0/process/abstract namespace and an executable process is denoted under http://docs.oasis-open.org/wsbpel/2.0/process/executable. Additionally, there are syntactical differences between an abstract and an executable BPEL process.
Asynchronous and synchronous communication
BPEL processes can also be categorized based on how they invoke an operation of a partner service: synchronously or asynchronously. It is not possible to use both methods when invoking a partner service's operation, as the choice also depends on the type of the partner service operation.
Asynchronous transmission - Assume a BPEL process invokes a partner service. After the invocation of the partner service, the BPEL process continues with its execution while the partner service performs its operation. The BPEL process then receives a response from the partner service when the partner service has completed.
Synchronous transmission - Assume a BPEL process invokes a partner service. The BPEL process then waits for the partner service's operation to complete and respond. After receiving this response from the partner service, the BPEL process continues its execution flow. This transmission is not applicable to the In-Only operations defined in the WSDL of the partner service.
Usually asynchronous services are used for long-lasting operations and synchronous services for operations that return a result in a relatively short time. Typically, when asynchronous Web services are used, the BPEL process is asynchronous.
In addition to the above terms, the following article in WSO2 Oxygen Tank explains how to deploy a sample service and a BPEL process, establish a link with a Web service, invoke the process from a client, along with related topics to better understand how the WSO2 BPS works overall: http://wso2.org/library/articles/writing-simple-ws-bpel-process-wso2-bps-apache-ode.
BPMN stands for Business Process Modeling Notation and is an executable graphical notation for business processes. With this graphical notation, business analysts can model the processes, and technical personnel can then build executable processes from those models that can be run and monitored by management.
A business process in BPMN is a collection of business activities that is focused on a particular business goal or a use case. A process deployment can have one or more such processes. The following is a sample business process that is graphically visualized using the Activiti Designer Eclipse plugin.
This is a simple user approval process. The process starts with a none start event, followed by two user tasks: the registration form is filled in by a front officer, and the registration is then approved by a user in a managerial position. The process ends in a none end event by approving or rejecting the user. Each of these steps is discussed below.
In order to start a process, a process definition must be deployed in the BPMN engine. Deployed processes can then be started as required. In this scenario, when there is a new arrival, a front officer can start a new process: the none start event creates a new process instance, and the engine executes the process until it reaches a wait state. At that point, a new task has been persisted in the system.
A none start event indicates that no specific trigger (such as a timer or an incoming message) is defined for starting the process; it is started explicitly. There are other start events such as the Timer Start Event, Message Start Event, Signal Start Event, etc.
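In BPMN 2.0 XML, the trigger is expressed as a child event definition of the `startEvent` element. A minimal sketch, in which the ids and the timer expression are illustrative:

```xml
<!-- None start event: no trigger; a user or API call starts the process -->
<startEvent id="start"/>

<!-- Timer start event: the engine starts an instance on a schedule -->
<startEvent id="timerStart">
  <timerEventDefinition>
    <timeCycle>R/P1D</timeCycle> <!-- repeat daily (ISO 8601) -->
  </timerEventDefinition>
</startEvent>
```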
Once the process is started, it is at the ‘Fill Registration Form’ step. This is referred to as a user task in BPMN. User tasks are tasks that must be completed by human users or by a party external to the activity engine. A user task should have authorized users or user groups, so that when a task is created, a candidate user can claim and complete it. In this scenario, the front officer can be assigned the first user task to fill in the registration form.
After the front officer completes the task, the engine will continue and halt at the second user task. This task can be assigned to the manager user group so that any of the managers can claim and complete it. Once a task is claimed, it disappears from the task lists of the other users.
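With the Activiti engine used here, candidate groups are typically declared on the `userTask` element through the Activiti extension namespace. A sketch of the two tasks in this scenario, assuming the `activiti` prefix is declared on the `definitions` root and using illustrative ids and group names:

```xml
<!-- First task: claimable by any user in the front-office group -->
<userTask id="fillRegistrationForm" name="Fill Registration Form"
          activiti:candidateGroups="front-office"/>

<!-- Second task: claimable by any user in the managers group -->
<userTask id="approveUser" name="Approve User"
          activiti:candidateGroups="managers"/>
```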
An end event marks the end of a process instance. In this scenario, the process ends after the approval in a none end event. Since this is a none end event, the engine does nothing other than finish the process.
There are other types of end events, such as the Error End Event, which throws an error, and the Cancel End Event, which cancels the BPMN transaction.
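Sketched in BPMN 2.0 XML, with illustrative ids and error reference:

```xml
<!-- None end event: simply completes the process instance -->
<endEvent id="end"/>

<!-- Error end event: throws a named error when the token reaches it -->
<endEvent id="errorEnd">
  <errorEventDefinition errorRef="registrationError"/>
</endEvent>
```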
A Manual Task is a task that is performed without the help of any business process execution engine or application. It models work that is done by an external person, which the engine does not need to know about. The engine handles the manual task as a pass-through activity, where the process continues automatically when the process execution arrives at it.
For example, the following is a manual task depicting a delivery boy delivering a pizza.
- The user enters the details of the pizza he/she wants to order, including the topping and the size.
- Next, the user confirms the order by entering the amount.
- Then the pizza delivery boy will deliver the pizza to the user: Manual Task.
- The user then confirms the status of the delivery.
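The delivery step above could be sketched in BPMN 2.0 XML as a manual task between the two user tasks, with illustrative ids and names:

```xml
<userTask id="confirmOrder" name="Confirm order amount"/>
<!-- Pass-through for the engine: no task is created or tracked -->
<manualTask id="deliverPizza" name="Deliver pizza"/>
<userTask id="confirmDelivery" name="Confirm delivery status"/>

<sequenceFlow id="flow1" sourceRef="confirmOrder" targetRef="deliverPizza"/>
<sequenceFlow id="flow2" sourceRef="deliverPizza" targetRef="confirmDelivery"/>
```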
The scenario explained here is a very simple one that introduces the basics. There are a number of constructs available in BPMN 2.0 that can address complex business scenarios. The following are a few more useful constructs.
- Gateways that act as decision points. This way, a manager could reject the new user registration, which would return the process to the Fill Registration Form user task.
- Variables, which can store or reference the user's information so that it can be displayed in the approval form provided to managers.
- A service task at the end of the process that sends the report to every shareholder.
- A call activity invokes a subprocess when process execution arrives at the activity. A call activity refers to an independent process that is external to the process definition, whereas a subprocess is embedded in the original process definition. The independent process invoked by a call activity is reusable and can be called from multiple other process definitions.
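The first and last constructs above can be sketched as follows. The ids, condition expressions, and called process key are illustrative; `${approved}` assumes a boolean process variable set by the approval task, and the `xsi` prefix is assumed to be declared on the `definitions` root:

```xml
<!-- Exclusive gateway: route based on the manager's decision -->
<exclusiveGateway id="approvalDecision" name="Approved?"/>
<sequenceFlow id="flowApproved" sourceRef="approvalDecision" targetRef="end">
  <conditionExpression xsi:type="tFormalExpression">${approved}</conditionExpression>
</sequenceFlow>
<sequenceFlow id="flowRejected" sourceRef="approvalDecision"
              targetRef="fillRegistrationForm">
  <conditionExpression xsi:type="tFormalExpression">${!approved}</conditionExpression>
</sequenceFlow>

<!-- Call activity: invokes an independent, reusable process definition -->
<callActivity id="distributeReports" calledElement="reportDistributionProcess"/>
```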