The default distribution of WSO2 BPS comes with an embedded H2 database as the BPEL engine's persistent storage, along with other settings suitable for a development environment. However, it is recommended that some of these configurations be changed when moving to production. The appropriate settings depend on the number of requests BPS will handle per second, your auditing and monitoring requirements, performance requirements, and the nature of your processes.
The following are the key points to note.
To alter the number of open files allowed for system users, configure the following settings in the
/etc/security/limits.conf file on Linux (be sure to include the leading * character).
Optimal values for these parameters depend on the environment.
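As a starting point, an entry along the following lines is commonly used (the values below are illustrative, not prescriptive):

```
* soft nofile 4096
* hard nofile 65535
```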
To alter the maximum number of processes your user is allowed to run at a given time, configure the following settings in the
/etc/security/limits.conf file on Linux (be sure to include the leading * character). Each Carbon server instance you run can require up to 1024 threads (with the default thread pool configuration). Therefore, you need to increase the nproc value by 1024 for each Carbon server (both hard and soft).
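For example, a commonly used baseline looks like this (illustrative values; size them per the 1024-threads-per-server rule above):

```
* soft nproc 20000
* hard nproc 20000
```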
If one or more worker nodes in a clustered deployment require access to the management console, you need to increase the entity expansion limit as follows in the
<BPS_HOME>/bin/wso2server.bat file (for Windows) or the
<BPS_HOME>/bin/wso2server.sh file (for Linux/Solaris). The default entity expansion limit is 64000.
Tip: This is not included by default in the wso2server.sh file. You must add this in explicitly.
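As a sketch, the limit is passed as a JVM system property in the startup script's argument list (the value 100000 below is an illustrative example):

```sh
# Add to the JVM arguments in <BPS_HOME>/bin/wso2server.sh:
    -DentityExpansionLimit=100000 \
```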
- Memory allocated for BPS can be increased by modifying the
<BPS_HOME>/bin/wso2server.bat file (for Windows) or the
<BPS_HOME>/bin/wso2server.sh file (for Linux/Solaris).
- The default setting for WSO2 ESB 4.6.0 and later is: -Xms256m -Xmx512m -XX:MaxPermSize=256m
- This can be changed for benchmarking as shown in the following example: -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m (for Java 7) or -Xms2048m -Xmx2048m (for Java 8)
Configure an external database server such as MySQL as the persistent storage instead of the embedded H2 database. Although slight performance gains can be seen when running simple BPEL processes on the H2 database, it cannot handle multiple concurrent requests and complex processes with the same efficiency.
JDBC connections are useful when your application requires high throughput.
BPS has three engines: the Apache ODE BPEL processor, the HumanTask engine, and the Activiti BPMN engine. These engines are tightly coupled with the database layer and persist instance data into the database. Thus, for BPS to function properly, you need to allocate enough database connections in the datasource configurations.
Both the Apache ODE BPEL processor and the HumanTask engine share the same BPS datasource and database connections, so for an application running both BPEL and HumanTask we generally recommend allocating 50% of the database connections to each engine.
For example, if you have a total of 100 database connections for a BPEL and HumanTask application, you can use up to 50 database connections for the ODE engine and leave the rest for HumanTask operations. If your application uses only BPEL, you can allocate many more database connections to the ODE engine.
Configure the BPS datasource by editing the
<BPS_HOME>/repository/conf/datasources/bps-datasources.xml file and changing the maxActive value.
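A sketch of the relevant fragment is shown below; the surrounding elements are abbreviated, and the URL and pool values are illustrative assumptions for a MySQL setup:

```xml
<datasource>
    <name>BPS_DS</name>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/bpsdb</url>
            <!-- Size the pool for expected concurrency; 100 is illustrative -->
            <maxActive>100</maxActive>
            <maxWait>60000</maxWait>
        </configuration>
    </definition>
</datasource>
```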
Execution of each BPMN process instance makes multiple database calls. Therefore, when multiple process instances are executed by concurrent threads (i.e., users), multiple database connections are used. Accordingly, the database connection pool has to be configured to provide the required number of connections based on the expected maximum number of concurrent process executions. This can be configured by setting the maxActive parameter in the
<BPS_HOME>/repository/conf/datasources/activiti-datasources.xml file. To avoid failures that may occur due to congestion for database connections, maxActive should be equal to the expected number of concurrent process executions. However, fewer connections may be sufficient depending on the properties of the executed process models (i.e., the number and type of tasks) and the behavior of the processes (i.e., presence of timer events, reaction time of process participants). If the database connection pool size (i.e., maxActive) has to be reduced, do so based on load tests with actual process models and expected process behaviors.
The maximum number of connections allowed for the database connection pool (i.e., maxActive) should not exceed the maximum number of connections (i.e., DB sessions) allowed by the database server. In addition, if the database server is shared with the BPEL runtime or another server, make sure a sufficient number of sessions is available for all shared servers. For example, if the BPMN connection pool needs 100 connections and the BPEL connection pool needs 50 connections, and peak BPMN and BPEL loads are expected at the same time, the number of database sessions should be at least 150.
Configure the Activiti datasource by editing the
<BPS_HOME>/repository/conf/datasources/activiti-datasources.xml file and changing the following.
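As a sketch, the same maxActive parameter applies here; the URL and value below are illustrative assumptions:

```xml
<configuration>
    <url>jdbc:mysql://localhost:3306/activitidb</url>
    <!-- Set to the expected maximum concurrent process executions -->
    <maxActive>100</maxActive>
</configuration>
```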
Also note that, even if you allocate a higher number of database connections for the datasources, performance may not increase as expected. One reason could be that there are not enough database sessions on the database server side. If that is the case, you need to increase the number of sessions allowed by the database server.
ODE scheduler threads
ODE scheduler threads are useful when your application requires high throughput.
In the ODE engine, every scheduler thread is associated with a database connection. So the rule of thumb is: the number of ODE scheduler threads should be less than or equal to the number of database connections allocated for the ODE engine. Otherwise, some threads may not work properly because they cannot acquire a database connection. For example, in an application that uses both BPEL and HumanTask, if you have a total of 100 database connections, you can allocate 50 threads to the ODE scheduler. This guarantees that at any given time, only 50 database connections are acquired by the ODE engine.
Configure this by adding the following to the
<BPS_HOME>/repository/conf/bps.xml file if it isn't there already.
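A sketch of the bps.xml entry is shown below; the exact element name and namespace prefix may vary between BPS versions, so verify it against your bps.xml schema:

```xml
<!-- Limit ODE scheduler threads to the connections allocated to ODE -->
<tns:OdeSchedulerThreadPoolSize>50</tns:OdeSchedulerThreadPoolSize>
```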
Multi-threaded HTTP connection manager
Configure multi-threaded HTTP connection manager connection pool settings to suit your BPEL processes. Typically, the HTTP connection manager should be configured to be in sync with the concurrent HTTP connections in BPS. This is necessary when you have a lot of internal or external service invocations.
There are two settings in the HTTP connection manager: 'max total connections' and 'max total connections per host'. These settings depend on the number of concurrent requests BPS needs to handle and the number of external service calls per process instance. Also, if your processes make many service invocations to localhost (or one particular host), it is necessary to increase the 'max total connections per host' value. Edit the
<BPS_HOME>/repository/conf/bps.xml file to set specific values as shown below.
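A sketch of the bps.xml fragment follows; the values are illustrative and should be sized from your expected concurrency, and the element names should be checked against your bps.xml version:

```xml
<tns:MultithreadedHttpConnectionManagerConfig>
    <!-- Raise this if many invocations target one host (e.g., localhost) -->
    <tns:maxConnectionsPerHost value="100"/>
    <tns:maxTotalConnections value="200"/>
</tns:MultithreadedHttpConnectionManagerConfig>
```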
This configuration is relevant when partner services take a long time to respond. When a partner service is slow, the calling BPEL process's invoke activity fails due to a message exchange timeout; increasing the timeout value avoids this kind of failure. Also note that slow partner services slow down the entire BPEL process, which can in turn cause the client application to time out, so the client application's timeout interval must be increased as well. To do this, configure the
<BPS_HOME>/repository/conf/bps.xml file and the
<BPS_HOME>/repository/conf/axis2/axis2.xml file as shown below.
Here you must increase the default values for the message exchange timeout and the external service invocation timeout. Also set the
SO_TIMEOUT and
CONNECTION_TIMEOUT parameters in the HttpSender configuration, increasing the timeout from the default value to 10 minutes.
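The fragments below are a sketch of these settings (600000 ms = 10 minutes). The axis2.xml HttpSender parameters are standard Axis2 options; the bps.xml message exchange timeout element name should be verified against your BPS version:

```xml
<!-- In <BPS_HOME>/repository/conf/axis2/axis2.xml -->
<transportSender name="http" class="org.apache.axis2.transport.http.CommonsHTTPTransportSender">
    <parameter name="PROTOCOL">HTTP/1.1</parameter>
    <parameter name="SO_TIMEOUT">600000</parameter>
    <parameter name="CONNECTION_TIMEOUT">600000</parameter>
</transportSender>

<!-- In <BPS_HOME>/repository/conf/bps.xml -->
<tns:MexTimeOut value="600000"/>
```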
HumanTask caching is important when you have to deal with a large user store. HumanTasks are tightly coupled with users and user roles/groups, so BPS performs a lot of user store lookups for HumanTask operations. These user store calls can take a considerable amount of time if the user store is large or located remotely, which degrades the performance of the entire HumanTask engine. Caching user and role lookup data on the BPS side reduces these remote user store calls and improves the overall performance of the HumanTask engine.
Enable HumanTask caching in the
Number of HumanTask scheduler threads
This is relevant when you are not using HumanTask deadlines/escalations. HumanTask deadlines and escalations are scheduled tasks executed by the HumanTask scheduler. By default, 50 threads are allocated for the HumanTask scheduler. If you are not using deadlines/escalations, you can lower this value to, for example, 5, which avoids keeping idle threads in the BPS server. Note that you cannot set this to 0, because the HumanTask engine has several internal scheduled tasks to run.
Configure this value in the
BPEL process persistence
Configuring BPEL process persistence is recommended. If a process follows the request-response interaction model, use in-memory processes instead of persistent processes. This decision mainly depends on the specific business use case.
Use process-to-process communication. This reduces the overhead introduced by additional network calls, when calling one BPEL process from another deployed in the same BPS instance.
Configure event-filtering at process and scope level. A lot of database resources can be saved by reducing the number of events generated.
Take precautions when deploying WSO2 BPS in virtualized environments. Random increases in network latency and performance degradation have been observed when running BPS on VMs.
Process hydration and dehydration
One technique to reduce the memory utilization of the BPS engine is process hydration and dehydration. You can configure the hydration/dehydration policy in the
<BPS_HOME>/repository/conf/bps.xml file or define a custom hydration/dehydration policy.
The following example enables the dehydration policy and sets the maximum deployed process count that can exist in memory at a particular time to 100. The maximum age of a process before it is dehydrated is set to 5 minutes.
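A sketch of this bps.xml fragment is shown below (MaxAge is in milliseconds, so 5 minutes is 300000; verify the element names against your bps.xml version):

```xml
<tns:ProcessDehydration value="true" maxCount="100">
    <!-- Dehydrate a process after 5 minutes (300000 ms) -->
    <tns:MaxAge value="300000"/>
</tns:ProcessDehydration>
```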
MaxAge: Sets the maximum age of a process before it is dehydrated.
maxCount: The maximum number of deployed processes that can exist in memory at a particular time.
For performance purposes, a process can be defined as being executed only in-memory. This greatly reduces the number of generated queries and puts far less load on the database. Persistent and non-persistent processes can coexist in WSO2 BPS.
Shown below is an example of declaring a process as in-memory simply by adding an in-memory element in the deploy.xml file.
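The fragment below is a sketch of such a declaration; the process, partner link, and service names are hypothetical placeholders:

```xml
<process name="pns:HelloWorld">
    <!-- Execute this process purely in memory; no instance persistence -->
    <in-memory>true</in-memory>
    <provide partnerLink="helloPartnerLink">
        <service name="pns:HelloService" port="HelloPort"/>
    </provide>
</process>
```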
Configuration details for these optimizations vary in older BPS versions. These optimizations are also supported by Apache ODE, but its configuration differs from that of WSO2 BPS.
BPMN performance tuning
The BPMN runtime frequently accesses the database for persisting and retrieving process instance states. Therefore, the performance of BPMN processes depends heavily on the database server. To get the best performance, it is recommended to have a high-speed network connection between BPS instances and the database server.
The BPMN runtime uses a database-based ID generator for allocating IDs for all persisted entities. In a heavily loaded clustered scenario (i.e., multiple BPS instances with a shared database), database transaction failures may occur if two BPS instances try to allocate IDs at the same time. This can be mitigated by increasing the number of IDs allocated in a single transaction via the "idBlockSize" property. The default ID block size is 2500. It can be increased by adding the following property to the
processEngineConfiguration bean in the
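For example, inside the processEngineConfiguration bean (the value 100000 is illustrative; larger blocks mean fewer allocation transactions):

```xml
<property name="idBlockSize" value="100000"/>
```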
Another option is to configure the
StrongUuidGenerator instead of the database-based ID generator by adding the following property to the
processEngineConfiguration bean in the
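A sketch of the bean property follows, assuming the standard Activiti UUID generator class; since UUIDs are generated locally, no database transaction is needed for ID allocation:

```xml
<property name="idGenerator">
    <bean class="org.activiti.engine.impl.persistence.StrongUuidGenerator"/>
</property>
```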