Work in progress!
This Task Scheduling component allows you to define specific tasks that can be invoked periodically. This functionality is used by several WSO2 products.
Follow the instructions given on this page to configure and set up this component for your server. You can find details on how to use this component in the respective product documentation.
The task scheduling component is configured in the tasks-config.xml file (stored in the <PRODUCT_HOME>/repository/conf/etc/ directory). The default values in the tasks-config.xml file ensure that minimal changes are required when running in both standalone and clustered modes. Given below are the settings that you can configure in this file.
Setting the task server mode
You can set the task server mode by using one of the following values for the <taskServerMode> element in the tasks-config.xml file:
- AUTO: This is the default task handling mode. This setting detects whether clustering is enabled in the server and automatically switches to the CLUSTERED task handling mode.
- STANDALONE: This mode is used when the Carbon server is used as a single installation. That is, tasks will be managed locally within the server.
- CLUSTERED: This mode is used when a cluster of Carbon servers is put together. This requires Axis2 clustering to work. With this setting, if one of the servers in the cluster fails, its tasks will be rescheduled on one of the remaining server nodes.
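For reference, the mode is set in the tasks-config.xml file with a fragment such as the following (a minimal sketch; surrounding elements are omitted):

```xml
<!-- tasks-config.xml fragment: task handling mode.
     AUTO (the default) switches to CLUSTERED task handling when
     clustering is enabled in the server, and otherwise manages
     tasks locally as in STANDALONE mode. -->
<taskServerMode>AUTO</taskServerMode>
```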
Find out more about clustering WSO2 products.
Configuring a clustered task server
If you have enabled the CLUSTERED task server mode in step 1, the following configuration elements in the tasks-config.xml file will be effective:
<taskServerCount>: This value specifies the number of nodes in the server cluster, which represents the number of servers that will share the task scheduling. Task scheduling will only begin after the given number of servers is active. For example, consider a situation where ten tasks are saved and scheduled in your product and there are five servers in the cluster. As individual servers become active, we do not want the first active server to schedule all the tasks. Instead, we want all five servers to become active and share the ten tasks among them.
The task server count is set to "2" by default, which indicates that at least two nodes are expected in a clustered setup before task scheduling begins.
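As a sketch, this setting might appear in the tasks-config.xml file as follows (the element name <taskServerCount> is assumed from the surrounding text):

```xml
<!-- tasks-config.xml fragment: number of cluster nodes that must be
     active before task scheduling begins. "2" is the documented default.
     Element name assumed from context. -->
<taskServerCount>2</taskServerCount>
```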
<defaultLocationResolver>: The default location resolver controls how the scheduled tasks are allocated among multiple nodes of a cluster. The possible options are as follows:
- RoundRobinTaskLocationResolver: Cluster nodes are selected on a round-robin basis and the tasks are allocated to them.
- RandomTaskLocationResolver: Cluster nodes are randomly selected and the tasks are allocated to them.
- RuleBasedLocationResolver: This allows you to set criteria for selecting the cluster nodes to which the tasks should be allocated. The [task-type-pattern], [task-name-pattern] and [address-pattern of the server node] can be used as criteria. For example, with this setting, a scheduled task that matches a particular [task-type-pattern] and [task-name-pattern] will be allocated to the server node with a particular [address-pattern]. If multiple server nodes in the cluster match the [address-pattern], the nodes are selected on a round-robin basis. The criteria are specified in the configuration using the <property> element, so you can define multiple properties containing different criteria values.
For example, see the details of the RuleBasedLocationResolver configuration given below.
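The original example configuration is not preserved on this page; the following is a sketch of what such a RuleBasedLocationResolver block might look like. The resolver class name and the sample task types, task names, and address patterns are illustrative assumptions, not values from the original document.

```xml
<!-- Sketch of a RuleBasedLocationResolver configuration in tasks-config.xml.
     Each property value is a comma-separated criterion of the form
     [task-type-pattern],[task-name-pattern],[address-pattern].
     Class name and sample patterns below are assumptions. -->
<defaultLocationResolver>
    <locationResolverClass>org.wso2.carbon.ntask.core.impl.RuleBasedLocationResolver</locationResolverClass>
    <properties>
        <!-- rule-1: tasks of the hypothetical type DATA_TASK named "task1"
             are allocated to nodes whose address matches 10.0.0.* -->
        <property name="rule-1">DATA_TASK,task1,10.0.0.*</property>
        <!-- rule-2: any remaining DATA_TASK is allocated to nodes
             whose address matches 10.0.1.* -->
        <property name="rule-2">DATA_TASK,.*,10.0.1.*</property>
        <!-- rule-5: catch-all rule; any task can be allocated to any node -->
        <property name="rule-5">.*,.*,.*</property>
    </properties>
</defaultLocationResolver>
```

Rules are evaluated in the sequence given by the property names, so the catch-all rule is listed last.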
As shown above, the property names (rule-1, rule-2 and rule-5) define a sequence for the list of properties in the configuration. Scheduled tasks are therefore evaluated against the criteria in each property in sequence order; i.e., rule-1 is checked before rule-2. In other words, a scheduled task will first check whether it matches the criteria in rule-1, and if it does not, it will check rule-2.
RuleBasedLocationResolver allows you to address scenarios where tasks are required to be executed on specific server nodes first. They can then fail over to another set of server nodes if the first (preferred) set is not available.