A Siddhi Application is a combination of multiple Siddhi executional elements. A Siddhi executional element can be a Siddhi Query or a Siddhi Partition. When defining a Siddhi application, you can specify the number of parallel instances to be created for each executional element, and how each executional element must be isolated to an SP instance. Based on this, the initial Siddhi application is divided into multiple Siddhi applications and deployed in different SP instances.
Any standalone Siddhi Application can be converted into a distributed Siddhi Application by adding
@dist annotations. By adding these annotations, you can divide the Siddhi Application into multiple execution groups and run them in parallel with multiple instances per group.
A distributed Siddhi application can contain one or more executional elements (Siddhi queries and partitions). The supported annotations are as follows:
Execution group annotation
This annotation specifies the execution group. An execution group is a collection of queries that is executed as a single unit. You can add this annotation at the query level and specify a name for the group. Queries with the same execution group name are considered part of the same group. If you do not specify an execution group name, a system-generated name is used.
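For example, a query can be assigned to an execution group named group-1 as follows (the stream, query, and group names here are illustrative):

define stream TempStream (deviceID long, roomNo int, temp double);

-- This query is executed as part of the execution group 'group-1'.
@info(name = 'avg-temp-query')
@dist(execGroup='group-1')
from TempStream#window.time(2 min)
select avg(temp) as avgTemp, roomNo
insert into AvgTempStream;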
Execution parallelism annotation
This annotation specifies the execution parallelism of an execution group (i.e., the number of instances in which the executional elements of the execution group must be executed in parallel). Parallelism is always defined against an execution group. If no parallelism is specified, the default value of 1 applies.
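For example, the following sketch (names are illustrative) runs the executional elements of group-2 in two parallel instances:

define stream TempStream (deviceID long, roomNo int, temp double);

-- Two parallel instances of this query are created for execution group 'group-2'.
@info(name = 'high-temp-query')
@dist(execGroup='group-2', parallel='2')
from TempStream[temp > 40]
select roomNo, temp
insert into HighTempStream;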
Transport channel creation annotation
This annotation specifies whether Stream Processor managers are allowed to create the Kafka topics that are required for application deployment. By default, this is set to true.
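As a minimal sketch, assuming the app-level annotation is named @App:transportChannelCreationEnabled, automatic topic creation by the managers could be turned off as follows (the required Kafka topics would then have to be created before deployment):

@App:name('wso2-app')
-- Assumed annotation name; prevents the managers from creating the required Kafka topics.
@App:transportChannelCreationEnabled('false')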
Creating a distributed Siddhi application
This section explains how to write distributed Siddhi applications by assigning executional elements to different execution groups.
The following annotations are used when writing a distributed Siddhi application.
All the executional elements with the same execution group are executed in the same Siddhi application. When different execution groups are mentioned within the same distributed Siddhi application, WSO2 SP initiates a separate Siddhi Application per execution group. In each separated Siddhi application, only the executional elements assigned to the relevant execution group are executed.
Executional elements that have no execution group assigned to them are executed in a separate SP instance.
The @dist(parallel='...') annotation specifies the number of instances in which the executional element must be executed in parallel. All the executional elements assigned to a specific execution group (i.e., via the
@dist(execGroup) annotation) must have the same number of parallel instances specified. If there is a mismatch in the parallel instances specified for an execution group, an exception occurs.
When the number of parallel instances to be run is not given for the executional elements assigned to an execution group, only one Siddhi application is initiated for that execution group.
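For example, in the following sketch (query and stream names are illustrative) both queries assigned to group-1 declare the same parallel count of 2; specifying parallel='2' on one and parallel='3' on the other would instead cause an exception at deployment:

define stream TempStream (deviceID long, roomNo int, temp double);

-- Both queries of 'group-1' must declare the same parallel count.
@info(name = 'query-a')
@dist(execGroup='group-1', parallel='2')
from TempStream[temp > 30]
select roomNo, temp
insert into WarmRoomStream;

@info(name = 'query-b')
@dist(execGroup='group-1', parallel='2')
from TempStream[temp > 40]
select roomNo, temp
insert into HotRoomStream;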
User given source parallelism annotation
This specifies the number of parallel receiver instances that should be created for a user given source. In a distributed deployment, user given sources are extracted out as separate passthrough Siddhi applications and deployed on designated receiver nodes. If adequate receiver nodes are not available, the deployment is aborted. If you do not specify a parallel count, it defaults to 1.
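For example, the following source definition (in the same style as the sample below) requests three parallel receiver instances for the passthrough application that is extracted for it:

-- Three parallel receiver instances are requested for this user given source.
@source(type='http', @map(type='json'), @dist(parallel='3'))
define stream TempStream (deviceID long, roomNo int, temp double);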
The following is a sample distributed Siddhi application.
@App:name('wso2-app')

@source(type='http', @map(type='json'), @dist(parallel='3'))
define stream TempStream (deviceID long, roomNo int, temp double);

@source(type='http', @map(type='json'), @dist(parallel='3'))
define stream RegulatorStream (deviceID long, roomNo int, isOn bool);

@info(name = 'query-1')
@dist(execGroup='group-1')
from TempStream#window.time(2 min)
select avg(temp) as avgTemp, roomNo, deviceID
insert all events into AvgTempStream;

@info(name = 'query-2')
@dist(execGroup='group-1')
from TempStream[temp > 30.0]#window.time(1 min) as T
  join RegulatorStream[isOn == false]#window.length(1) as R
  on T.roomNo == R.roomNo
select T.roomNo, R.deviceID, 'start' as action
insert into RegulatorActionStream;

@info(name = 'query-3')
@dist(execGroup='group-1')
from every( e1=TempStream ) -> e2=TempStream[e1.roomNo == roomNo and (e1.temp + 5) <= temp]
  within 10 min
select e1.roomNo, e1.temp as initialTemp, e2.temp as finalTemp
insert into AlertStream;

@info(name = 'query-4')
@dist(execGroup='group-2', parallel='3')
from TempStream[(roomNo >= 100 and roomNo < 110) and temp > 40]
select roomNo, temp
insert into HighTempStream;

@info(name = 'query-5')
@dist(execGroup='group-3', parallel='2')
partition with ( deviceID of TempStream )
begin
  from TempStream#window.time(1 min)
  select roomNo, deviceID, temp, avg(temp) as avgTemp
  insert into #AvgTempStream;

  from #AvgTempStream[avgTemp > 20]#window.length(10)
  select roomNo, deviceID, max(temp) as maxTemp
  insert into deviceTempStream;
end;
When the above Siddhi application is deployed, it is executed as described below.
In this Siddhi application, there are two user given sources. An execution group is created per source via a passthrough query that fetches data from the user given source and inserts it into Kafka. The details of these execution groups are as follows:
wso2-app-passthrough-xxxxx-1 (auto generated query)
There are three user-defined groups.
group-1 is defined without specifying parallelism. Therefore, the default parallelism of 1 applies.
- Details of group-1 are as follows:
Group name: wso2-app-group-1
Queries included: query-1, query-2, query-3
- Details of group-2 are as follows:
Group name: wso2-app-group-2
Queries included: query-4
- Details of group-3 are as follows:
Group name: wso2-app-group-3
Queries included: query-5
All of these groups are deployed on worker nodes with matching parallelism as separate parallel Siddhi applications. Failure to deploy any of the groups results in the whole deployment being aborted.