To configure the resource nodes for a fully distributed HA cluster, edit the `<SP_HOME>/conf/worker/deployment.yaml` file as follows. Uncomment the `deployment.config` section (i.e., remove the `#` at the beginning of each line) under `# Sample of deployment.config for Distributed deployment`, as illustrated in the sketch below. Then configure the resource nodes to communicate with the manager node by following the steps given after the sketch.
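As a rough sketch of what this uncommenting looks like (the exact contents of the shipped sample section vary by product version, so treat the field values here as placeholders):

```yaml
# Before: the sample section ships commented out in <SP_HOME>/conf/worker/deployment.yaml.
# Sample of deployment.config for Distributed deployment
#deployment.config:
#  type: distributed
#  httpsInterface:
#    host: localhost
#    port: 9090

# After: remove the leading '#' from each line so the section takes effect.
deployment.config:
  type: distributed
  httpsInterface:
    host: localhost
    port: 9090
```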
- In the `type` field, enter the type of the cluster as `distributed` (i.e., `type: distributed`).
- In the `httpsInterface` parameter, specify the host, port, and user credentials of the resource node being configured (e.g., `host: localhost`, `port: 9090`, `username: admin`, `password: admin`). The host must be the IP of the network interface through which the nodes are connected (i.e., the LAN IP). If all the nodes are deployed on the same physical machine, each node must have a separate port.
- In the `leaderRetryInterval` parameter, enter the number of milliseconds for which the resource node must keep retrying to connect to a manager node. If this interval elapses without the resource node connecting to a manager node, the resource node is shut down.
- In the `resourceManagers` parameter, specify the hosts, ports, and user credentials of the manager nodes to which the resource node must try to connect. If there are multiple managers, specify them as a sequence.
The following is a sample deployment configuration for a resource node:
```yaml
deployment.config:
  type: distributed           # required in both manager / resource
  httpsInterface:             # required in both manager / resource
    host: 192.168.1.3
    port: 9090
    username: admin           # username of current resource node
    password: admin           # password of current resource node
  leaderRetryInterval: 10000  # only required in worker
  resourceManagers:           # only required in worker
    - host: 192.168.1.1
      port: 9543
      username: admin         # username of manager node
      password: admin         # password of manager node
    - host: 192.168.1.2
      port: 9543
      username: admin         # username of manager node
      password: admin         # password of manager node
```
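For example, if you run a second resource node on the same physical machine as the one above, only its `httpsInterface` section needs to differ, per the earlier note about separate ports. The following is a minimal sketch reusing the sample values; the port `9091` is an arbitrary illustrative choice:

```yaml
deployment.config:
  type: distributed
  httpsInterface:
    host: 192.168.1.3   # same machine as the first resource node
    port: 9091          # each node on the same machine needs its own port
    username: admin
    password: admin
  leaderRetryInterval: 10000
  resourceManagers:     # the manager list is the same on every resource node
    - host: 192.168.1.1
      port: 9543
      username: admin
      password: admin
    - host: 192.168.1.2
      port: 9543
      username: admin
      password: admin
```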
If you want to configure this node as a receiver node instead of a resource node, add the following parameter below the `type` field:

```yaml
isReceiverNode: true
```
This is the only difference between a resource node and a receiver node. The following is a sample configuration for a receiver node:
```yaml
deployment.config:
  type: distributed
  isReceiverNode: true
  httpsInterface:
    host: 192.168.1.3
    port: 9090
    username: admin
    password: admin
  leaderRetryInterval: 10000
  resourceManagers:
    - host: 192.168.1.1
      port: 9543
      username: admin   # username of manager node
      password: admin   # password of manager node
    - host: 192.168.1.2
      port: 9543
      username: admin   # username of manager node
      password: admin   # password of manager node
```
To recover the state of the Siddhi applications deployed in the system in the event of a node failure, state persistence must be enabled for all worker nodes. This is done via database state persistence, with all nodes sharing a common database. To do this, first define a new datasource for the persistence information under the `datasources` section of the `deployment.yaml` file. The following is a sample datasource configuration based on a MySQL database:
```yaml
- name: WSO2_PERSISTENCE_DB
  description: The datasource used for state persistence
  jndiConfig:
    name: jdbc/WSO2_PERSISTENCE_DB
  definition:
    type: RDBMS
    configuration:
      jdbcUrl: jdbc:mysql://localhost:3306/WSO2_PERSISTENCE_DB?useSSL=false
      username: root
      password: root
      driverClassName: com.mysql.jdbc.Driver
      maxPoolSize: 50
      idleTimeout: 60000
      connectionTestQuery: SELECT 1
      validationTimeout: 30000
      isAutoCommit: false
```
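Note that this entry is a list item: in a typical SP 4.x `deployment.yaml`, datasources are defined as a list under the `wso2.datasources` namespace, so the new entry is appended alongside the datasources that ship with the product. The following is a sketch assuming those standard section names; verify them against your own file. The MySQL JDBC driver must also be available to the server runtime (typically by adding it as an OSGi bundle under `<SP_HOME>/lib`; see the product documentation for the exact procedure).

```yaml
wso2.datasources:
  dataSources:
    # ... datasources that ship with the product ...
    - name: WSO2_PERSISTENCE_DB
      # (remaining keys exactly as in the sample above)
```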
After this, enable state persistence pointing to this database by specifying `org.wso2.carbon.stream.processor.core.persistence.DBPersistenceStore` as the persistence store. The following is a sample configuration. For detailed instructions, see Configuring Database and File System State Persistence.
```yaml
state.persistence:
  enabled: true
  intervalInMin: 3
  revisionsToKeep: 2
  persistenceStore: org.wso2.carbon.stream.processor.core.persistence.DBPersistenceStore
  config:
    datasource: WSO2_PERSISTENCE_DB
    table: PERSISTENCE_TABLE
```