
...

To configure the resource nodes for a fully distributed HA cluster, edit the <SP_HOME>/conf/worker/deployment.yaml file as follows. Uncomment the section under # Sample of deployment.config for Distributed deployment (i.e., remove the # in front of each line), and then perform the following steps in the deployment.config section.
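
For reference, before you edit it, this section of the file appears commented out, similar to the following sketch (the exact contents of the sample in your pack may differ slightly):

Code Block
# Sample of deployment.config for Distributed deployment
#deployment.config:
#  type: distributed
#  httpsInterface:
#    host: localhost
#    port: 9090
#    username: admin
#    password: admin
#  leaderRetryInterval: 5000
#  resourceManagers:
#    - host: localhost
#      port: 9543
#      username: admin
#      password: admin

Removing the leading # characters activates the deployment.config section so that you can edit the values as described in the steps below.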

  1. Uncomment the deployment.config section of the deployment.yaml file, and configure the resource node to communicate with the manager nodes by following the steps below.
    1. In the type field, enter the type of the cluster as distributed.
      type: distributed 
    2. For the httpsInterface parameter, specify the host, port, and user credentials of the resource node you are configuring.

      Info

      The host must be the IP address of the network interface through which the nodes are connected (i.e., the LAN IP). If all the nodes are deployed on the same physical machine, each node must have a separate port.

      e.g., host:localhost, port:9090, username:admin, password:admin
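
      Expressed in the deployment.yaml file, the example above corresponds to the following snippet:

      Code Block
      httpsInterface:
        host: localhost
        port: 9090
        username: admin
        password: admin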

    3. In the leaderRetryInterval parameter, enter the number of milliseconds for which the resource node must keep trying to connect to a manager node. If this time elapses without the resource node connecting to a manager node, the resource node is shut down.
      e.g., leaderRetryInterval: 5000 
    4. In the resourceManagers parameter, specify the hosts, ports and user credentials of the manager nodes to which the resource node must try to connect. If there are multiple managers, a sequence must be specified. 

      The following is a sample deployment configuration for a resource node.

      Code Block
      deployment.config:
        type: distributed             # required in both manager / resource
        httpsInterface:               # required in both manager / resource
          host: 192.168.1.3
          port: 9090
          username: admin             # username of current resource node
          password: admin             # password of current resource node
        leaderRetryInterval: 10000    # only required in worker
        resourceManagers:             # only required in worker
          - host: 192.168.1.1
            port: 9543
            username: admin           # username of manager node
            password: admin           # password of manager node
          - host: 192.168.1.2
            port: 9543
            username: admin           # username of manager node
            password: admin           # password of manager node
      Tip

      If you want to configure this node as a receiver node instead of a resource node, you need to add the following parameter below the type parameter.

      isReceiverNode: true

      This is the only difference between a resource node and a receiver node. The following is the sample configuration of a receiver node.

      Code Block
      deployment.config:
        type: distributed
        isReceiverNode: true
        httpsInterface:
          host: 192.168.1.3
          port: 9090
          username: admin
          password: admin
        leaderRetryInterval: 10000
        resourceManagers:
          - host: 192.168.1.1
            port: 9543
            username: admin           # username of manager node
            password: admin           # password of manager node
          - host: 192.168.1.2
            port: 9543
            username: admin           # username of manager node
            password: admin           # password of manager node
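
      Tip

      Once configured, a resource (or receiver) node is started in the worker profile, for example by running sh <SP_HOME>/bin/worker.sh on Linux, in the same way as a standalone worker node.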
  2. To recover the state of the Siddhi applications deployed in the system in the event of a node failure, state persistence must be enabled for all worker nodes via a shared database. To do this, first define a new datasource for the persistence information under the datasources section of the deployment.yaml file. Note that the database referenced in the jdbcUrl must already exist. The following is a sample datasource configuration for a MySQL database.

    Code Block
     - name: WSO2_PERSISTENCE_DB
       description: The datasource used for the persistence database
       jndiConfig:
         name: jdbc/WSO2_PERSISTENCE_DB
       definition:
         type: RDBMS
         configuration:
           jdbcUrl: jdbc:mysql://localhost:3306/WSO2_PERSISTENCE_DB?useSSL=false
           username: root
           password: root
           driverClassName: com.mysql.jdbc.Driver
           maxPoolSize: 50
           idleTimeout: 60000
           connectionTestQuery: SELECT 1
           validationTimeout: 30000
           isAutoCommit: false

    After this, you need to enable state persistence to use this database by setting org.wso2.carbon.stream.processor.core.persistence.DBPersistenceStore as the persistence store. The following is a sample configuration. For detailed instructions, see Configuring Database and File System State Persistence.

    Code Block
    state.persistence:
      enabled: true                   # enable periodic state persistence
      intervalInMin: 3                # interval (in minutes) between state snapshots
      revisionsToKeep: 2              # number of persisted revisions to retain
      persistenceStore: org.wso2.carbon.stream.processor.core.persistence.DBPersistenceStore
      config:
        datasource: WSO2_PERSISTENCE_DB   # the datasource defined above
        table: PERSISTENCE_TABLE          # table in which state snapshots are stored

...