
...

  1. In the cluster.config section, make the following changes.
    1. To enable the cluster mode, set the enabled property to true.
    2. To cluster all the manager nodes together, specify the same group ID for all the nodes (e.g., groupId: group-1).
    3. Enter the ID of the class that defines the coordination strategy for the cluster as shown in the example below.
      e.g., coordinationStrategyClass: org.wso2.carbon.cluster.coordinator.rdbms.RDBMSCoordinationStrategy
  2. In the strategyConfig section of cluster.config, enter information for the required parameters as follows.
    1. Enter the name of the datasource shared by the nodes in the cluster as shown in the example below. Data handled by the cluster is persisted in this datasource.
      datasource: SP_MGT_DB

      Info

      The SP_MGT_DB datasource is configured to use an H2 database by default. You must create a MySQL database and then configure this datasource in the <SP_HOME>/conf/manager/deployment.yaml file of the required manager. The following is a sample configuration.

      Code Block
        - name: SP_MGT_DB
          description: The MySQL datasource used for Cluster Coordination
          # JNDI mapping of a data source
          jndiConfig:
             name: jdbc/WSO2ClusterDB
          # data source definition
          definition:
             # data source type
             type: RDBMS
             # data source configuration
             configuration:
               jdbcUrl: 'jdbc:mysql://<host>:<port>/<database_name>?useSSL=false'
               username: <Username_Here>
               password: '<Password_Here>'
               driverClassName: com.mysql.jdbc.Driver
               maxPoolSize: 50
               idleTimeout: 60000
               connectionTestQuery: SELECT 1
               validationTimeout: 30000
               isAutoCommit: false
    2. Specify the time interval (in milliseconds) at which heartbeat pulses should be sent within the manager cluster to indicate that a manager node is in an active state, as shown in the example below.
      heartbeatInterval: 500
    3. Specify the number of times the heartbeat pulse can be unavailable at the specified time interval before a manager node is considered inactive, as shown in the example below. A value of four means that if a manager node fails to send four consecutive heartbeat pulses, it is identified as unresponsive and another manager node acts as the active node in the manager cluster.
      heartbeatMaxRetry: 4
    4. Specify the time interval (in milliseconds) at which each node should listen for changes that occur in the cluster as shown in the example below.
      eventPollingInterval: 1000
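
      The following is an example of a cluster.config section assembled from the sample values used in the steps above. Adjust the values to match your environment.

      Code Block
      cluster.config:
        enabled: true
        groupId: group-1
        coordinationStrategyClass: org.wso2.carbon.cluster.coordinator.rdbms.RDBMSCoordinationStrategy
        strategyConfig:
          datasource: SP_MGT_DB
          heartbeatInterval: 500
          heartbeatMaxRetry: 4
          eventPollingInterval: 1000
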
  3. In the deployment.config section, enter information as follows:
    1. In the type field, enter the type of the cluster as distributed
      type: distributed    
    2. For the httpsInterface parameter, specify the host and the port of the node.

      Info

      Host should be the IP of the network interface through which the nodes are connected (i.e., the LAN IP). Each node should have a separate port if deployed on the same physical machine.

      e.g., host:localhost, port:9543

    3. Specify the time interval (in milliseconds) at which resource nodes connected to this manager should send heartbeat pulses to indicate that they are in a working state as shown in the example below.
      e.g., heartbeatInterval: 2000
    4. Specify the number of times a resource node's heartbeat can be missed before the manager node identifies that resource node as unresponsive. According to the example below, if a resource node fails to send four consecutive heartbeat pulses, it is recognized as unresponsive, and the Siddhi applications deployed in that node are rescheduled to another available resource node.
      e.g., heartbeatMaxRetry: 4

    5. In the minResourceCount parameter, specify the minimum number of resource nodes required to operate the distributed setup. Siddhi applications are not deployed when the number of available resource nodes is less than the number specified here. The default value is 1.

    6. If you are using NATS as the messaging layer, specify the NATS server URLs used by the cluster in the natsServerUrl parameter as a comma-separated list.
      e.g., It should be given in the format <host_1>:<port_1>, <host_2>:<port_2>
      In addition, specify the ID of the cluster created in the NATS server via the clusterId property.
    7. If you are using Kafka as the messaging layer, specify the Kafka server URLs used by the cluster in the bootstrapURLs parameter as a comma-separated list.
      e.g., It should be given in the format <host_1>:<port_1>, <host_2>:<port_2>

    8. In the zooKeeperURLs parameter under the zooKeeperConfig section (required only when Kafka is used as the messaging layer), specify the server URL of the ZooKeeper of the cluster in the format given below:
      <ZOOKEEPER_HOST>:<ZOOKEEPER_PORT>

      The following is an example of a deployment.config section configured as described above.


      NATS Config

      Code Block
      # Deployment Configuration for Distributed Deployment
      deployment.config:
        type: distributed
        httpsInterface:
          host: localhost
          port: 9543
        heartbeatInterval: 2000
        heartbeatMaxRetry: 4
        datasource: SP_MGT_DB           # define a mysql datasource in datasources and refer it from here.
        minResourceCount: 1
        natsServerUrl: nats://localhost:4222
        clusterId: test-cluster

      Kafka Config

      Code Block
      # Deployment Configuration for Distributed Deployment
      deployment.config:
        type: distributed
        httpsInterface:
          host: localhost
          port: 9543
        heartbeatInterval: 2000
        heartbeatMaxRetry: 4
        datasource: SP_MGT_DB           # define a mysql datasource in datasources and refer it from here.
        minResourceCount: 1
        bootstrapURLs: localhost:9092   # kafka urls (only required in manager)
        zooKeeperConfig:
          zooKeeperURLs: localhost:2181   # zookeeper urls
          connectionTimeout: 10000
          sessionTimeout: 10000
Configure resource nodes

To configure the resource nodes for a fully distributed HA cluster, edit the <SP_HOME>/conf/worker/deployment.yaml file as follows. First, uncomment (i.e., remove the # in front of each line of) the section under # Sample of deployment.config for Distributed deployment. Then perform the following steps in the deployment.config section.
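
For reference, the following is a minimal sketch of what the uncommented deployment.config section of a resource node looks like. The host and port values are placeholders, and the resource-specific parameters are configured through the steps that follow.

Code Block
# Sample of deployment.config for Distributed deployment
deployment.config:
  type: distributed        # resource nodes also use the distributed cluster type
  httpsInterface:          # host and port of this resource node (use the LAN IP)
    host: <resource_host>
    port: <resource_port>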

...