This documentation is for older versions of WSO2 products and may not be relevant now. Please see your respective product documentation for clustering details and configurations.


The following diagram illustrates the fully distributed deployment pattern used for high availability.

| Distributed component | Minimum number of nodes | Description |
|---|---|---|
| Receiver nodes | 2 | For data analytics to take place, the relevant data must first be collected. DAS provides data agents that capture information about the messages flowing through WSO2 ESB, WSO2 Application Server, and other products that use the DAS data publisher. The receiver nodes obtain this data from the data agents, and it is then stored in a datastore, where it is optimized for analysis. |
| Indexer nodes | 2 | A background indexing process fetches the data from the datastore and performs the indexing operations. In a fully distributed, highly available system, these operations are handled by the indexer nodes. |
| Analyzer (Spark) nodes | 2 | The analyzer engine, powered by Apache Spark, analyzes the data according to defined analytic queries. This typically follows a pattern of retrieving data from the datastore, performing a data operation such as an addition, and storing the result back in the datastore. The analyzer operations are performed by the analyzer nodes. |
| Dashboard nodes | 2 | The dashboard queries the datastore for the analyzed data and displays it graphically. This function can be distributed across the dashboard nodes. |
| Storm nodes | 0 | Apache Storm can be used to handle any additional load. Any number of Storm nodes can be used, and they are not required in a fully distributed system unless the load demands them. |

When configuring a fully distributed cluster, carry out the following setup on each DAS node.

  1. Mount all governance registries to a single governance registry, and all config registries to a single config registry.
  2. Point all user stores to a single user store database.
  3. Enable Hazelcast clustering in all nodes and configure them as a single WKA-based cluster.
  4. In all analyzer nodes, set carbon.spark.master.count to 2 (the number of Spark masters in the setup) in the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file. Configure the carbon.spark.master and related properties in that file as described in Spark Configurations.


    In a multi-node DAS cluster that runs in a Red Hat Linux environment, you also need to update the <DAS_HOME>/bin/ file with the following entry so that <DAS_HOME> is exported. This is necessary because symbolic links may not be resolved correctly on this operating system.

    export CARBON_HOME=<symbolic link>

  5. Point the WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB, and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB datasources in the <DAS_HOME>/repository/conf/datasources/analytics-datasources.xml file of all nodes to three common, separate datasources.
  6. If you use other datasource provider types, uncomment the relevant providers at the beginning of that configuration file.
  7. Enable deployment synchronization (dep-sync) in all DAS nodes, and set commit to true in only a single receiver node.
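
As a sketch of the Spark configuration in step 4, the relevant entries in <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf might look like the following. This is a minimal fragment, not a complete configuration; the value "cluster" for carbon.spark.master is an assumption based on typical DAS clustering setups, so verify both properties against the Spark Configurations documentation:

```
# Run the Carbon Spark analytics engine in clustered mode
# (the value "cluster" is an assumption; see Spark Configurations)
carbon.spark.master cluster

# Number of Spark masters in the setup, as set on all analyzer nodes
carbon.spark.master.count 2
```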

When starting the instances, you can provide predefined profiles to start them as receiver nodes, analyzer nodes, or indexer nodes.

| Node Type | Disabled Components | Option |
|---|---|---|
| Receiver Node | AnalyticsEngine, AnalyticsExecution, Indexing, DataPurging, AnalyticsSparkCtx, AnalyticsStats | |
| Indexer Node | AnalyticsExecution, AnalyticsEngine, EventSink, AnalyticsSparkCtx, AnalyticsStats, DataPurging | -indexerNode |
| Analyzer Node | Indexing, EventSink, DataPurging, IndexThrottling, AnalyticsStats | -analyzerNode |

These options can be provided at server startup. For example:
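
As an illustration, an analyzer node could be started as shown below. The wso2server.sh script name is an assumption (use the startup script shipped in <DAS_HOME>/bin); the -analyzerNode option is the one listed in the table above:

```shell
# Start this DAS instance as an analyzer node; this profile disables
# Indexing, EventSink, DataPurging, IndexThrottling and AnalyticsStats
sh <DAS_HOME>/bin/wso2server.sh -analyzerNode
```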