WSO2 Data Analytics Server is succeeded by WSO2 Stream Processor. To view the latest documentation for WSO2 SP, see WSO2 Stream Processor Documentation.


A multi datacenter setup is a deployment in which two or more DAS High Availability (HA) clusters run in separate datacenters (DCs) for the purpose of disaster recovery.

The global load balancer partitions the request traffic among the datacenters based on a specific condition (e.g., geographical location or IP range). The requests are delivered to the local load balancer of each datacenter and processed within that datacenter's HA setup. The event store and the processed-event store are synced between the DCs via Bi-Directional Replication (BDR).

Setting up a multi DC HA cluster

To set up a multi DC HA cluster, follow the steps below:

  1. Create the required databases. In this scenario, you need three databases: one for storing received events, one for storing processed events, and one for CarbonJDBC (as shown in the above image). Let's set up these databases by following the steps below:

    The following needs to be done in each node in each datacenter.


    1. Download and install MySQL Server.

    2. Download the MySQL JDBC driver.

    3. Unzip the downloaded MySQL driver zipped archive, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <DAS_HOME>/repository/components/lib directory of all the nodes in the cluster.

    4. Enter the following command in a terminal/command window, where username is the username you want to use to access the databases.
      mysql -u username -p 
    5. When prompted, specify the password that will be used to access the databases with the username you specified.
    6. Create three databases named EventStoreDB, ProcessedStoreDB, and CarbonJDBCDB.

      About using MySQL in different operating systems

      For users of Microsoft Windows, when creating the database in MySQL, it is important to specify the character set as latin1. Failure to do this may result in an error (error code: 1709) when starting your cluster. This error occurs in certain versions of MySQL (5.6.x) and is related to UTF-8 encoding. MySQL originally used the latin1 character set by default, which stores each character in a single byte. However, recent versions of MySQL default to UTF-8, which uses up to three bytes per character, to be friendlier to international users; this can push indexed columns in the DAS table schemas over MySQL's index key size limit. Hence, you must use latin1 as the character set, as indicated below in the database creation commands, to avoid this problem. Note that this may result in issues with non-latin characters (such as Hebrew or Japanese). The following is how your database creation command should look.

      mysql> create database <DATABASE_NAME> character set latin1;

      For users of other operating systems, the standard database creation commands will suffice. For these operating systems, the following is how your database creation command should look.

      mysql> create database <DATABASE_NAME>;
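      Putting the note above together with the database names used in this guide, on Windows the three databases from the next step would be created as follows (the SHOW CREATE DATABASE check is an optional verification, not part of the original steps):

      ```sql
      -- Create the three DAS databases with the latin1 character set
      -- to avoid MySQL error 1709 on affected MySQL 5.6.x versions.
      create database EventStoreDB character set latin1;
      create database ProcessedStoreDB character set latin1;
      create database CarbonJDBCDB character set latin1;

      -- Optionally verify the character set that was applied:
      show create database EventStoreDB;
      ```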
    7. Execute the following script against each of the databases you created in the previous step (if you are using MySQL 5.7.x, use the 'mysql5.7.sql' script).
      mysql> source <DAS_HOME>/dbscripts/mysql.sql; 

       The following commands perform steps 6 and 7:
      mysql> create database EventStoreDB;
      mysql> use EventStoreDB;
      mysql> source <DAS_HOME>/dbscripts/mysql.sql;
      mysql> grant all on EventStoreDB.* TO username@localhost identified by "password";
       
      mysql> create database ProcessedStoreDB;
      mysql> use ProcessedStoreDB;
      mysql> source <DAS_HOME>/dbscripts/mysql.sql;
      mysql> grant all on ProcessedStoreDB.* TO username@localhost identified by "password";
      
      
      mysql> create database CarbonJDBCDB;
      mysql> use CarbonJDBCDB;
      mysql> source <DAS_HOME>/dbscripts/mysql.sql;
      mysql> grant all on CarbonJDBCDB.* TO username@localhost identified by "password";
    8. Create the required database tables. The same data needs to be maintained in each datacenter. To ensure that there are no primary key violations, the schema of each database table must include a column for the datacenter ID. This column must be the primary key of each table or a part of the primary key combination.
      e.g.,

      CREATE TABLE IF NOT EXISTS transactions (transactionID INT AUTO_INCREMENT,
          product VARCHAR(255) NOT NULL,
          price INT,
          quantity INT,
          datacenterID VARCHAR(255) NOT NULL,
          PRIMARY KEY (transactionID, datacenterID)
      )  ENGINE=INNODB;
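    To illustrate why the datacenter ID must be part of the primary key combination, consider the same logical record arriving from two datacenters. These sample inserts are illustrative only (the table and values are assumptions for this example, not part of the DAS scripts):

    ```sql
    -- With a composite primary key such as (transactionID, datacenterID),
    -- rows that share a transaction ID but originate from different
    -- datacenters do not collide during replication.
    insert into transactions (transactionID, product, price, quantity, datacenterID)
        values (1, 'widget', 100, 5, 'DC1');
    insert into transactions (transactionID, product, price, quantity, datacenterID)
        values (1, 'widget', 100, 5, 'DC2');

    -- If datacenterID alone were the primary key, the second insert above
    -- would still succeed, but any two rows from the same datacenter would
    -- violate the primary key constraint.
    ```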
  2. Create two DAS HA clusters. Each cluster can be a Minimum HA Deployment or a Fully Distributed Deployment. For this example, let's create two Minimum HA clusters.
