The latest version for DAS is WSO2 Data Analytics Server 3.1.0. View documentation for the latest release.
WSO2 Data Analytics Server is succeeded by WSO2 Stream Processor. To view the latest documentation for WSO2 SP, see WSO2 Stream Processor Documentation.


This section explains how to migrate data from one database type to another via the DAS Backup/Restore tool. This is demonstrated through a user scenario where data stored in an H2 embedded database is migrated to a MySQL database.

Before you begin:

  • Stop the WSO2 DAS server before starting this procedure. You can restart it once the backup and restoration of data is complete.
  • To test this scenario, you need to download the MySQL JDBC driver JAR and place it in the <DAS_HOME>/repository/components/lib directory. You can download this JAR from here.
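
The driver placement step above can be sketched as follows. Note that the DAS_HOME value and the JAR file name are assumptions; adjust them to match your actual installation path and the connector version you downloaded.

```shell
# Sketch of the driver placement step. DAS_HOME and the JAR name below are
# assumptions; point them at your installation and downloaded connector version.
DAS_HOME="${DAS_HOME:-$HOME/wso2das-3.1.0}"
mkdir -p "$DAS_HOME/repository/components/lib"
# Uncomment once the connector JAR has been downloaded to the current directory:
# cp mysql-connector-java-*.jar "$DAS_HOME/repository/components/lib/"
```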


To migrate the data, follow the procedure below.

Step 1: Create databases in the destination store

In WSO2 DAS, two databases are configured by default to save unprocessed data and processed data. First, you need to recreate these two databases in the new database type, which is MySQL in this scenario. To do this, start the MySQL server and issue the following two commands.

Command                                        Purpose
mysql> create database EVENT_STORE;            This creates the EVENT_STORE database to save the unprocessed data.
mysql> create database PROCESSED_DATA_STORE;   This creates the PROCESSED_DATA_STORE database to save the processed data.


Step 2: Configure the data source to retrieve data

In this scenario, let's use the H2 databases that are configured by default in the <DAS_HOME>/repository/conf/datasources/analytics-datasources.xml file as shown in the following extract.

<datasource>
    <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
    <description>The datasource used for analytics record store</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:h2:repository/database/ANALYTICS_EVENT_STORE;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>org.h2.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <validationQuery>SELECT 1</validationQuery>
            <defaultAutoCommit>false</defaultAutoCommit>
            <initialSize>0</initialSize>
            <testWhileIdle>true</testWhileIdle>
            <minEvictableIdleTimeMillis>4000</minEvictableIdleTimeMillis>
            <defaultTransactionIsolation>READ_COMMITTED</defaultTransactionIsolation>
        </configuration>
    </definition>
</datasource>

<datasource>
     <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
     <description>The datasource used for analytics record store</description>
     <definition type="RDBMS">
         <configuration>
             <url>jdbc:h2:repository/database/ANALYTICS_PROCESSED_DATA_STORE;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
             <username>wso2carbon</username>
             <password>wso2carbon</password>
             <driverClassName>org.h2.Driver</driverClassName>
             <maxActive>50</maxActive>
             <maxWait>60000</maxWait>
             <validationQuery>SELECT 1</validationQuery>
             <defaultAutoCommit>false</defaultAutoCommit>
             <initialSize>0</initialSize>
             <testWhileIdle>true</testWhileIdle>
             <minEvictableIdleTimeMillis>4000</minEvictableIdleTimeMillis>
             <defaultTransactionIsolation>READ_COMMITTED</defaultTransactionIsolation>
         </configuration>
     </definition>
 </datasource>


Step 3: Run data backup script

To back up your data, follow the steps below:

  1. To temporarily store the data that needs to be migrated, create a directory in a preferred location on your machine. In this scenario, let's name it backup.
  2. To back up the data in the directory you created, run the <DAS_HOME>/bin/analytics-backup.sh script by issuing the following command.
    ./analytics-backup.sh -backupRecordStore -dir ~/backup

In this scenario, only the data in the database needs to be backed up. Therefore, the argument used is backupRecordStore. For more information about the arguments used with the Backup/Restore tool, see Backing Up or Restoring Analytics Data.


This generates the following logs in the console.

[main] INFO org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator - My Analytics Node ID: 26bafa65-a23e-4c95-83ee-33f824e05ceb
[main] INFO org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator - Indexing Initialized: STANDALONE | Current Node Indexing: No
[main] INFO org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl - Current Node Data Purging: Yes
Intializing [tenant=-1234] [timefrom='-9223372036854775808'] [timeto='9223372036854775807'] [dir='/Users/milanperera/backup']...
72 table(s) available.
Backing up table 'ORG_WSO2_ANALYTICS_APIM_ABNORMAL_ADITIONAL_DATA_TBL'...
Backing up table 'ORG_WSO2_ANALYTICS_APIM_ABNORMAL_USAGE_ALERT_TBL'...
Backing up table 'ORG_WSO2_ANALYTICS_APIM_PERCENTILE_TBL'...
Backing up table 'ORG_WSO2_ANALYTICS_APIM_AVG_REQ_FOR_X_DAYS_TBL'...
Backing up table 'ORG_WSO2_APIMGT_STATISTICS_WORKFLOW'...
Backing up table 'ORG_WSO2_ANALYTICS_APIM_TIERLIMITHITTINGALERT'...
...
Done.

The data in the database is backed up in the backup directory you created.
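
If you want to confirm the backup output before moving on, a quick check such as the following can help. The path below assumes the example backup directory used in this scenario; the tool writes its per-table backup data under the directory passed via -dir.

```shell
# Sanity check on the backup output; ~/backup is the example directory from
# this scenario. mkdir -p is a no-op if the directory was already created.
BACKUP_DIR="$HOME/backup"
mkdir -p "$BACKUP_DIR"
echo "Backed-up entries: $(ls -1 "$BACKUP_DIR" | wc -l)"
```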

Step 4: Configure destination data source

In this scenario, you are switching to the MySQL database type. Therefore, you need to configure two MySQL datasources in the <DAS_HOME>/repository/conf/datasources/analytics-datasources.xml file as shown in the example below.

<datasource>
    <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
    <description>The datasource used for analytics record store</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/EVENT_STORE?autoReconnect=true</url>
            <username>mysqluser</username>
            <password>mysqlpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <validationQuery>SELECT 1</validationQuery>
            <defaultAutoCommit>false</defaultAutoCommit>
            <initialSize>0</initialSize>
            <testWhileIdle>true</testWhileIdle>
            <minEvictableIdleTimeMillis>4000</minEvictableIdleTimeMillis>
            <defaultTransactionIsolation>READ_COMMITTED</defaultTransactionIsolation>
        </configuration>
    </definition>
</datasource>
<datasource>
     <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
     <description>The datasource used for analytics record store</description>
     <definition type="RDBMS">
         <configuration>
            <url>jdbc:mysql://localhost:3306/PROCESSED_DATA_STORE?autoReconnect=true</url>
            <username>mysqluser</username>
            <password>mysqlpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <validationQuery>SELECT 1</validationQuery>
            <defaultAutoCommit>false</defaultAutoCommit>
            <initialSize>0</initialSize>
            <testWhileIdle>true</testWhileIdle>
            <minEvictableIdleTimeMillis>4000</minEvictableIdleTimeMillis>
            <defaultTransactionIsolation>READ_COMMITTED</defaultTransactionIsolation>
         </configuration>
     </definition>
 </datasource>
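
Before restarting the server, it can be worth confirming that the edited file is still well-formed XML. A minimal sketch, assuming xmllint (from libxml2) is available and using a stand-in DAS_HOME path:

```shell
# Optional well-formedness check on the edited datasources file.
# xmllint and the DAS_HOME default below are assumptions, not requirements.
DS_FILE="${DAS_HOME:-$HOME/wso2das-3.1.0}/repository/conf/datasources/analytics-datasources.xml"
if command -v xmllint >/dev/null 2>&1 && [ -f "$DS_FILE" ]; then
  xmllint --noout "$DS_FILE" && echo "analytics-datasources.xml is well-formed"
else
  echo "xmllint or the datasources file is unavailable; skipping the check"
fi
```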


Step 5: Run the script to restore the data

To restore the data that you previously backed up, run the <DAS_HOME>/bin/analytics-backup.sh script by issuing the following command.

./analytics-backup.sh -restoreRecordStore -dir ~/backup

As a result, a log similar to the following is displayed in the console.

[main] INFO org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator - My Analytics Node ID: 26bafa65-a23e-4c95-83ee-33f824e05ceb
[main] INFO org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator - Indexing Initialized: STANDALONE | Current Node Indexing: No
[main] INFO org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl - Current Node Data Purging: Yes
Intializing [tenant=-1234] [timefrom='-9223372036854775808'] [timeto='9223372036854775807'] [dir='/Users/milanperera/backup']...
102 table(s) available.
Restoring table 'LOGANALYZER'...
Restoring table 'LOGANALYZER_CLASS_LEVEL_ERROR_WEEKLY'...
Restoring table 'LOGANALYZER_MESSAGE_LEVEL_ERROR_DAILY'...
Restoring table 'ORG_WSO2_APIMGT_STATISTICS_PERDAYREQUEST'...
...
done.

Once you restart the WSO2 DAS server and run DAS operations, the data handled is stored in the new MySQL databases for which you configured the datasources. You can also view (e.g., via the DAS UIs) the data that was handled and stored before the new MySQL databases were configured.
