This documentation is for WSO2 Application Server version 4.1.2. View documentation for the latest release.


This document describes the steps you need to follow to set up a working WSO2 Application Server cluster in your environment.

Clustering the Application Server has two different aspects:

Configuration Sharing Across the Cluster Nodes

Configuration sharing is done using the WSO2 Governance Registry. All the Application Server nodes in the cluster point to one instance of the WSO2 Governance Registry, through which the configuration is shared.

WSO2 Governance Registry consists of three registry spaces. Also see Registry.

  • Local Registry - Local to each running instance of a product.
  • Configuration Registry - Shared among all the instances in a cluster of one product.
  • Governance Registry - Centrally managed and shared across multiple products and multiple instances of those products.

Each instance uses its own local registry, while the configuration registry of every Application Server instance is mounted to the same configuration registry. The governance registry is shared among the Governance Registry and Application Server instances.

Setting Up WSO2 Application Server Cluster with Governance Registry

After extraction of the WSO2 Application Server distribution, configure the database.

Both the Governance Registry and the Application Server ship with an embedded H2 database. However, MySQL is used for clustering purposes.

In the following setup, the Governance Registry runs on a MySQL database. That means its local, config, and governance registries are stored in a MySQL database, and its user manager tables are inside the same database.

In the Application Server instances, the master and slave each use a unique local registry, while both configuration registries are mounted to the same collection of the Governance Registry.

All the Application Server instances in the cluster share a common database for the user manager.

Tip

If you are using an LDAP user store, you do not have to use a database for the user manager.

Tip

For configurations with different databases, refer to the WSO2 Governance Registry documentation at http://docs.wso2.org/display/Governance/Governance+Registry+Documentation.

Application Server Cluster Database Configuration

Instance Name         Registry Space   MySQL Database Name / Mounted Path
--------------------  ---------------  ----------------------------------
AS Master (x.x.x.x)   local            amamaster
                      config           amagregdb /_system/asConfig
                      governance       amagregdb /_system/governance
                      user-manager     amausermgtdb
AS Slave1 (x.x.x.x)   local            amaslave1
                      config           amagregdb /_system/asConfig
                      governance       amagregdb /_system/governance
                      user-manager     amausermgtdb
GREG (x.x.x.x)        All              amagregdb

Tip

In the above table, there are four separate databases with the following names:

  • amagregdb
  • amausermgtdb
  • amaslave1
  • amamaster

If your databases are hosted remotely, configure the MySQL instance to accept remote connections.

Running Multiple Instances of WSO2 Carbon Servers on the Same Machine

If you are running multiple instances of the same or different WSO2 products on one machine, you need to configure the ports for each instance. You can do this by setting the port offset in $CARBON_HOME/repository/conf/carbon.xml.

For example, Offset=2 and HTTPS port=9443 result in an effective HTTPS port of 9445.
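The offset described above is set in the Ports section of carbon.xml; a minimal sketch, assuming the default layout of that file:

```xml
<!-- $CARBON_HOME/repository/conf/carbon.xml -->
<Ports>
    <!-- An offset of 2 shifts every default port by 2: HTTPS 9443 becomes 9445 -->
    <Offset>2</Offset>
</Ports>
```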

Configuring Governance Registry

1. Change the configuration in registry.xml and user-mgt.xml.

1.1. Open the GREG_HOME/repository/conf/registry.xml file, where GREG_HOME refers to the Governance Registry home directory throughout this document. Remove the original dbConfig element and add a database configuration for your MySQL database to the registry.xml file.

Note

IP addresses and database URLs have to change according to your setup.
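The dbConfig element for the amagregdb database might look like the following sketch. The IP address, port, and the regadmin/regadmin credentials are placeholders you must replace with your own values:

```xml
<!-- GREG_HOME/repository/conf/registry.xml -->
<currentDBConfig>wso2registry</currentDBConfig>
<dbConfig name="wso2registry">
    <!-- Replace host, port, and credentials with your own values -->
    <url>jdbc:mysql://x.x.x.x:3306/amagregdb</url>
    <userName>regadmin</userName>
    <password>regadmin</password>
    <driverName>com.mysql.jdbc.Driver</driverName>
    <maxActive>80</maxActive>
    <maxWait>60000</maxWait>
    <minIdle>5</minIdle>
</dbConfig>
```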

1.2. Open the GREG_HOME/repository/conf/user-mgt.xml file and change the database configuration as shown below.

Note

IP addresses and URLs have to change according to your setup.
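The database connection properties in user-mgt.xml might look like the following sketch. Since the Governance Registry keeps its user manager tables in the same database, it points to amagregdb; the credentials are placeholders:

```xml
<!-- GREG_HOME/repository/conf/user-mgt.xml (inside the Configuration element) -->
<Property name="url">jdbc:mysql://x.x.x.x:3306/amagregdb</Property>
<Property name="userName">regadmin</Property>
<Property name="password">regadmin</Property>
<Property name="driverName">com.mysql.jdbc.Driver</Property>
<Property name="maxActive">50</Property>
<Property name="maxWait">60000</Property>
<Property name="minIdle">5</Property>
```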

1.3. Copy the MySQL JDBC driver library to the GREG_HOME/repository/components/lib directory, since the MySQL driver is not shipped with the Governance Registry by default.

2. In the above configuration, everything points to the same database, amagregdb. Now start the Governance Registry instance with the -Dsetup option.

This creates the tables for the user manager database and the registry database.

Tip

Pick the database configuration from the currentDBConfig element in registry.xml. There may be a number of database configurations in your registry.xml (added to perform mounting from other databases), but only one of them is set as currentDBConfig.

3. After starting the Governance Registry instance successfully, create the /_system/asConfig collection from the Governance Registry UI. This collection will be used by the Application Server instances as their mounted registry.

4. To configure the Application Server master node, change the default database configuration: point its local registry to amamaster, point the user manager database to amausermgtdb, and mount its configuration and governance registries from the Governance Registry database.

4.1. Add local registry configuration and user-manager configuration.

4.1.1. Open the AS_HOME/repository/conf/registry.xml file.                                       
4.1.2. Remove the original dbConfig element.                                                 
4.1.3. Add the database configuration shown below.

Note

IP addresses and database URLs have to change according to your setup.
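The local registry configuration for the master node might look like the following sketch, pointing to the amamaster database from the table above; host and credentials are placeholders:

```xml
<!-- AS_HOME/repository/conf/registry.xml -->
<currentDBConfig>wso2registry</currentDBConfig>
<dbConfig name="wso2registry">
    <!-- Master node's own local registry database -->
    <url>jdbc:mysql://x.x.x.x:3306/amamaster</url>
    <userName>regadmin</userName>
    <password>regadmin</password>
    <driverName>com.mysql.jdbc.Driver</driverName>
    <maxActive>80</maxActive>
    <maxWait>60000</maxWait>
    <minIdle>5</minIdle>
</dbConfig>
```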

4.1.4. Open the <AS_HOME>/repository/conf/user-mgt.xml file and change the database configuration as shown below.

Note

IP addresses and URLs have to change according to your setup.
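The user manager configuration for the master node points to the shared amausermgtdb database; a sketch with placeholder credentials:

```xml
<!-- AS_HOME/repository/conf/user-mgt.xml (inside the Configuration element) -->
<Property name="url">jdbc:mysql://x.x.x.x:3306/amausermgtdb</Property>
<Property name="userName">regadmin</Property>
<Property name="password">regadmin</Property>
<Property name="driverName">com.mysql.jdbc.Driver</Property>
<Property name="maxActive">50</Property>
<Property name="maxWait">60000</Property>
<Property name="minIdle">5</Property>
```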

4.2. To configure the mounting, additional parameters must be added to the registry.xml file.

4.2.1. Open the <AS_HOME>/repository/conf/registry.xml file.                               
4.2.2. Remove the original dbConfig element.                                               
4.2.3. Add the following database configuration as shown below.

Note

IP addresses and database URLs have to change according to your setup.

4.2.4. Change the remoteInstance URL according to the configuration of the machine running the Governance Registry.

Note

InstanceIds, id and dbConfig elements should be mapped properly if you are using different names for them.
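The mounting parameters referred to in steps 4.2.3 and 4.2.4 might look like the following sketch. The dbConfig name (mounted_registry) and instance id (instanceid) are assumed placeholders; the mount target paths come from the table above:

```xml
<!-- <AS_HOME>/repository/conf/registry.xml -->
<!-- Database configuration pointing to the shared Governance Registry database -->
<dbConfig name="mounted_registry">
    <url>jdbc:mysql://x.x.x.x:3306/amagregdb</url>
    <userName>regadmin</userName>
    <password>regadmin</password>
    <driverName>com.mysql.jdbc.Driver</driverName>
</dbConfig>

<!-- Remote instance pointing to the running Governance Registry -->
<remoteInstance url="https://x.x.x.x:9443/registry">
    <id>instanceid</id>
    <dbConfig>mounted_registry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<!-- Mount the config and governance spaces from the shared registry -->
<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/asConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```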

4.2.5. Copy MySQL jdbc driver library to <AS_HOME>/repository/components/lib directory.

5. To configure the AS slave node, change the default database configuration: point its local registry to amaslave1, point the user manager database to amausermgtdb, and mount its configuration and governance registries from the Governance Registry database.

5.1. Add local registry configuration and user-manager configuration.

5.1.1. Open the <AS_SLAVE_HOME>/repository/conf/registry.xml file.                           
5.1.2. Remove the original dbConfig element.                                                 
5.1.3. Add the database configuration shown below.

Note

IP addresses and database URLs have to change according to your setup.
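The slave node's local registry configuration is the same shape as the master's, but points to the amaslave1 database; a sketch with placeholder credentials:

```xml
<!-- <AS_SLAVE_HOME>/repository/conf/registry.xml -->
<currentDBConfig>wso2registry</currentDBConfig>
<dbConfig name="wso2registry">
    <!-- Slave node's own local registry database -->
    <url>jdbc:mysql://x.x.x.x:3306/amaslave1</url>
    <userName>regadmin</userName>
    <password>regadmin</password>
    <driverName>com.mysql.jdbc.Driver</driverName>
    <maxActive>80</maxActive>
    <maxWait>60000</maxWait>
    <minIdle>5</minIdle>
</dbConfig>
```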

5.1.4. Open the <AS_SLAVE_HOME>/repository/conf/user-mgt.xml file and change the database configuration as shown below.

Note

IP addresses and URLs have to change according to your setup. Note that the mode is changed to readOnly in the database configuration here.

5.1.5. Change the remoteInstance URL according to the configuration of the machine running the Governance Registry.

Note

InstanceIds, id and dbConfig elements should be mapped properly if you are using different names for them.
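On the slave, the remote instance configuration mirrors the master's except that the shared registry is accessed read-only; a sketch, with the same placeholder names as before:

```xml
<!-- <AS_SLAVE_HOME>/repository/conf/registry.xml -->
<remoteInstance url="https://x.x.x.x:9443/registry">
    <id>instanceid</id>
    <dbConfig>mounted_registry</dbConfig>
    <!-- Slave nodes mount the shared registry in read-only mode -->
    <readOnly>true</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
```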

Note

Compared with the master node, only the local registry configuration changes; all other configurations are the same as the master node. However, the registry access mode in the mount is readOnly, whereas the local registry configuration must remain read-write since it is specific to each node.

5.2. Copy MySQL jdbc driver library to <AS_SLAVE_HOME>/repository/components/lib directory.

6. Load the registry database schema into the amaslave1 database before starting the slave AS node. You can find the required SQL script in the <AS_SLAVE_HOME>/dbscripts directory.

7. Run wso2server.sh or wso2server.bat of the master node (depending on your platform) with the -Dsetup option at first startup to create the database tables: CARBON_HOME/bin/wso2server.sh -Dsetup. After the database tables have been created, do not use the -Dsetup option again.

Tip

You do not need to specify the -Dsetup option for the AS slave node because the DB schema was created in the previous step.

Note

-Dsetup does not work on the slave node because the user manager database is shared, and the -Dsetup option also tries to create the user manager schema at startup.

8. After setting up the Application Server pointing to a single Registry, there are two options for sharing the configuration.

  • Manual configuration sharing - In this mode, the Application Servers can be configured to load the configuration from the Registry instead of the file system. Configuration loading happens only when the Application Server starts up, so if a change is made on the master AS node, the other nodes have to be restarted to pick up the new configuration. Many users prefer this mode because it guarantees a consistent configuration at startup, and the configuration does not change while the Application Server is running.

Tip

By default, Application Server loads the configuration from the File System.

If the configuration has to be loaded from the Registry, following configuration has to be uncommented and changed in the carbon.xml.

Note

LoadFromRegistry parameter has to be changed to true in the slave nodes.
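Based on the parameter name mentioned in the note above, the uncommented entry in carbon.xml might look like the following; the exact position of the element inside carbon.xml is an assumption, so check the commented-out template in your own file:

```xml
<!-- $CARBON_HOME/repository/conf/carbon.xml -->
<!-- Load the server configuration from the registry instead of the file system -->
<LoadFromRegistry>true</LoadFromRegistry>
```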

  • Deployment Synchronizer - The second approach is to use the deployment synchronizer shipped with the Application Server. With the deployment synchronizer, configurations can be updated on the slave nodes at runtime:
    • Change the configuration on the master node.
    • Check in that configuration to the registry.
    • Check out the configuration from the slave nodes.

Deployment synchronizer can be used to synchronize the main repository of a Carbon server with a collection in the registry. This feature is sometimes also referred to as the registry-based repository synchronizer. Deployment synchronizer has a number of use cases:

  • Maintaining an up-to-date backup of the Carbon repository.
  • Sharing a single Carbon repository among multiple servers (through a shared registry).
  • Enforcing artifact updates in the registry to be deployed on a server at runtime.

In this scenario, an Application Server cluster of two nodes shares the same configuration registry. With the deployment synchronizer, it is possible to keep all the Application Server nodes in the cluster in sync through the shared registry. The master AS node uploads its local repository to the registry using the deployment synchronizer. The slave AS nodes can then download the same repository from the registry and deploy it locally.

To support this use case, the synchronizer has to run in auto-commit mode on the master node. In auto-commit mode, it periodically uploads the changed artifacts in the local repository to the registry.

Similarly, slave nodes should run the synchronizer in auto-checkout mode. If needed, registry eventing can be employed to trigger the checkout operations so that a checkout is made only when an artifact has changed in the shared registry.

Note

Carbon repository is located in the repository/deployment/server directory by default. Once enabled, deployment synchronizer uploads the contents of this directory to a collection named repository/deployment/server in the configuration registry (this is configurable).

Runtime State Replication

WSO2 Application Server flows are stateless for the most part, but in rare deployments users have requested to share some runtime information among the cluster nodes. For example, caching can be used with runtime state sharing.

Runtime state replication is based on the Tribes group management system. It provides group membership handling and group communication for clustered Carbon Server instances. Although the WSO2 Application Server is shipped with this built-in Tribes-based implementation, other clustering implementations based on different group management systems can be plugged in easily. The only thing you have to do is to implement a set of clustering interfaces provided by Carbon Server.

AS clustering currently does not support distributed locking for session data replication. Therefore, you have to deploy primary-backup clusters for stateful services. This restriction does not apply to stateless services, and the user can direct client requests to any node in the cluster for such services.

Application Server State Replication Configuration

AS clustering is configured using the axis2.xml file. So you have to enable clustering in the axis2.xml of each node.

For more details about Carbon Server clustering, please see the article from the WSO2 library named "WSO2 Carbon Clustering Configuration Language."

1. Now enable clustering by changing the configuration in the axis2.xml.

Tip

By default, clustering is turned off to avoid additional overhead for individual deployments.

2. Open the axis2.xml file and set the enable attribute of the clustering element to true.
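The change described above might look like this in axis2.xml; the clustering class shown (the Tribes-based agent mentioned earlier) is assumed to be the default, so verify it against your own file:

```xml
<!-- $CARBON_HOME/repository/conf/axis2.xml -->
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <!-- Default parameters (domain, membership scheme, etc.) are sufficient for this demonstration -->
</clustering>
```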

3. You may also change clustering properties to suit your deployment as explained in the above article. However, the default configuration is sufficient for the demonstration.

Also see, Session State Replication.

 
