The following sections provide information and instructions on how to cluster the ESB profile of WSO2 Enterprise Integrator (WSO2 EI) with a third-party load balancer.
The clustering deployment pattern
This pattern has two WSO2 EI nodes to serve service requests with high availability and scalability. It allows access to the Management Console through an external load balancer and directs service requests to the nodes through this load balancer. The following image depicts the sample pattern this clustering deployment scenario follows.
This pattern uses two nodes as well-known members. It is always recommended to have all nodes of the cluster as well-known members.
When configuring your WSO2 products for clustering to host them in your production environment, it is necessary to use a specific IP address and not localhost or host names in your configurations.
Configuring the load balancer
The load balancer automatically distributes incoming traffic across multiple WSO2 product instances. It enables you to achieve greater levels of fault tolerance in your cluster and provides the required balancing of load needed to distribute traffic.
These configurations assume that the default 80 and 443 ports are used and exposed by the third-party load balancer for this WSO2 EI cluster. If other ports are used instead of the defaults, replace the 80 and 443 values with the corresponding ports. Also, note the following facts when configuring the load balancer:
These configurations are not required if your clustering setup does not have a load balancer.
Load balancer ports are HTTP 80 and HTTPS 443 in the deployment pattern above.
Direct the HTTP requests to the WSO2 EI nodes using the http://xxx.xxx.xxx.xx3/<service> URL via the HTTP 80 port.
Direct the HTTPS requests to the WSO2 EI nodes using the https://xxx.xxx.xxx.xx3/<service> URL via the HTTPS 443 port.
Access the Management Console using the https://ui.ei.wso2.com/carbon URL via the HTTPS 443 port.
It is recommended to use NGINX Plus as your load balancer of choice.
Install NGINX Plus or the NGINX community version on a server within your cluster network.
Create a VHost file (ei.http.conf) in the /etc/nginx/conf.d directory and add the following configurations to it. This configures NGINX Plus to direct the HTTP requests to the two WSO2 EI nodes via the HTTP 80 port.
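A minimal sketch of what such a VHost file might contain, assuming ei.wso2.com as the HTTP hostname, xxx.xxx.xxx.xx1 and xxx.xxx.xxx.xx2 as the node IPs, and 8280 as the nodes' HTTP pass-through port (all values here are illustrative placeholders, not taken from this guide):

```nginx
# Illustrative sketch only; hostname, IPs, and port are assumptions.
upstream wso2.ei.com {
    server xxx.xxx.xxx.xx1:8280;
    server xxx.xxx.xxx.xx2:8280;
}

server {
    listen 80;
    server_name ei.wso2.com;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass http://wso2.ei.com;
    }
}
```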
Create a VHost file (ei.https.conf) in the /etc/nginx/conf.d directory and add the following configurations to it. This configures NGINX Plus to direct the HTTPS requests to the WSO2 EI nodes via the HTTPS 443 port.
Configure NGINX to access the Management Console as https://ui.ei.wso2.com/carbon via the HTTPS 443 port. To do this, create a VHost file (ui.ei.https.conf) in the /etc/nginx/conf.d/ directory and add the following configurations to it.
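A hypothetical sketch of the Management Console VHost file, assuming 9443 as the node's servlet HTTPS port and /etc/nginx/ssl/ as the certificate location (all values illustrative):

```nginx
# Illustrative sketch only; certificate paths, IPs, and port are assumptions.
upstream ssl.ui.wso2.ei.com {
    ip_hash;    # sticky sessions, so a console login stays on one node
    server xxx.xxx.xxx.xx1:9443;
    server xxx.xxx.xxx.xx2:9443;
}

server {
    listen 443 ssl;
    server_name ui.ei.wso2.com;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass https://ssl.ui.wso2.ei.com;
    }
}
```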
Follow the instructions below to create SSL certificates for both WSO2 EI nodes.
Enter the host name (i.e., ei.wso2.com) as the common name when creating the keys.
- Execute the following command to create the server key:
  $ sudo openssl genrsa -des3 -out server.key 1024
- Execute the following command to create the certificate signing request:
  $ sudo openssl req -new -key server.key -out server.csr
- Execute the following commands to remove the password:
  $ sudo cp server.key server.key.org
  $ sudo openssl rsa -in server.key.org -out server.key
- Execute the following command to sign your SSL certificate:
  $ sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
- Execute the following command to add the certificate to the .jks file:
  $ sudo keytool -import -alias server -file server.crt -keystore <keystore-name>.jks
Execute the following command to restart the NGINX Plus server:
$ sudo service nginx restart
Execute the following command if you do not need to restart the server and are simply making a modification to the VHost file:
$ sudo service nginx reload
Creating the databases
All profiles of WSO2 EI use a database to store information such as user management details and registry data. All nodes in the cluster must use one central database for the config and governance registry mounts. Create the following databases and associated datasources.
The embedded H2 database is suitable for development. However, for most enterprise testing and production environments, it is recommended to use an industry-standard RDBMS such as Oracle, PostgreSQL, MySQL, or MS SQL.
| Database | Description |
|----------|-------------|
| WSO2_USER_DB | JDBC user store and authorization manager |
| REGISTRY_DB | Shared database for config and governance registry mounts in the product's nodes |
| REGISTRY_LOCAL1 | Local registry space in the manager node |
| REGISTRY_LOCAL2 | Local registry space in the worker node |
Follow the steps below to create the necessary databases.
These instructions assume you are installing MySQL as your relational database management system (RDBMS), but you can install another supported RDBMS as needed.
Download and install MySQL Server.
Download the MySQL JDBC driver.
Download and unzip the WSO2 EI binary distribution.
Throughout this guide, <EI_HOME> refers to the extracted directory of the WSO2 EI product distribution.
- Unzip the downloaded MySQL driver, and copy the MySQL JDBC driver JAR (mysql-connector-java-x.x.xx-bin.jar) into the <EI_HOME>/lib/ directory of both WSO2 EI nodes.
Add the following line to the /etc/hosts file to define the hostname for configuring permissions for the new database:
Do this step only if your database is on a separate server and not on your local machine.
- Execute the following command in a terminal/command window, where username is the username you want to use to access the databases:
  mysql -u username -p
When prompted, specify the password for accessing the databases with the username you specified.
Create the databases using the following commands:
About using MySQL in different operating systems
For users of Microsoft Windows, when creating the database in MySQL, it is important to specify the character set as latin1. Failure to do this may result in an error (error code: 1709) when starting your cluster. This error occurs in certain versions of MySQL (5.6.x) and is related to the UTF-8 encoding: MySQL's utf8 character set stores characters in multi-byte sequences, which can exceed index key-length limits, whereas latin1 is a single-byte character set. In recent versions, MySQL defaults to UTF-8 to be friendlier to international users, so you must use latin1 as the character set as indicated below in the database creation commands to avoid this problem. Note that this may result in issues with non-latin characters (like Hebrew, Japanese, etc.). The following is how your database creation command should look.
mysql> create database <DATABASE_NAME> character set latin1;
For users of other operating systems, the standard database creation commands will suffice. For these operating systems, the following is how your database creation command should look.
mysql> create database <DATABASE_NAME>;
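For reference, the full set of databases could then be created as follows. The REGISTRY_LOCAL1 and REGISTRY_LOCAL2 names are illustrative assumptions for the two local registry databases; on Microsoft Windows, append character set latin1 to each statement as described above.

```sql
-- WSO2_USER_DB and REGISTRY_DB are used later in this guide;
-- REGISTRY_LOCAL1/REGISTRY_LOCAL2 are illustrative names for the local registry spaces.
create database WSO2_USER_DB;
create database REGISTRY_DB;
create database REGISTRY_LOCAL1;
create database REGISTRY_LOCAL2;
```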
Mounting the registry
Add the following configurations on both WSO2 EI nodes to set up the shared registry database (REGISTRY_DB) mounts.
Note the following when adding these configurations:
- The existing wso2registry must not be removed.
- The datasource you specify in the <dbConfig name="sharedregistry"> tag must match the JNDI config name you specify in the datasource configuration file.
The registry mount path denotes the type of registry. For example, "/_system/config" refers to the configuration registry, and "/_system/governance" refers to the governance registry.
The <dbConfig> entry enables you to identify the datasource you configured in the datasource configuration file. The unique name "sharedregistry" refers to that datasource entry.
The <remoteInstance> section refers to an external registry mount. Specify the read-only/read-write nature of this instance, the caching configurations, and the registry root location in this section.
Also, specify the cache ID in the <remoteInstance> section. This enables caching to function properly in the clustered environment.
The cache ID is the same as the JDBC connection URL of the registry database. This value is the cache ID of the remote instance, and it should be in the format $database_username@$database_url, where $database_username is the username of the remote instance database and $database_url is the remote instance database URL. This cache ID denotes the enabled cache. In this case, the database it should connect to is REGISTRY_DB, which is the database shared across all the nodes. You can find it in the mounting configurations of the same datasource that is being used.
Define a unique name in the <id> tag for each remote instance; the mount configurations then refer to the remote instance by this ID.
Specify the actual mount path and target mount path in each of the mounting configurations. The target path can be any meaningful name.
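A hypothetical sketch of the mount configuration described above. The datasource JNDI name (jdbc/WSO2RegistryDB), the instance ID (instanceid), the database host, and the credentials are illustrative assumptions, not values from this guide:

```xml
<!-- Illustrative sketch only; names, URL, and credentials are assumptions. -->
<dbConfig name="sharedregistry">
    <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/eiconfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```

Note how the cacheId follows the $database_username@$database_url format and how both mounts refer to the remote instance by its <id> value.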
Configuring the ESB profile node
Do the following configurations for all nodes of your cluster.
Configure the datasources to point to the WSO2_USER_DB database as follows:
Replace the username, password, and database URL of your MySQL environment accordingly.
Repeat this configuration on the second WSO2 EI node to point its datasources to the WSO2_USER_DB database as well. (Change the username, password, and database URL as needed for your environment.)
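A hypothetical sketch of such a datasource entry. In stock WSO2 EI this configuration lives in <EI_HOME>/conf/datasources/master-datasources.xml; the JNDI name, database hostname, and credentials below are illustrative placeholders:

```xml
<!-- Illustrative sketch; JNDI name, URL, and credentials are assumptions. -->
<datasource>
    <name>WSO2_USER_DB</name>
    <jndiConfig><name>jdbc/WSO2UserDB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/WSO2_USER_DB</url>
            <username>regadmin</username>
            <password>regadmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```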
Add the following configuration in the <EI_HOME>/conf/user-mgt.xml file to configure the user stores.
Enter the datasource information for the user store that you configured in the datasource configuration file. You can change the admin username and password as well; however, you should do this before starting the server.
- Update the dataSource property in the <EI_HOME>/conf/user-mgt.xml file on all nodes as shown below to configure the datasource:
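Assuming the user store datasource is exposed under the JNDI name jdbc/WSO2UserDB (an illustrative name), the property would look like:

```xml
<!-- The JNDI name is an assumption; it must match your datasource configuration. -->
<Property name="dataSource">jdbc/WSO2UserDB</Property>
```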
Update the <EI_HOME>/conf/axis2/axis2.xml file as follows to set up the cluster configurations.
- Enable clustering for this node as follows:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
- Set the membership scheme to "wka" to enable the well-known address registration method. (This node sends cluster initiation messages to the WKA members):
- Specify the name of the cluster this node will join as follows:
- Specify the host to communicate cluster messages as follows:
Specify the port to communicate cluster messages as follows:
This port number is not affected by the port offset value specified in the
<EI_HOME>/conf/carbon.xml file. If this port number is already assigned to another server, the clustering framework automatically increments this port number. However, if there are two servers running on the same machine, ensure that a unique port is set for each server.
Specify the well-known members as follows. (The port value for each WKA node must be the same as its localMemberPort value; in this case, 4100.)
You can also use IP address ranges for the hostname (e.g., 192.168.1.2-10). However, you can define a range only for the last portion of the IP address. The smaller the range, the faster members are discovered, since each node has to scan fewer potential members.
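Putting the clustering settings above together, the relevant part of axis2.xml might look like the following sketch (the domain name and member IPs are illustrative placeholders):

```xml
<!-- Illustrative sketch of the clustering section; domain and IPs are assumptions. -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.ei.domain</parameter>
    <parameter name="localMemberHost">xxx.xxx.xxx.xx1</parameter>
    <parameter name="localMemberPort">4100</parameter>
    <members>
        <member>
            <hostName>xxx.xxx.xxx.xx1</hostName>
            <port>4100</port>
        </member>
        <member>
            <hostName>xxx.xxx.xxx.xx2</hostName>
            <port>4100</port>
        </member>
    </members>
</clustering>
```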
Uncomment and edit the
Specify the port offset value in the <EI_HOME>/conf/carbon.xml file.
This step is optional and is only required if all server instances are running on the same machine, which is not recommended for production environments. If you set a port offset, change all ports used in your configurations based on the offset value.
When you run multiple products/clusters or multiple instances of the same product on the same server or virtual machines (VMs), change their default ports with an offset value to avoid port conflicts. An offset defines the number by which all ports in the runtime (e.g., HTTP(S) ports) are increased. For example, if the default HTTP port is 9763 and the offset is 1, the effective HTTP port will change to 9764. For each additional product instance, set the port offset to a unique value. The offset of the default ports is zero.
The port value will automatically increase as shown in the Port Value column in the following table, allowing all five WSO2 product instances or servers to run on the same machine.

| WSO2 product instance | Port offset | Port value |
|-----------------------|-------------|------------|
| WSO2 server 1 | 0 | 9763 |
| WSO2 server 2 | 1 | 9764 |
| WSO2 server 3 | 2 | 9765 |
| WSO2 server 4 | 3 | 9766 |
| WSO2 server 5 | 4 | 9767 |
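In carbon.xml, the offset itself is a single element under the Ports section; for example, the second server above would use:

```xml
<Ports>
    <!-- An offset of 1 shifts every runtime port up by one (e.g., HTTP 9763 becomes 9764). -->
    <Offset>1</Offset>
</Ports>
```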
Update the <EI_HOME>/conf/carbon.xml file as follows to configure the hostname:
Add the host entries to your DNS, or to the /etc/hosts file (in Linux), on all the nodes of the cluster to map the hostnames to the IP addresses. For example, you can map the IP address of the database server. In this example, MySQL is used as the database server, so <MYSQL-DB-SERVER-IP> is the actual IP address of the database server and the host entry is as follows:
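Such a host entry might look like the following (the hostname carbondb.mysql-wso2.com is an illustrative placeholder for your database hostname):

```
<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
```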
Update the <EI_HOME>/conf/tomcat/catalina-server.xml file as follows.
The Connector protocol tag sets the protocol to handle incoming traffic. The default value is HTTP/1.1, which uses an auto-switching mechanism to select either a blocking Java-based connector or an APR/native connector. If the PATH (on Windows) or LD_LIBRARY_PATH (on most UNIX systems) environment variables contain the Tomcat native library, the APR/native connector will be used. If the native library cannot be found, the blocking Java-based connector will be used. Note that the APR/native connector has different settings from the Java connectors for HTTPS.
The non-blocking Java connector used is an explicit protocol that does not rely on the auto-switching mechanism described above. The following is the value used:
org.apache.coyote.http11.Http11NioProtocol
The port number is the value that this Connector will use to create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address. If the special value of 0 (zero) is used, Tomcat will select a free port at random to use for this connector. This is typically only useful in embedded and testing applications.
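A minimal sketch of such a connector entry. Only the protocol, port, and redirect port are shown; a real catalina-server.xml connector carries many more attributes, and the values here follow WSO2's defaults as an assumption:

```xml
<!-- Illustrative sketch; attribute values may differ in your setup. -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           redirectPort="9443"/>
```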
Deploying artifacts across the nodes
One common approach for synchronizing artifacts across all cluster nodes is to use rsync, a fast file-copying tool. You can first deploy artifacts on one node of the cluster and then use rsync to copy those artifacts to the other nodes as described below.
Use the following deployment synchronization recommendations based on the rate of change of artifacts that happen in your cluster:
- For a high rate of changes (i.e., if changes happen very frequently):
  - network file share
- For a medium rate of changes:
  - rsync
- For a low rate of changes (i.e., if changes happen once a week):
  - use the configuration management system to handle artifacts
  - other deployment options (e.g., Puppet, Chef, etc.)
Create a file called nodes-list.txt that lists all the nodes in the deployment, with one node per line. The following is a sample of the file for two nodes.
Create a file to synchronize the <PRODUCT_HOME>/repository/deployment/server/ directory between the nodes. You must create your own SSH key and define it as the pem_file, or alternatively use an existing SSH key. Specify the ei_server_dir depending on the location on your local machine. Change the logs.txt file path and the lock location based on where they are located on your machine.
Configure rsync in the <EI_HOME>/repository/tenant/ directory to share the tenant artifacts across the cluster.
Execute the following command in your CLI to create a Cron job that executes the above file every minute for deployment synchronization.
Testing the cluster
Follow the steps below to test the cluster.
Deploy artifacts to each product deployment location.
Use a deployment synchronization mechanism to synchronize the artifacts in the <EI_HOME>/repository/deployment/ directory. Always deploy artifacts first to the WSO2 EI server profile node with the registry configured as read/write. Next, deploy the artifacts to the other nodes.
Restart the configured load balancer.
- Execute the following command and start both WSO2 EI nodes:
Check for ‘member joined’ log messages in all consoles.
Additional information on logs and new nodes
When you terminate one node, all other nodes identify that the node has left the cluster. The same applies when a new node joins the cluster. If you want to add another new node, copy an existing node without any changes if you are running it on a new server (such as xxx.xxx.xxx.184). If you intend to run the new node on a server where another WSO2 product is already running, use a copy of the node and change the port offset accordingly in the <EI_HOME>/conf/carbon.xml file. You may also have to change the <EI_HOME>/conf/axis2/axis2.xml file if that product has clustering enabled. Also, map all hostnames to the relevant IP addresses when creating the new node. The log messages indicate whether the new node has joined the cluster.
- Access the Management Console through the LB using the following URL:
- Test load distribution via the following URLs:
Add a sample proxy service with the log mediator in the inSequence so that logs are displayed in the terminals, and then observe the cluster messages sent.
Send a request to the endpoint through the load balancer to verify that the proxy service is activated only on the active node(s) while the other nodes remain passive. This tests that the load balancer manages the active and passive states of the nodes, activating nodes as needed and leaving the rest in passive mode. For example, you would send the request to the proxy service URL exposed through the load balancer.
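A sample proxy service of the kind described above might look like the following Synapse configuration (the proxy name and sequence contents are illustrative):

```xml
<!-- Hypothetical sample; name and sequence contents are illustrative. -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="ClusterTestProxy"
       startOnLoad="true" transports="http https">
    <target>
        <inSequence>
            <log level="full"/>   <!-- logs each request in the serving node's terminal -->
            <respond/>
        </inSequence>
    </target>
</proxy>
```

You could then send a request to, e.g., http://ei.wso2.com/services/ClusterTestProxy through the load balancer (URL illustrative) and observe which node logs it.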
Tuning performance of the cluster
Follow the steps below to tune performance of the cluster:
The example parameter values below might not be optimal for the specific hardware configurations in your environment. Therefore, it is recommended to carry out load tests in your environment and tune the load balancer and other configurations accordingly.
- Change the following default memory allocation settings for the server node and the JVM tuning parameters in the server startup scripts (i.e., the <EI_HOME>/bin/integrator.sh or <EI_HOME>/bin/integrator.bat file) according to the expected server load:
-Xms256m -Xmx1024m -XX:MaxPermSize=256m
Modify important system files, which affect all programs running on the server. It is recommended to familiarize yourself with these files using Unix/Linux documentation before editing them.