Clustering Application Server - Clustering Guide 4.2.0 - WSO2 Documentation

This section describes how to set up a WSO2 Application Server worker/manager separated cluster and how to configure that cluster with different load balancers. The following sections provide the information and instructions you need to set up your cluster.

See here for information on why you would want to separate the worker and manager nodes in your cluster.

Worker/manager separated clustering deployment pattern

In this pattern, there are three WSO2 Application Server nodes: one node acts as the manager node and two nodes act as worker nodes, providing high availability and serving service requests. Although you can use any standard WSO2 product, WSO2 AS is used here for the purposes of this example. In this pattern, access to the admin console is provided through an external load balancer, and service requests are also directed to the worker nodes through this load balancer. The following image depicts the sample pattern this clustering deployment scenario will follow.

Here, we use two nodes as well-known members: the manager node and one of the worker nodes. It is always recommended to use at least two well-known members, so that you do not have to restart all the nodes in the cluster if a well-known member is shut down.

Configuring the third-party load balancer

About clustering without a load balancer

Note that the configurations in this subsection are not required if your clustering setup does not have a load balancer. If you follow the rest of the configurations in this topic, you will be able to set up your cluster without the load balancer.

The configuration steps in this document assume that the default ports 80 and 443 are used and exposed by the third-party load balancer for this AS cluster.

If any other ports are used instead of the defaults, replace the values 80 and 443 with the corresponding ports in the relevant places.

Things to keep in mind:

  • The load balancer ports are HTTP 80 and HTTPS 443, as indicated in the deployment pattern above.
  • Direct HTTP requests to the worker nodes using http://xxx.xxx.xxx.xx3/<service> via port 80.
  • Direct HTTPS requests to the worker nodes using https://xxx.xxx.xxx.xx3/<service> via port 443.
  • Access the management console at https://xxx.xxx.xxx.xx2/carbon via port 443.
  • In a WSO2 AS cluster, the worker nodes serve service requests on the HTTP 9764 and HTTPS 9443 ports, and the management console can be accessed using the HTTPS 9443 port.

You can use any third-party load balancer in front of the cluster; configure it to expose the ports and route requests as described above.
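As one illustration, a load balancer such as NGINX could front this pattern roughly as sketched below. NGINX itself, the upstream name, and the SSL details are assumptions made for this example and are not part of this guide; the IP placeholders mirror the ones used above, and the backend ports follow the worker/manager ports listed in this section.

```nginx
# Sketch only: illustrative NGINX front end for the worker/manager pattern.
upstream as_workers {
    server xxx.xxx.xxx.xx3:9764;   # worker nodes serve requests on HTTP 9764
}

server {
    listen 80;
    server_name as.wso2.com;
    location / {
        proxy_pass http://as_workers;     # HTTP service traffic to workers
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name mgt.as.wso2.com;
    # ssl_certificate / ssl_certificate_key directives go here
    location / {
        proxy_pass https://xxx.xxx.xxx.xx2:9443;   # management console on the manager node
        proxy_set_header Host $host;
    }
}
```

Whatever load balancer you choose, the essential mapping is the same: ports 80/443 on the front end, the worker service ports and the manager's HTTPS port on the back end.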

Configuring the manager node

  1. Download and unzip the WSO2 AS binary distribution. Consider the extracted directory as <PRODUCT_HOME>.
  2. Set up the cluster configurations. Edit the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file as follows.
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the well-known address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join:
      <parameter name="domain">wso2.as.domain</parameter>
    4. Specify the host used to communicate cluster messages:

      You may run into issues when using host names in products based on WSO2 Carbon 4.2.0, so it is recommended that you use the IP address directly here. You can also use IP address ranges here. For example, 192.168.1.2-10.

      <parameter name="localMemberHost">xxx.xxx.xxx.xx2</parameter>

    5. Specify the port used to communicate cluster messages: 
      <parameter name="localMemberPort">4100</parameter>

    6. Specify the well-known member. Here, the well-known member is a worker node.

      <members>
      	<member>
      		<hostName>xxx.xxx.xxx.xx3</hostName>
      		<port>4200</port> 
      	</member>
      </members>

      Although this example only indicates one well-known member, it is recommended to add at least two well-known members here. This is done to ensure that there is high availability for the cluster. You can also use IP address ranges for the hostName. For example, 192.168.1.2-10.
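Putting the fragments above together, the manager node's clustering section in axis2.xml looks along these lines (abridged; only the elements discussed in this step are shown):

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.as.domain</parameter>
    <parameter name="localMemberHost">xxx.xxx.xxx.xx2</parameter>
    <parameter name="localMemberPort">4100</parameter>
    <members>
        <member>
            <hostName>xxx.xxx.xxx.xx3</hostName>
            <port>4200</port>
        </member>
    </members>
</clustering>
```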

  3. Configure the HostName. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows.

    <HostName>as.wso2.com</HostName>
    <MgtHostName>mgt.as.wso2.com</MgtHostName>
  4. Enable SVN-based deployment synchronization with the AutoCommit property marked as true. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows.

    <DeploymentSynchronizer>
    	<Enabled>true</Enabled>
    	<AutoCommit>true</AutoCommit>
    	<AutoCheckout>true</AutoCheckout>
    	<RepositoryType>svn</RepositoryType>
    	<SvnUrl>https://svn.wso2.org/repos/as</SvnUrl>
    	<SvnUser>svnuser</SvnUser>
    	<SvnPassword>xxxxxx</SvnPassword>
    	<SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>
  5. Map the host names to the IP address. Add the host entries below to your DNS, or to the /etc/hosts file (on Linux), on all the nodes of the cluster. Map the host names to the IP address of the load balancer machine.

    <IP-of-LB>	as.wso2.com
    <IP-of-LB>	mgt.as.wso2.com
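For example, if the load balancer's IP address were 192.168.1.10 (an illustrative value only), the /etc/hosts entries would be:

```text
192.168.1.10	as.wso2.com
192.168.1.10	mgt.as.wso2.com
```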
  6. Allow access to the management console only through the load balancer. Configure the HTTP/HTTPS proxy ports to communicate through the load balancer by editing the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as follows.

    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9763"
    	proxyPort="80"
    	...
    	/>
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9443"
    	proxyPort="443"
    	...
    	/>

Configuring the worker node

  1. Download and unzip the WSO2 AS binary distribution. Consider the extracted directory as <PRODUCT_HOME>.
  2. Set up the cluster configurations. Edit the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file as follows.
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the well-known address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join:
      <parameter name="domain">wso2.as.domain</parameter>
    4. Specify the host used to communicate cluster messages:

      You may run into issues when using host names in products based on WSO2 Carbon 4.2.0, so it is recommended that you use the IP address directly here. You can also use IP address ranges here. For example, 192.168.1.2-10.

      <parameter name="localMemberHost">xxx.xxx.xxx.xx3</parameter>

    5. Specify the port used to communicate cluster messages: 
      <parameter name="localMemberPort">4200</parameter>

    6. Specify the well-known member. Here, the well-known member is the manager node.

      <members>
      	<member>
      		<hostName>xxx.xxx.xxx.xx2</hostName>
      		<port>4100</port> 
      	</member>
      </members>

      Although this example only indicates one well-known member, it is recommended to add at least two well-known members here. This is done to ensure that there is high availability for the cluster. You can also use IP address ranges for the hostName. For example, 192.168.1.2-10.

  3. Configure the HostName. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows.

    <HostName>as.wso2.com</HostName>
  4. Enable SVN-based deployment synchronization with the AutoCommit property marked as false. To do this, edit the <PRODUCT_HOME>/repository/conf/carbon.xml file as follows.

    <DeploymentSynchronizer>
    	<Enabled>true</Enabled>
    	<AutoCommit>false</AutoCommit>
    	<AutoCheckout>true</AutoCheckout>
    	<RepositoryType>svn</RepositoryType>
    	<SvnUrl>https://svn.wso2.org/repos/as</SvnUrl>
    	<SvnUser>svnuser</SvnUser>
    	<SvnPassword>xxxxxx</SvnPassword>
    	<SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>

    Tip: It is recommended to delete the <PRODUCT_HOME>/repository/deployment/server directory and create an empty server directory on the worker node. This avoids any SVN conflicts that may arise.
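As a sketch, the cleanup described in the tip could be done as follows. The ./wso2as-worker path is a stand-in for your actual <PRODUCT_HOME>; substitute your real extraction directory.

```shell
# Remove the existing deployment directory and recreate it empty, so the
# worker checks artifacts out fresh over SVN with no local conflicts.
# "./wso2as-worker" is a stand-in for <PRODUCT_HOME>.
PRODUCT_HOME=./wso2as-worker
mkdir -p "$PRODUCT_HOME/repository/deployment/server/stale-artifact"  # simulate old content
rm -rf "$PRODUCT_HOME/repository/deployment/server"
mkdir -p "$PRODUCT_HOME/repository/deployment/server"
```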

  5. Configure the HTTP/HTTPS proxy ports to communicate through the load balancer by editing the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as follows.

    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9763"
    	proxyPort="80"
    	...
    	/>
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
    	port="9443"
    	proxyPort="443"
    	...
    	/>
  6. Create the second worker node by making a copy of the worker node you just configured and changing the following in the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file.
    <parameter name="localMemberPort">4300</parameter>
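The copy-and-edit step above can be sketched in shell. The worker1/worker2 directory names are illustrative; the first few lines only simulate the configured worker's file so the sketch is self-contained, and the sed syntax assumes GNU sed (Linux).

```shell
# Simulate the first worker's axis2.xml (stand-in for the real file):
mkdir -p worker1/repository/conf/axis2
printf '<parameter name="localMemberPort">4200</parameter>\n' \
    > worker1/repository/conf/axis2/axis2.xml
# Clone the configured worker, then change only the cluster-messaging
# port in the copy so the two workers do not clash:
cp -r worker1 worker2
sed -i 's|localMemberPort">4200|localMemberPort">4300|' \
    worker2/repository/conf/axis2/axis2.xml
```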

Testing the cluster

  1. Restart the configured load balancer.
  2. Start the manager node.
    sh <PRODUCT_HOME>/bin/wso2server.sh -Dsetup
  3. Start the two worker nodes.
    sh <PRODUCT_HOME>/bin/wso2server.sh -DworkerNode=true
  4. Check for ‘member joined’ log messages in all consoles.
  5. Access the management console through the load balancer using the following URL: https://xxx.xxx.xxx.xx2:443/carbon
  6. Test load distribution via http://xxx.xxx.xxx.xx3:80/ or https://xxx.xxx.xxx.xx3:443/.