WSO2 Carbon version 4.0.0 supports an improved deployment model in which its architecture components are separated into 'worker' nodes and 'management' nodes. The management node(s) is used to deploy and configure artifacts (web applications, services, proxy services, etc.), whereas the worker nodes serve requests received from clients.
This worker/manager deployment setup provides proper separation of concerns between a Carbon-based product's UI components, management console, and related functionality on one hand, and its internal framework serving requests to deployment artifacts on the other. Typically, the management nodes run in read-write mode and are authorized to add new artifacts or make configuration changes, whereas the worker nodes run in read-only mode, authorized only to deploy artifacts and read configuration. This deployment model also improves security, since the management nodes can be set up behind an internal firewall and exposed only to internal clients, while only the worker nodes are exposed externally. In addition, since the UI-related OSGi bundles are not loaded on 'worker' nodes, the deployment model utilizes memory more efficiently.
A worker/manager separated cluster can typically be implemented in the following ways:
This model consists of two sub-cluster domains: a worker domain and a management domain. These sub-domains take up load according to a defined load-balancing algorithm and auto-scale according to the load on their nodes.
This model consists of a single cluster, where a selected node acts as both a worker and a manager. This node requires two load balancers and is configured in read-write mode, while the other worker nodes are set up in read-only mode. The management node should also be a well-known member of the non-management worker nodes so that state replication and cluster messaging work.
Shown below are the minimum configuration instructions to cluster two WSO2 Application Server instances. The cluster consists of two sub-cluster domains (worker and management) and is fronted by a single load balancer. Altogether, we will be configuring three instances as follows:
Using similar instructions, this minimal configuration can be extended to include additional worker and manager nodes in the cluster.
1. Download and extract WSO2 ELB. This folder will be referred to as <elb-home>.
2. Open <elb-home>/repository/conf/loadbalancer.conf and add the following entries under services.
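A minimal services entry could look like the following sketch. The host names follow the examples used later in this guide, and the group management port (4000) matches the local member port referenced below; treat all values as assumptions to be adjusted for your environment:

```
appserver {
    # Cluster domain with two sub-domains: mgt and worker
    domains {
        wso2.as.domain {
            tenant_range    *;
            group_mgt_port  4000;
            mgt {
                hosts   mgt.as.cloud-test.wso2.com;
            }
            worker {
                hosts   as.cloud-test.wso2.com;
            }
        }
    }
}
```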
The above configuration includes two sub-domains under a single cluster domain named "wso2.as.domain". Each sub-domain consists of a single product instance (for the sake of simplicity): the management sub-domain includes one management node, and the worker sub-domain includes one worker node.
3. These two product instances can be set up on separate physical servers, on two VM instances, or on a single machine. If they are set up on one machine, update the hosts file accordingly. For example,
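On Linux, the /etc/hosts file could map both host names to the loopback address (the host names follow the examples used elsewhere in this guide):

```
127.0.0.1    mgt.as.cloud-test.wso2.com
127.0.0.1    as.cloud-test.wso2.com
```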
4. Uncomment <localMemberHost> element in <elb-home>/repository/conf/axis2/axis2.xml and specify the IP address (or host name) to be exposed to members of the cluster.
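For example, assuming all instances run on the same machine, the element could be set to the loopback address:

```xml
<parameter name="localMemberHost">127.0.0.1</parameter>
```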
5. Start the WSO2 Elastic Load Balancer instance.
1. Download and extract the WSO2 AS distribution. (This folder will be referred to as <manager-home>.)
2. First, clustering should be enabled at the axis2 level in order for the management node to communicate with the load balancer and the worker nodes. Open <manager-home>/repository/conf/axis2/axis2.xml and update the clustering configuration as follows:
3. Specify the cluster domain as defined in loadbalancer.conf.
4. Add a new property "subDomain" and set it to "mgt" to denote that this node belongs to the "mgt" sub-domain of the cluster defined in loadbalancer.conf.
5. Add the load balancer's IP address or host name (for example, 127.0.0.1) and the local member port (4000), as defined in the axis2.xml of the WSO2 ELB earlier.
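Putting steps 2–5 together, the clustering section of the management node's axis2.xml could look roughly like the sketch below. The class and parameter names follow the standard Axis2 Tribes-based clustering configuration shipped with Carbon 4.x; the IP address is an assumption for a single-machine setup:

```xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
            enable="true">
    <!-- Well-known address (WKA) based membership scheme -->
    <parameter name="membershipScheme">wka</parameter>
    <!-- Cluster domain as defined in loadbalancer.conf -->
    <parameter name="domain">wso2.as.domain</parameter>
    <!-- This node belongs to the management sub-domain -->
    <parameter name="subDomain">mgt</parameter>
    <!-- Well-known members: the ELB's local member host and port -->
    <members>
        <member>
            <hostName>127.0.0.1</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```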
Since the WSO2 AS management node is fronted by the WSO2 Elastic Load Balancer, the proxy ports associated with the HTTP and HTTPS connectors should be configured. These proxy ports are the corresponding transport receiver ports opened by the WSO2 ELB (configured in the transport listeners section of its axis2.xml).
6. Open <manager-home>/repository/conf/tomcat/catalina-server.xml and add the proxyPort attribute for both HTTP and HTTPS connectors as shown below.
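For example, assuming the ELB listens on the default NIO transport ports 8280 (HTTP) and 8243 (HTTPS), the connectors could be updated as follows; attributes other than proxyPort are kept as they appear in the stock catalina-server.xml:

```xml
<!-- HTTP connector: proxyPort points to the ELB HTTP transport port -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763" proxyPort="8280" />
<!-- HTTPS connector: proxyPort points to the ELB HTTPS transport port -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443" proxyPort="8243" SSLEnabled="true" />
```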
7. Since multiple WSO2 Carbon-based products run on the same host, change the port offset in <manager-home>/repository/conf/carbon.xml as follows to avoid possible port conflicts.
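For example, an offset of 1 shifts every default port up by one (so the default HTTPS port 9443 becomes 9444, matching the management console URL used later in this guide); the exact value is an assumption for this three-instances-on-one-machine setup:

```xml
<Ports>
    <Offset>1</Offset>
</Ports>
```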
8. Update the MgtHostName and HostName elements in carbon.xml as shown below.
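Using the host names from the earlier hosts-file example:

```xml
<HostName>as.cloud-test.wso2.com</HostName>
<MgtHostName>mgt.as.cloud-test.wso2.com</MgtHostName>
```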
9. The AS management node is used for deploying artifacts. These artifacts should be synchronized automatically to the worker nodes in the cluster. This is handled by the deployment synchronization mechanism in WSO2 Carbon-based products. The default SVN-based deployment synchronizer can be used to auto-commit the deployment artifacts to a pre-configured SVN repository; the worker nodes can then be configured to automatically check out the artifacts from the same SVN location.
Include the following in the <manager-home>/repository/conf/carbon.xml file to configure the SVN-based deployment synchronizer.
Make sure to replace the SvnUrl, SvnUser, and SvnPassword values according to your SVN repository.
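A sketch of the synchronizer configuration is shown below; the repository URL, user name, and password are placeholders to be replaced with your own values:

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- The management node commits artifacts to the SVN repository -->
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/repos/as-artifacts</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```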
10. Start the AS instance on the management node.
Once the worker/manager clustering configurations are added, the management console can be accessed at: https://mgt.as.cloud-test.wso2.com:9444/carbon
11. Refer to the logs to ensure that the product instance has successfully joined the cluster and is ready to receive requests through the load balancer.
12. Also, try deploying an artifact through the management node. You should receive an error, since the server looks for a worker node, which is not yet in the cluster. Although the request is issued through the management node, all requests are served by the worker node(s).
1. Download and extract the WSO2 AS distribution. (This folder will be referred to as <worker-home>.)
Make the following modifications in addition to the changes made to axis2.xml of the management node.
2. If two cluster node instances run on the same machine, update the localMemberPort element in <worker-home>/repository/conf/axis2/axis2.xml as follows:
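For example, the worker could use any free port that differs from the ELB's group management port (4000) and the management node's local member port; the value below is an assumption:

```xml
<parameter name="localMemberPort">4001</parameter>
```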
3. The worker node belongs to the "worker" sub-domain of the cluster domain configured in the loadbalancer.conf of the WSO2 Elastic Load Balancer. Add a new "subDomain" property to <worker-home>/repository/conf/axis2/axis2.xml to represent this.
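Within the clustering section of axis2.xml, the property could look like this:

```xml
<parameter name="subDomain">worker</parameter>
```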
4. Since multiple WSO2 Carbon-based products run on the same host, change the port offset in <worker-home>/repository/conf/carbon.xml as follows to avoid possible port conflicts.
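For example, an offset of 2 keeps the worker's ports clear of both the ELB and the management node (which uses offset 1 in this guide); the exact value is an assumption:

```xml
<Ports>
    <Offset>2</Offset>
</Ports>
```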
5. Update the HostName element as shown below. The MgtHostName element is not needed, since this node is designated as a worker node.
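Using the worker host name from the earlier hosts-file example:

```xml
<HostName>as.cloud-test.wso2.com</HostName>
```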
6. Next, configure the SVN-based deployment synchronizer to automatically check out deployment artifacts from a common SVN repository. The worker nodes of a cluster SHOULD NOT commit (write) artifacts. Therefore, disable the AutoCommit property in the deployment synchronizer configuration as follows:
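The worker's carbon.xml entry mirrors the management node's, with AutoCommit disabled; the repository URL and credentials are placeholders for your own values:

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- Workers only check out artifacts; they never commit -->
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/repos/as-artifacts</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```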
7. Start the worker product instance. The workerNode system property must be set to true when starting the workers in a cluster. For example,
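On Linux, the system property can be passed on the command line when invoking the startup script:

```
sh <worker-home>/bin/wso2server.sh -DworkerNode=true
```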
8. Refer to the logs to ensure that the product instance has successfully joined the cluster.