This documentation is for older versions of WSO2 products and may not be relevant now. Please see your respective product documentation for clustering details and configurations.

Introduction to clustering

You can install multiple instances of WSO2 products in a cluster. A cluster consists of multiple instances of a product that act as if they are a single instance and divide up the work. This approach improves performance, because requests are distributed among several servers instead of just one, and it ensures reliability, because if one instance becomes unavailable or is experiencing high traffic, another instance will seamlessly handle the requests. Clustering also provides the following benefits.

  • High availability: Some systems may require high availability percentages like two-nines (99%) availability. A server may go down due to many reasons, such as system failures, planned outage, or hardware or network problems. Clustering for high availability results in fewer service interruptions, and since downtime is costly to any business, clustering has a direct and positive impact on costs.

  • Simplified administration: You can add and remove resources to meet the size and time requirements for your workloads. You can also launch compute jobs using simple APIs or management tools and automate workflows for maximum efficiency and scalability. Administration is simplified by using tools like the deployment synchronizer and log collector.

  • Increased scalability: Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. Scalability enables us to leverage resources more effectively. By distributing processing, we can make vertical or horizontal scalability possible. 

  • Failover and switchover capabilities: Failover can occur automatically or manually. You can prepare a redundant backup system or use load-balanced servers to serve the failover function. You address failover through your system design and characteristics, and clustering helps you design your applications against interruptions and with improved recovery time. Even if a failover occurs, it is important to bring the system back up as quickly as possible.

  • Low cost: Clustering improves scalability and fault tolerance, so business continuity is guaranteed even in the case of node failure. Also, it facilitates automatically scaling up the system when there is a burst load, which means the business will not lose any unforeseen opportunities.

These characteristics are essential for enterprise applications deployed in a production environment. If you are still in development mode, you do not need a cluster, but once you are ready to start testing and to go into production, where performance and reliability are critical, you should create a cluster.


WSO2 provides Hazelcast Community Edition as its default clustering engine. Clustering over a secure channel (i.e., secure Hazelcast) requires Hazelcast Enterprise, which is a commercial version of Hazelcast. To integrate with Hazelcast Enterprise, the clustering configuration provides a way to specify the license key. Advanced users can fine-tune Hazelcast by creating a <PRODUCT_HOME>/repository/conf/ file and adding the relevant Hazelcast properties as described in the Hazelcast Advanced Configuration Properties documentation. If you use Hazelcast Enterprise Edition or Hazelcast Management Center, see the Hazelcast documentation for details on configuring those products, and see Advanced Configurations and Information for further details.

Add the following property to the file to set the license key of Hazelcast Enterprise.
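As a sketch, the entry would look like the following. The property name below is the standard Hazelcast Enterprise license property; the license key value shown is a placeholder, and the exact configuration file name depends on your product version.

```properties
# Standard Hazelcast Enterprise license property (placeholder value)
hazelcast.enterprise.license.key=<your-hazelcast-enterprise-license-key>
```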


About membership schemes

A cluster should contain two or more instances of a product that are configured to run within the same domain. To make an instance a member of the cluster, you must configure it to use one of the following membership schemes:

  • Well Known Address (WKA) membership scheme
  • Multicast membership scheme
  • AWS membership scheme
  • Kubernetes membership scheme

All of these membership schemes are production-ready, but the right choice depends on your production environment. The following table compares the four membership schemes:

| Multicast membership scheme | WKA membership scheme | AWS membership scheme | Kubernetes membership scheme |
|---|---|---|---|
| All nodes should be in the same subnet | Nodes can be in different networks | Amazon EC2 nodes | Kubernetes pods |
| All nodes should be in the same multicast domain | No multicasting requirement | No multicasting requirement | No multicasting requirement |
| Multicasting should not be blocked | No multicasting requirement | No multicasting requirement | No multicasting requirement |
| No fixed IP addresses or hosts required | At least one well-known IP address or host required | No fixed IP addresses or hosts required | No fixed IP addresses or hosts required |
| Failure of any member does not affect membership discovery | New members can join with some WKA nodes down, but not if all WKA nodes are down | Failure of any member does not affect membership discovery | Failure of any member does not affect membership discovery |
| Does not work on IaaSs such as Amazon EC2 | IaaS-friendly | Works on Amazon EC2 | Works with Kubernetes and OpenShift environments |
| No WKA requirement | Requires keepalive, elastic IPs, or some other mechanism for re-mapping IP addresses of WKA members in case of failure | No WKA requirement | No WKA requirement |

Note that some production environments do not support multicast. However, if your environment supports multicast, there are no issues in using this as your membership scheme.
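In Carbon-based products, the membership scheme is selected in the clustering section of axis2.xml. A minimal sketch follows; the class name and parameter names are those used in Carbon 4.x-era products, so verify them against your product version.

```xml
<!-- Clustering section of <PRODUCT_HOME>/repository/conf/axis2/axis2.xml (Carbon 4.x sketch) -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- Available values: multicast | wka | aws -->
    <parameter name="membershipScheme">wka</parameter>
    <!-- All members of a cluster must use the same domain -->
    <parameter name="domain">wso2.carbon.domain</parameter>
</clustering>
```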

About Well-Known Addresses (WKA)

The Well-Known Addresses (WKA) feature is a mechanism that allows cluster members to discover and join a cluster using unicast instead of multicast. WKA is enabled by specifying a small subset of cluster members (referred to as WKA members) that are able to start a cluster. The WKA members start the cluster, and the other members join the cluster through them. If all WKA members are down, new members cannot discover and join the cluster.

The system should have at least two well-known address (WKA) members in order to work correctly and to recover if a single WKA member fails.
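Within the clustering section of axis2.xml, WKA members are typically declared in a members element. The host names and ports below are hypothetical examples; list at least two WKA members, as recommended above.

```xml
<!-- WKA member list inside the clustering section of axis2.xml (example values) -->
<members>
    <member>
        <hostName>192.168.1.10</hostName>
        <port>4000</port>
    </member>
    <member>
        <hostName>192.168.1.11</hostName>
        <port>4000</port>
    </member>
</members>
```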

Clustering compatibility with WSO2 products

WSO2 products are compatible with each other if they are based on the same WSO2 Carbon version. See the release matrix for compatibility information.

About performance of WSO2 products in a cluster

If you are setting up multiple WSO2 products in a cluster, it is recommended to set up each product on a separate server. For example, WSO2 ESB is used for message mediation, so a considerable amount of processing happens in the ESB. WSO2 DSS hosts data services and sits at a different architectural layer from the ESB. If you deploy both the ESB and DSS in the same instance/runtime, it can negatively impact the performance of both, and it also makes scaling difficult. However, you can set up hybrid servers (installing selected DSS features on top of the ESB and vice versa) using WSO2 products without the above performance concerns.

Deciding how to set up your cluster

When setting up your cluster, there are several approaches to consider. You must decide how you want to set up and share your databases, whether to front your cluster with a load balancer, and whether to use sticky sessions. You also need to decide whether to separate worker and manager concerns in the cluster. The following topics provide more details on these considerations and can help you make an informed decision.

High-level steps for creating a cluster

Following are the high-level steps for creating a cluster. The Setting up a Cluster section walks you through these steps in detail. Although those steps are for clustering WSO2 AS, they apply to all WSO2 products. For details on additional configuration required for a specific WSO2 product, see Configuring Clustering for Specific Products.

To create a cluster:
  1. Install the load balancer and instances of the product you are clustering.
  2. Configure the load balancer. 
  3. Set up the central database.
  4. Configure the manager node:
    1. Define the data source(s) for the central database in master-datasources.xml.
    2. Configure clustering in axis2.xml.
    3. Configure the cluster host name (so that requests to the manager node are redirected to the cluster) in carbon.xml.
    4. Map the database and cluster host name to the IP addresses in the /etc/hosts file.
    5. Start the manager node.
  5. Configure the worker nodes:
    1. Define the data source(s) for the central database in master-datasources.xml.
    2. Configure clustering in axis2.xml.
    3. Configure the cluster host name in carbon.xml.
    4. Map the database and cluster host name to the IP addresses in the /etc/hosts file.
    5. Start the worker nodes.
  6. Test the cluster.
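For the host-mapping steps above (4.4 and 5.4), the /etc/hosts entries on each node might look like the following sketch. The IP addresses and host names are hypothetical examples; use the actual addresses of your database server and the cluster host name you configured in carbon.xml.

```
# Example /etc/hosts entries on each cluster node (hypothetical values)
192.168.1.5    carbondb.mycompany.com    # central database host (master-datasources.xml)
192.168.1.20   cluster.mycompany.com     # cluster host name configured in carbon.xml
```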