The process of separating the worker and manager nodes depends on the worker/manager clustering pattern you choose, so ensure that you apply the configuration that is appropriate for your pattern. Although this topic refers to WSO2 Application Server (AS) in most examples, these concepts apply to other WSO2 products as well.
Why separate the worker and manager nodes?
When clustering, you must ensure that the manager node is well protected. Generally, you would deploy the manager in a secure location behind a firewall that only allows admin traffic from within your network to reach it. This is why we recommend keeping the manager node separate, and why it is best to separate the management and worker concerns in your deployment. The advantages of this separation include:
- Proper separation of concerns: Management nodes specialize in management of the setup, while worker nodes specialize in serving requests to deployment artifacts. Only management nodes are authorized to add new artifacts into the system or make configuration changes.
- Specific worker node tasks: Worker nodes can only deploy artifacts and read configuration. The separation enables you to have worker nodes with limited and yet specific tasks.
- Lower memory requirements: Worker nodes have a smaller memory footprint because the OSGi bundles related to the management console are not loaded.
- Improved security: Management nodes can be behind the internal firewall and can be exposed to clients running within the organization only, while worker nodes can be exposed to external clients.
About worker/manager separation in WSO2 products
Worker/manager separation applies to the following WSO2 products.
| Product | Released Versions | Upcoming Versions | Description |
| --- | --- | --- | --- |
| API Manager | 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.9.1, 1.10.0 and 2.0.0 | NA | Requires configurations in addition to the standard worker/manager cluster, particularly if distributed mode is used. See here for the additional configurations. Worker/manager separation is usually done for the Gateway, but can be done for the Publisher and Store as well. |
| | 5.2.0, 5.2.1 and 5.3.0 | NA | Standard worker/manager separation. |
| Business Process Server | 3.1.0, 3.2.0, 3.5.0, 3.5.1 and 3.6.0 | NA | While clustering BPS is relevant to this topic, it requires configurations in addition to the standard worker/manager cluster. See here for the additional configurations. |
| Business Rules Server | 2.1.0 | NA | Standard worker/manager separation. |
| Data Services Server | 3.1.0, 3.1.1, 3.2.0, 3.2.1, 3.2.2, 3.5.0 and 3.5.1 | NA | Standard worker/manager separation. |
| Enterprise Service Bus | 4.8.0, 4.8.1, 4.9.0 and 5.0.0 | 5.1.0 | Standard worker/manager separation with some additional ESB-specific configurations. |
| Task Server | 1.1.0 | NA | Standard worker/manager separation. See here for the additional steps needed. |
The following WSO2 products do NOT commonly use worker/manager separation in a clustered environment.
| Product | Released Versions | Upcoming Versions | Description |
| --- | --- | --- | --- |
| Business Activity Monitor | 2.4.0, 2.4.1 and 2.5.0 | NA | BAM does not have a concept of worker/manager separation, but the Analyzer nodes require clustering. See here for BAM clustering. |
| Complex Event Processor | 3.0.0, 3.1.0, 4.0.0, 4.1.0 and 4.2.0 | NA | CEP does not use a standard worker/manager cluster in a typical production environment. |
| Data Analytics Server | 3.0.0, 3.0.1 and 3.1.0 | NA | DAS does not use a standard worker/manager cluster in a typical production environment. |
| Enterprise Store | 2.0.0 and 2.1.0 | NA | ES does not support worker/manager separation. |
| Governance Registry | 4.6.0, 5.0.0, 5.0.1, 5.1.0, 5.2.0 and 5.3.0 | NA | No concept of worker/manager separation. See here for clustering Governance Registry. |
| Identity Server | 4.5.0, 4.6.0, 5.0.0, 5.1.0 and 5.2.0 | 5.3.0 | Worker/manager separation is not supported in the Identity Server and is not a typical deployment scenario. See here for clustering the Identity Server. |
| Message Broker | 2.2.0 and 3.0.0 | NA | MB does not use a standard worker/manager cluster in a production environment. |
Worker/Manager separated clustering patterns
Since all WSO2 products are built on the cluster-enabled Carbon platform, you can cluster most WSO2 products in a similar way, although there may be a few differences depending on the product and the deployment pattern you have chosen. From version 4.0.0 onwards, WSO2 Carbon supports deployment models that consist of 'worker' nodes and 'management' nodes. A worker node serves requests received from clients, whereas a management node is used to deploy and configure artifacts (web applications, services, proxy services, etc.).
This worker/manager deployment setup provides a proper separation of concerns between a Carbon-based product's UI components, management console, and related functionality on the one hand, and the runtime that serves requests to deployment artifacts on the other. Typically, the management nodes run in read-write mode and are authorized to add new artifacts or make configuration changes, whereas the worker nodes run in read-only mode, authorized only to deploy artifacts and read configurations. This deployment model improves security because the management nodes can be set up behind an internal firewall and exposed only to internal clients, while only the worker nodes are exposed externally. Also, since the user interface is not loaded onto worker nodes, the deployment model uses memory more efficiently.
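As a concrete illustration, in many Carbon 4.x-based products a node is started as a worker by passing the `workerNode` system property to the startup script. This is only a sketch; the exact flag and script name can vary by product and version, so check the clustering guide for your product:

```sh
# Manager node: starts normally, with the management console (UI) OSGi bundles loaded
sh wso2server.sh

# Worker node: the management console bundles are not loaded,
# which reduces the node's memory footprint
sh wso2server.sh -DworkerNode=true
```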
You can select one of the following patterns based on your load and your targeted expenditure.
The clustering deployment pattern you choose is important, because your configurations change based on the deployment pattern you use. For the purposes of this worker/manager separation example, example IP addresses are assigned to the WSO2 product nodes in each clustering deployment pattern.
Worker/Manager clustering deployment pattern 1
This pattern involves two worker nodes within a cluster. Here, the worker nodes are in high-availability mode while the manager is not. This pattern is suitable when applications are rarely deployed or modified (so a single management node is sufficient), but the application must run continuously, which is why the worker nodes are in a cluster.
This mode is rarely used. The preferred mode is two management nodes in Active/Passive mode (as is the case in deployment pattern 2).
Worker/Manager clustering deployment pattern 2
This pattern has two manager nodes in one cluster and two worker nodes in a separate cluster. It is similar to deployment pattern 1, but the management nodes are in Active/Passive mode for high availability. Active/Active mode for management nodes is generally not recommended, but it is useful if you need high availability for your data center and location-based services. This pattern suits scenarios where application deployment or modification is frequent (hence the cluster of management nodes), but the overall load is relatively low, which is why a single load balancer is shared with the worker cluster.
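One common way to share a single load balancer between the two clusters is to route by hostname, sending external traffic to the workers and internal admin traffic to the managers. The following nginx fragment is a hypothetical sketch; the hostnames, IP addresses, and ports are placeholders, not values prescribed by WSO2:

```nginx
# Worker cluster: serves external client requests
upstream as_workers {
    server 192.168.1.11:9443;   # worker node 1
    server 192.168.1.12:9443;   # worker node 2
}

# Manager cluster: Active/Passive, so only the active node is listed
upstream as_managers {
    server 192.168.1.21:9443;
}

server {
    listen 443 ssl;
    server_name as.wso2.com;            # external traffic -> workers
    location / { proxy_pass https://as_workers; }
}

server {
    listen 443 ssl;
    server_name mgt.as.wso2.com;        # internal admin traffic -> manager
    location / { proxy_pass https://as_managers; }
}
```

In practice the management virtual host would also be restricted to internal source addresses (for example with `allow`/`deny` rules), matching the firewall guidance earlier in this topic.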
Worker/Manager clustering deployment pattern 3
This pattern has two manager nodes in one sub-domain and two worker nodes in a separate sub-domain. The manager and worker sub-domains are part of a single WSO2 product cluster domain. Each sub-domain uses its own load balancer while existing within the same cluster. Multiple load balancers require several unique configuration steps, so ensure that you follow the relevant steps carefully. This pattern is similar to deployment pattern 2; however, the application deployment/modification load (or other administrative load) might be high, so a dedicated load balancer for the manager cluster prevents this load from affecting the worker cluster.
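The domain/sub-domain membership described above is typically expressed in each node's clustering configuration (`<PRODUCT_HOME>/repository/conf/axis2/axis2.xml` in Carbon 4.x products). The fragment below is an illustrative sketch: the domain name and member addresses are placeholders, and only the parameters relevant to sub-domain grouping are shown:

```xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <!-- Well-known-address membership scheme -->
    <parameter name="membershipScheme">wka</parameter>

    <!-- All manager and worker nodes share one cluster domain -->
    <parameter name="domain">wso2.as.domain</parameter>

    <!-- Sub-domain: use "mgt" on manager nodes and "worker" on worker nodes -->
    <parameter name="properties">
        <property name="subDomain" value="worker"/>
    </parameter>

    <!-- Well-known members that new nodes contact to join the cluster -->
    <members>
        <member>
            <hostName>192.168.1.11</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```

With this grouping in place, the load balancer for each sub-domain can direct traffic only to the members of that sub-domain, keeping administrative load off the worker nodes.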