This documentation is for older WSO2 products.
Configuring the Manager Node of a Cluster - Clustering Guide 4.2.0 - WSO2 Documentation

The WSO2 ELB is now retired. Please set up your cluster with an alternative load balancer, preferably Nginx Plus. See Setting up a Cluster for more information.

Configuring the manager node involves the following tasks:

  • Configuring the datasource
  • Setting up cluster configurations for the manager node
  • Configuring the port offset and host name
  • Configuring the catalina-server.xml file
  • Mapping host names to IPs
  • Starting the AS server

This topic is not relevant if you are not clustering the manager node, as in clustering deployment pattern 1, where the manager is not exposed to the ELB.

Configuring the datasource

You configure datasources to allow the manager node to point to the central database. Make sure that you copy the database driver JAR to the manager node and follow the steps described in Setting up the Database.

In most WSO2 products, only one data source is used. If there is more than one data source, make sure they reference the central databases accordingly. For example, the API Manager deployment setup requires more specific data source configurations.
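As an illustrative sketch only (the database name, credentials, and pool settings below are placeholders, not values from this guide), a datasource entry in <AS_HOME>/repository/conf/datasources/master-datasources.xml pointing at the central MySQL database might look like this:

```xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>Central registry and user management database (placeholder values)</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- carbondb.mysql-wso2.com is the MySQL host name mapped in /etc/hosts -->
            <url>jdbc:mysql://carbondb.mysql-wso2.com:3306/REGISTRY_DB</url>
            <username>regadmin</username>
            <password>regadmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```

Adjust the URL, database name, and credentials to match the central database you created in Setting up the Database.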

Setting up cluster configurations for the manager node

Configuring clustering for the manager node is similar to the way you configured it for the load balancer node, but the localMemberPort is 4100 instead of 4000, and you define the load balancer node, rather than the AS manager node, as the well-known member.

  1. Open the <AS_HOME>/repository/conf/axis2/axis2.xml file.
  2. Locate the clustering section and verify or configure the properties as follows (some of these properties are already set correctly by default):
    1. Enable clustering for this node: 
      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    2. Set the membership scheme to wka to enable the well-known address registration method (this node will send cluster initiation messages to WKA members that we will define later): 
      <parameter name="membershipScheme">wka</parameter>
    3. Specify the name of the cluster this node will join:
      <parameter name="domain">wso2.as.domain</parameter>
    4. Specify the host used to communicate cluster messages:

      You may run into issues when using host names in products based on WSO2 Carbon 4.2.0, so it is recommended that you use the IP address directly here.

      <parameter name="localMemberHost">xxx.xxx.xxx.xx3</parameter>

    5. Specify the port used to communicate cluster messages: 
      <parameter name="localMemberPort">4100</parameter>

      This port number will not be affected by the port offset in carbon.xml. If this port number is already assigned to another server, the clustering framework will automatically increment this port number. However, if two servers are running on the same machine, you must ensure that a unique port is set for each server. 

    6. Specify the receiver's HTTP/HTTPS ports. These values are specified without the portOffset addition; the portOffset is added to them automatically at runtime. In the case of an ESB cluster, the 'WSDLEPRPrefix' parameter should point to the worker nodes' host name (as.wso2.com) and the load balancer's HTTP (80)/HTTPS (443) transport ports.
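      As a sketch (the exact host name and ports depend on your load balancer setup), the WSDLEPRPrefix parameter in axis2.xml would then look something like this:

      ```xml
      <!-- WSDL EPRs advertise the load balancer's host name and HTTP transport port
           instead of this node's own address (illustrative values) -->
      <parameter name="WSDLEPRPrefix">http://as.wso2.com:80</parameter>
      ```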

    7. Change the members listed in the <members> element so that it is applicable for your worker/manager clustering deployment pattern. Note that clustering deployment pattern 1 is not relevant here; hence its exclusion.

      This configuration is for clustering deployment pattern 2. The port value of the ELB must be the same as the group_mgt_port you specified in the loadbalancer.conf file. The port value for the WKA manager node must be the same as its localMemberPort (in this case, 4200).

      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx1</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.xx6</hostName>
              <port>4200</port>
          </member>
      </members>

      Here we configure the ELB and the second manager node as the well-known members.

      When configuring the second manager node (xxx.xxx.xxx.xx6), declare the first manager node (xxx.xxx.xxx.xx3) as the well-known member. 

      You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures, and it resolves many of the issues where the ELB logs show the "No members available" message. One shortcoming of this approach is that you can define a range only for the last portion of the IP address. Also keep in mind that the smaller the range, the less time it takes to discover members, since each node scans fewer potential members.

      This configuration is for clustering deployment pattern 3. The port value of the ELB must be the same as the group_mgt_port you specified in the loadbalancer.conf file here. Use the following as the configurations for the <members> element.

      <members>
          <member>
              <hostName>xxx.xxx.xxx.xx1</hostName>
              <port>4500</port>
          </member>
          <member>
              <hostName>xxx.xxx.xxx.xx2</hostName>
              <port>4600</port>
          </member>
      </members>

      Here we are configuring both the ELBs as well-known members even though ELB2 serves requests for the worker sub-domain. This is done as both ELBs are in the same cluster and it is a best practice to have the ELB as a well-known member.

      You can also use IP address ranges for the hostName, for example, 192.168.1.2-10. This helps ensure that the cluster eventually recovers after failures, and it resolves many of the issues where the ELB logs show the "No members available" message. One shortcoming of this approach is that you can define a range only for the last portion of the IP address. Also keep in mind that the smaller the range, the less time it takes to discover members, since each node scans fewer potential members.

    8. Change the following clustering properties. Ensure that you set the value of the subDomain as mgt to specify that this is the manager node, which will ensure that traffic for the manager node is routed to this member.

      <parameter name="properties">
          <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
          <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
          <property name="subDomain" value="mgt"/>
      </parameter>

Configuring the port offset and host name

If we are running two Carbon-based products on the same server, we must change the port offset to avoid port conflicts. Additionally, we will add the cluster host name so that any requests sent to the manager host are redirected to the cluster, where the load balancer will pick them up and manage them.

  1. Open <AS_HOME>/repository/conf/carbon.xml.
  2. Locate the <Ports> tag and change the value of its Offset sub-tag as appropriate. The offset increments with each additional Carbon-based product running on the same server: the first product keeps 0, the second uses 1, and the third uses 2.
    <Offset>0</Offset> 
  3. Locate the <HostName> tag and add the cluster host name: 
    <HostName>as.wso2.com</HostName>
  4. Locate the <MgtHostName> tag and uncomment it. Make sure that the management host name is defined as follows:
    <MgtHostName>mgt.as.wso2.com</MgtHostName> 
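Putting the steps above together, the relevant entries in carbon.xml for the first manager node (which needs no port offset) would look like this:

```xml
<Ports>
    <!-- 0 for the first Carbon product on this server; increment for each additional one -->
    <Offset>0</Offset>
    <!-- other port settings remain unchanged -->
</Ports>

<!-- Host name the cluster is addressed by -->
<HostName>as.wso2.com</HostName>

<!-- Host name for the management console; uncomment and set this tag -->
<MgtHostName>mgt.as.wso2.com</MgtHostName>
```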

If desired, you can also configure the deployment synchronizer in the carbon.xml file.

Configuring the catalina-server.xml file

Specify the following configurations in the catalina-server.xml file located in the <AS_HOME>/repository/conf/tomcat/ directory.

<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                port="9763"
                proxyPort="80"
--------
/>
<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                port="9443"
                proxyPort="443"
--------
/>

Note that you should not configure the proxyPort in the catalina-server.xml file if you want to call the manager node directly without going through the ELB.

The Connector protocol tag sets the protocol to handle incoming traffic. The default value is HTTP/1.1, which uses an auto-switching mechanism to select either a blocking Java-based connector or an APR/native connector. If the PATH (Windows) or LD_LIBRARY_PATH (on most UNIX systems) environment variables contain the Tomcat native library, the APR/native connector will be used. If the native library cannot be found, the blocking Java-based connector will be used. Note that the APR/native connector has different settings from the Java connectors for HTTPS.

The non-blocking Java connector used is an explicit protocol that does not rely on the auto-switching mechanism described above. The following is the value used:
org.apache.coyote.http11.Http11NioProtocol

The TCP port number is the value that this Connector will use to create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address. If the special value of 0 (zero) is used, Tomcat will select a free port at random to use for this connector. This is typically only useful in embedded and testing applications.

In the next section, we will map the host names we specified to real IPs.

Mapping host names to IPs

In the manager node we have specified two host names: carbondb.mysql-wso2.com for the MySQL server and mgt.as.wso2.com for the manager node. We will now map them to the actual IPs. Note that if you created the database on the same server as the manager node, you would have already added the first line, and if you created it on the same server as the load balancer, you would have already added the second line.

Open the server's /etc/hosts file and add the following lines, where <MYSQL-DB-SERVER-IP> is the actual IP address of the database server (in this example, xxx.xxx.xxx.206) and <AS MANAGER IP> is the IP address of the manager node.

<MYSQL-DB-SERVER-IP> carbondb.mysql-wso2.com
<AS MANAGER IP> mgt.as.wso2.com

In this example, it would look like this:

xxx.xxx.xxx.206 carbondb.mysql-wso2.com
xxx.xxx.xxx.xx3 mgt.as.wso2.com

We have now finished configuring the manager node and are ready to start the AS server.

Starting the AS server

Start the AS server by typing the following command in the terminal:

sh <AS_HOME>/bin/wso2server.sh -Dsetup 

The additional -Dsetup argument creates the required tables in the database; it is needed only the first time you start the server against a new database.

If you are using MySQL in Windows

It is possible that you may encounter an error (error code: 1709) when starting your cluster using a MySQL database on Microsoft Windows. This error occurs in certain versions of MySQL (5.6.x) and is related to the UTF-8 encoding: MySQL's utf8 character set uses up to three bytes per character, so indexed columns can exceed MySQL's index key-length limit. MySQL originally used the single-byte latin1 character set by default, but recent versions default to UTF-8 to be friendlier to international users. Use latin1 instead to avoid this problem, but note that this may result in issues with non-Latin characters (such as Hebrew or Japanese).

AS should print logs to the server console indicating that the cluster initialization is complete.

We have now finished configuring the manager node. Next, we will configure the AS worker nodes.
