This sample demonstrates how to handle peak load scenarios using load-balancing endpoints. In this sample's configuration, three load-balancing endpoints handle the peak load by dividing requests equally among themselves.
For a list of prerequisites, see Prerequisites to Start the ESB Samples.
Building the sample
The XML configuration for this sample is as follows:
This configuration file, synapse_sample_52.xml, is available in the
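The full sample configuration is not reproduced here, but based on the behavior described below (round-robin distribution over three Axis2 servers on ports 9001-9003, with a 60-second suspension of failed endpoints), a minimal sketch of such a configuration would look roughly like this. The exact element layout and algorithm class name may differ from the actual sample file:

```xml
<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main">
        <in>
            <send>
                <endpoint>
                    <!-- Distribute requests across three child endpoints
                         in a round-robin manner -->
                    <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
                        <endpoint>
                            <address uri="http://localhost:9001/services/LoadbalanceFailoverService">
                                <!-- Suspend a failed endpoint for 60 seconds -->
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9002/services/LoadbalanceFailoverService">
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                        <endpoint>
                            <address uri="http://localhost:9003/services/LoadbalanceFailoverService">
                                <suspendDurationOnFailure>60</suspendDurationOnFailure>
                            </address>
                        </endpoint>
                    </loadbalance>
                </endpoint>
            </send>
        </in>
        <out>
            <send/>
        </out>
    </sequence>
</definitions>
```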
To build the sample
Start the ESB with the sample 52 configuration. For instructions on starting a sample ESB configuration, see Starting the ESB with a sample configuration.
The operation log keeps running until the server starts, which usually takes several seconds. Wait until the server has fully booted up and displays a message similar to "WSO2 Carbon started in n seconds."
Start three instances of the sample Axis2 server on HTTP ports 9001, 9002, and 9003, giving each server a unique name (the output discussed below assumes the names MyServer1, MyServer2, and MyServer3). For instructions on starting the Axis2 server, see Starting the Axis2 server.
Deploy the back-end service LoadbalanceFailoverService. For instructions on deploying sample back-end services, see Deploying sample back-end services.
Executing the sample
The sample client used here is the Load Balance and Failover Client.
To execute the sample client
Run the following command from the
- Run the client again using the above command without the -Di=100 parameter, so that the client keeps sending requests indefinitely.
- While running the client, stop the server named MyServer1.
- Restart MyServer1.
Analyzing the output
When the client is run for the first time, it sends 100 requests to the LoadbalanceFailoverService through the ESB. The ESB distributes the load among the three endpoints specified in the configuration file in a round-robin manner. The LoadbalanceFailoverService appends the name of the server to each response, so that the client can determine which server processed the message.
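The round-robin distribution described above can be sketched in a few lines of Python. The server names are those used in this sample; the actual dispatching is, of course, done by the ESB's load-balance endpoint:

```python
from itertools import cycle

# Stand-ins for the three sample Axis2 servers behind the load balancer.
endpoints = ["MyServer1", "MyServer2", "MyServer3"]

# Round-robin dispatch: each request goes to the next endpoint in turn.
dispatcher = cycle(endpoints)
responses = [next(dispatcher) for _ in range(6)]
print(responses)
# → ['MyServer1', 'MyServer2', 'MyServer3', 'MyServer1', 'MyServer2', 'MyServer3']
```

With 100 requests, each server ends up handling roughly a third of the load, which is exactly the pattern visible in the client's console output.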
When you analyze the output on the client console, you will see that the requests are processed by all three servers in turn.
When you run the client without the -Di=100 parameter, the client sends requests indefinitely. When you stop the server named MyServer1 while the client is running, the requests are distributed only between MyServer2 and MyServer3.
You can see this by analyzing the log output on the client console. In the run described here, MyServer1 was stopped after request 63: every request after request 63 is distributed only between MyServer2 and MyServer3.
When you restart MyServer1, you will see that requests are sent to all three servers again after roughly 60 seconds. This is because <suspendDurationOnFailure> is set to 60 seconds in the configuration file: when a child endpoint fails, the load-balance endpoint suspends it for only 60 seconds before routing requests to it again.
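The suspension behavior can be modeled with a small Python sketch. This is a simplified illustration of the logic, not the ESB's actual implementation: a failed endpoint is skipped by the round-robin loop until its suspension period lapses.

```python
import time

class LoadBalancer:
    """Round-robin over child endpoints, skipping any endpoint that is
    currently suspended after a failure (simplified model)."""

    def __init__(self, endpoints, suspend_duration=60):
        self.endpoints = endpoints
        self.suspend_duration = suspend_duration
        self.suspended_until = {}  # endpoint -> time it becomes eligible again
        self.index = 0

    def next_endpoint(self, now=None):
        now = time.time() if now is None else now
        for _ in range(len(self.endpoints)):
            ep = self.endpoints[self.index]
            self.index = (self.index + 1) % len(self.endpoints)
            if self.suspended_until.get(ep, 0) <= now:
                return ep
        return None  # all endpoints are currently suspended

    def mark_failed(self, endpoint, now=None):
        now = time.time() if now is None else now
        # Suspend the failed endpoint for suspend_duration seconds.
        self.suspended_until[endpoint] = now + self.suspend_duration

# MyServer1 fails at t=0 and is suspended for 60 seconds.
lb = LoadBalancer(["MyServer1", "MyServer2", "MyServer3"])
lb.mark_failed("MyServer1", now=0)

# While suspended (t=30), only MyServer2 and MyServer3 receive requests.
picks_during = [lb.next_endpoint(now=30) for _ in range(4)]

# After 60 seconds (t=61) the suspension lapses and MyServer1 is retried.
picks_after = [lb.next_endpoint(now=61) for _ in range(3)]
print(picks_during, picks_after)
```

Running the sketch shows requests alternating between MyServer2 and MyServer3 during the suspension window, with MyServer1 rejoining the rotation once the 60 seconds have elapsed.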