This documentation is for WSO2 Enterprise Service Bus version 4.8.0. View documentation for the latest release.


This section describes some recommended performance tuning configurations to optimize the ESB. It assumes that you have set up the ESB on a server running Unix/Linux, which is recommended for a production deployment.

Important

  • Performance tuning requires you to modify important system files, which affect all programs running on the server. We recommend that you familiarize yourself with these files using Unix/Linux documentation before editing them.
  • The parameter values discussed below are only examples. They might not be the optimal values for the specific hardware configurations in your environment. We recommend that you carry out load tests in your environment to tune the ESB accordingly.

OS-Level Settings

1. To optimize network and OS performance, configure the following settings in the /etc/sysctl.conf file of Linux. These settings specify a larger port range, a more effective TCP connection timeout value, and a number of other important parameters at the OS level.

It is not recommended to use net.ipv4.tcp_tw_recycle = 1 when working with network address translation (NAT), such as if you are deploying products in EC2 or any other environment configured with NAT.

net.ipv4.tcp_fin_timeout = 30
fs.file-max = 2097152
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 1024 65535      
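
After editing /etc/sysctl.conf, the settings can be applied without a reboot by running sysctl -p as root. The sketch below also shows a read-only way to compare current kernel values against the recommended minimums via /proc/sys; the helper function and its name are illustrative, not part of the product.

```shell
#!/bin/sh
# Reload /etc/sysctl.conf so the settings take effect immediately (needs root):
#   sudo sysctl -p

# Read-only check (no root needed): compare a current kernel value in
# /proc/sys against the recommended minimum from the list above.
check_min() {
    key=$1
    min=$2
    # sysctl keys map to /proc/sys paths by replacing '.' with '/'
    cur=$(cat "/proc/sys/$(echo "$key" | tr . /)" 2>/dev/null)
    if [ -n "$cur" ] && [ "$cur" -lt "$min" ]; then
        echo "$key is $cur (recommended at least $min)"
    fi
}

check_min fs.file-max 2097152
check_min net.core.rmem_max 67108864
check_min net.core.wmem_max 67108864
```

Only single-valued keys are checked here; multi-valued keys such as net.ipv4.tcp_rmem would need their fields compared individually.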

2. To alter the number of allowed open files for system users, configure the following settings in the /etc/security/limits.conf file of Linux.

* soft nofile 4096
* hard nofile 65535

Optimal values for these parameters depend on the environment.
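
The limits.conf values are applied at login, so they can be verified from a fresh login shell started after the change. A minimal check, assuming a standard shell with the ulimit builtin:

```shell
#!/bin/sh
# Show the current open-file limits for this session.
# After editing /etc/security/limits.conf, log out and back in first;
# the new limits apply only to sessions started after the change.
echo "soft nofile limit: $(ulimit -Sn)"
echo "hard nofile limit: $(ulimit -Hn)"
```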

WSO2 Carbon platform-level settings

In multitenant mode, the WSO2 Carbon runtime limits thread execution time. That is, if a thread is stuck or takes a long time to process, Carbon detects it, interrupts it, and stops it. Note that Carbon prints the current stack trace before interrupting the thread. This mechanism is implemented as an Apache Tomcat valve and should therefore be configured in the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file as shown below.

<Valve className="org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve" threshold="600"/>
  • The className is the Java class used for the implementation. Set it to org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.
  • The threshold gives the minimum duration in seconds after which a thread is considered stuck. The default value is 600 seconds.

ESB-Level Settings

1. Memory allocated for the ESB can be increased by modifying the $ESB_HOME/bin/wso2server.sh file.

  • Default setting for WSO2 ESB 4.6.0 and later is: -Xms256m -Xmx512m -XX:MaxPermSize=256m
  • This can be changed for benchmarking as shown in the following example: -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m
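
For reference, these values are passed as JVM arguments inside wso2server.sh. A sketch of the edited fragment is shown below; the exact surrounding script text varies by release, so treat this as illustrative rather than a literal diff.

```shell
# Fragment of $ESB_HOME/bin/wso2server.sh after raising the heap for
# benchmarking: -Xms/-Xmx set the initial/maximum heap size, and
# -XX:MaxPermSize sets the permanent generation size (pre-Java 8 JVMs).
JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m"
```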

2. To enable streaming XPath, add the following parameters to the $ESB_HOME/repository/conf/synapse.properties file. For example:

synapse.streaming.xpath.enabled=true
synapse.temp_data.chunk.size=3072 

3. Disable the service/API invocation access logs as follows:

  • If you have not yet started the server, disable the log4j.logger.org.apache.synapse.transport.http.access logger in the $ESB_HOME/repository/conf/log4j.properties file.
  • If the server has already been started, go to Configure -> Logging in the management console, and in the Configure Log4J Loggers section, set org.apache.synapse.transport.http.access to OFF.

For more information on when to use log4j.properties or the management console, see Setting Up Logging.
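
As a sketch of the first option (property name as given above; in a properties file the last definition of a key wins, so appending an override is enough for a quick test, though editing the existing line in place is tidier):

```shell
#!/bin/sh
# Turn off the HTTP access logger before first startup by appending an
# override to log4j.properties. ESB_HOME is assumed to point at the
# ESB installation directory.
echo "log4j.logger.org.apache.synapse.transport.http.access=OFF" \
    >> "$ESB_HOME/repository/conf/log4j.properties"
```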

Pass-Through Transport Configurations

4. To optimize the pass-through transport, set the following properties in the <ESB_HOME>/repository/conf/passthru-http.properties file.

http.socket.timeout=120000
worker_pool_size_core=400
worker_pool_size_max=500
io_buffer_size=16384

Each parameter in the above configuration is described below.

  • http.socket.timeout: Maximum period of inactivity between two consecutive data packets, in milliseconds. Also defined as SO_TIMEOUT.
  • worker_pool_size_core: Initial number of threads in the worker thread pool.
  • worker_pool_size_max: Maximum number of threads in the worker thread pool.
  • io_buffer_size: Size in bytes of the buffer through which data passes.

HTTP-NIO Transport Configurations

5. PTT (Pass-Through Transport) is the default transport used by the ESB. If you want to use the HTTP-NIO transport instead, comment out PTT and un-comment the HTTP-NIO transport in the <ESB_HOME>/repository/conf/axis2/axis2.xml file.

6. To tune the HTTP-NIO transport performance, create a nhttp.properties file for the ESB in the <ESB_HOME>/repository/conf directory, and configure the socket timeout values, connection timeout values, and HTTP thread pool parameters. For example:

http.socket.timeout.sender=120000
http.socket.timeout.receiver=120000
http.socket.buffer-size=8192
http.tcp.nodelay=1
http.connection.stalecheck=0

# HTTP Sender thread pool parameters
snd_t_core=200
snd_t_max=250
snd_alive_sec=5
snd_qlen=-1
snd_io_threads=16

# HTTP Listener thread pool parameters
lst_t_core=200
lst_t_max=250
lst_alive_sec=5
lst_qlen=-1
lst_io_threads=16

Each parameter in the above configuration is described below.

  • http.socket.timeout.receiver: Maximum period of inactivity between two consecutive data packets on the transport listener side, in milliseconds. This is the socket timeout value for the connection between the client and the ESB server.
  • http.socket.timeout.sender: Maximum period of inactivity between two consecutive data packets on the transport sender side, in milliseconds. This is the socket timeout value for the connection between the ESB server and the back-end server.
  • http.socket.buffer-size: Size of the internal socket buffer used to retain data while receiving/transmitting HTTP messages.
  • http.tcp.nodelay: Determines whether Nagle's algorithm is to be used. Nagle's algorithm improves the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network.
  • http.connection.stalecheck: Determines whether a stale connection check is to be used.

WSO2 ESB uses the following listener/sender architectural style with non-blocking IO: Client -> ESB: Non-blocking Transport Listener -> Mediation Flow -> ESB: Non-blocking Transport Sender -> Back-End.

  • lst_t_core: Transport Listener worker pool's initial thread count.
  • lst_t_max: Transport Listener worker pool's maximum thread count.
  • lst_io_threads: Listener-side I/O workers; recommended to be equal to the number of CPU cores. I/O reactors usually employ a small number of dispatch threads (often as few as one) to dispatch I/O event notifications to a much greater number (often several thousands) of I/O sessions or connections. Generally, one dispatch thread is maintained per CPU core.
  • lst_alive_sec: Listener-side keep-alive time in seconds.
  • lst_qlen: Listener queue length, which is infinite by default.
  • snd_t_core, snd_t_max, snd_io_threads, snd_alive_sec, snd_qlen: These sender-side parameters have the same definitions as their listener-side counterparts. Generally, the values used for the listener-side parameters are applied to the sender side as well.
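
Since the I/O worker counts are recommended to match the number of CPU cores, a quick way to find the right value for a given machine is shown below (assuming GNU coreutils' nproc is available; on other systems, getconf _NPROCESSORS_ONLN serves the same purpose):

```shell
#!/bin/sh
# Print the nhttp I/O thread settings sized to the available CPU cores.
cores=$(nproc)
echo "lst_io_threads=$cores"
echo "snd_io_threads=$cores"
```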

For examples that illustrate how to tune the performance of the ESB, go to Performance Tuning WSO2 ESB with a practical example and WSO2 ESB tuning performance with threads.
