
This section lists performance analysis experiments conducted focusing on stream query processing and event ingestion with persistence.


Summary

The following table summarizes the content presented in this section.

...

Average Latency (milliseconds)

...

Event Ingestion with Persistence

Oracle Event Store

...

MS SQL Event Store

...

MySQL Event Store

...

Notes
  • Event ingestion with persistence tests were conducted using the default Amazon RDS configurations.
  • All event ingestion with persistence tests were run for one hour.
  • Performance results were aggregated in windows of 5,000K events.
  • In the table above, the input rate was 1,000K events per second during the first four tests.
  • All tests were conducted using the TCP transport.


Stream query processing


Scenario: Running multiple Siddhi queries

Infrastructure used
  • The experiments were carried out on two c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • One node operated as a client.

  • Another node operated as a Stream Processor node.

  • Experiments were carried out using TCP as the transport.

...

@App:name("TCP_Benchmark")

@source(type = 'tcp', context='inputStream',@map(type='binary'))

define stream inputStream (iijtimestamp long,value float);

from inputStream

select iijtimestamp,value

insert into tempStream;

...

Filter

...
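The filter query used in this scenario is elided above. As an illustration only (the actual filter condition is not shown in this document; the threshold and output stream name below are hypothetical), a Siddhi filter query over the same input stream could take the following form:

@App:name("TCP_Benchmark")

@source(type = 'tcp', context='inputStream', @map(type='binary'))
define stream inputStream (iijtimestamp long, value float);

-- Hypothetical filter: forward only events whose value exceeds 50.
from inputStream[value > 50]
select iijtimestamp, value
insert into filteredStream;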

Window

...
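The window query is likewise elided. A representative Siddhi window query, reusing the source and stream definition from the filter sketch above, could look like the following (the window type, size, and aggregation are illustrative, not necessarily those used in the benchmark):

-- Hypothetical window: emit the average value over 1-second tumbling batches.
from inputStream#window.timeBatch(1 sec)
select avg(value) as avgValue
insert into windowStream;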

Scenario: Running a Siddhi pattern query on the debs-2013-grand-challenge-soccer-monitoring dataset

Infrastructure used
  • The experiments were carried out on two c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g
  • One node operated as a client.

  • Another node operated as a Stream Processor node.

  • Experiments were carried out using TCP as the transport.

Dataset

The data used in the 2013 DEBS Grand Challenge was collected by the real-time locating system deployed on a football field at the Nuremberg Stadium in Germany. Data originates from sensors located near the players' shoes (1 sensor per leg) and in the ball (1 sensor). The goalkeeper is equipped with two additional sensors, one in each hand. The sensors in the players' shoes and hands produce data at a frequency of 200Hz, while the sensor in the ball produces data at a frequency of 2000Hz. The total data rate reaches roughly 15,000 position events per second. Every position event describes the position of a given sensor in a three-dimensional coordinate system whose origin (0, 0, 0) is at the center of the playing field.

For more details about the dataset, see DEBS 2013 Grand Challenge: Soccer monitoring.

Pattern used

We created a pattern for the goal scoring scenario. In this scenario, the pattern matching query involves two events, referred to as e1 and e2, that should occur one after the other with certain preconditions satisfied. These preconditions involve the position of the ball (denoted by x, y, and z) and its acceleration (denoted by a_abs). The numerical constants (such as 29880 and 22560) against which the values of x, y, and z are compared correspond to the boundary points of the goal region.

Sample Siddhi App:
@App:name("TCP_Benchmark")

@source(type = 'tcp', context='inputStream',@map(type='binary'))
define stream innerStream(iij_timestamp long,sid int, eventtt long, x double, y, double, z int, v_abs double, a_abs int, vx int, vy int, vz int, ax int,ay int, az int);

from  e1=innerStream[(x>29880 or x<22560) and y>-33968 and y<33965 and (sid==4 or sid ==12 or sid==10 or sid==8)]
    -> e2=innerStream[(x<=29898 and x>22579) and y<=-33968 and z<2440 and a_abs>=55000 and (sid==4 or sid ==12 or sid==10 or sid==8)]

select
 e2.sid as sid, e2.eventtt as eventtt, e2.x as x,e2.y as y, e2.z as z,e2.v_abs as v_abs,e2.a_abs as a_abs, e2.vx as vx,e2.vy as vy, e2.vz as
 vz, e2.ax as ax,e2.ay as ay, e2.az as az, e1.iij_timestamp as iij_timestamp
insert into outputStream;
Summary Results

...

Ingesting events with persistence

All the event ingestion with persistence tests were conducted using the same deployment configuration: a TCP client publishes events to the WSO2 Stream Processor node, which persists them in an Amazon RDS database node.

[Deployment diagram removed]

Oracle event store

Infrastructure used

The following infrastructure was used in both scenarios:

  • c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances operated as the SP node and the TCP client.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with Oracle operated as the database node.

  • Customized TCP client operated as the data publisher (TCP producer found in samples).

  • Experiments were carried out using Oracle 12c.

Scenario 1: Insert query - Persisting 252 million process monitoring events in Oracle

This test involved persisting process monitoring events of approximately 180 bytes each. The test injected 252 million events into WSO2 Stream Processor with a publishing throughput of 70,000 events per second over a period of one hour.
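For reference, event persistence of this kind can be expressed with the siddhi-store-rdbms extension. The following is a minimal sketch under stated assumptions, not the exact app used in the test: the app name, stream and table schemas, JDBC URL, and credentials are all placeholders.

@App:name("OraclePersistenceSketch")

@source(type = 'tcp', context='monitoringStream', @map(type='binary'))
define stream monitoringStream (id string, ts long, value double);

-- Hypothetical Oracle-backed event store table; jdbc.url and credentials are placeholders.
@store(type='rdbms',
    jdbc.url='jdbc:oracle:thin:@db-node:1521/ORCL',
    username='sp_user', password='sp_pass',
    jdbc.driver.name='oracle.jdbc.driver.OracleDriver')
define table MonitoringTable (id string, ts long, value double);

-- Every incoming event is inserted into the event store.
from monitoringStream
insert into MonitoringTable;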

Throughput Graph

[Graph removed]

Latency Graph

[Graph removed]
Summary Results

...

Scenario 2: Update Query - Updating 10 million events in the Oracle data store

The test injected 10 million events into a properly indexed Oracle database. The events were process monitoring events of approximately 180 bytes each. 75 million update queries were performed with a publishing throughput of 20,000 events per second over a period of one hour.
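An update workload of this kind can be sketched in Siddhi as follows. This is illustrative only: it reuses the hypothetical MonitoringTable from the insert sketch above, and the key column is a placeholder.

define stream updateStream (id string, ts long, value double);

-- Update the matching row in the event store for each incoming event.
from updateStream
select id, ts, value
update MonitoringTable
    on MonitoringTable.id == id;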

Throughput Graph

[Graph removed]

Latency Graph

[Graph removed]

Summary Results

...

Microsoft SQL Server event store

Infrastructure used
  • c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instance operated as the SP node.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MS SQL Enterprise Edition 2016 operated as the database node.

  • Customized TCP client operated as the data publisher (Sample TCP client found in samples).

Scenario 1: Insert Query - Persisting 198 million process monitoring events in MS SQL

This test involved persisting process monitoring events of approximately 180 bytes each. The test injected 198 million events into WSO2 Stream Processor with a publishing throughput of 55,000 events per second over a period of one hour.

Throughput Graph

[Graph removed]

Latency Graph

[Graph removed]

Summary Results

...

Scenario 2: Update Query - Updating 10 million events in the MS SQL data store

This test injected 10 million events into a properly indexed MS SQL database. The events were process monitoring events of approximately 180 bytes each. 3.6 million update queries were performed with a publishing throughput of 1,000 events per second over a period of one hour.

Throughput Graph

[Graph removed]

Summary Results

...

MySQL event store

Infrastructure used
  • c4.2xlarge (8 vCPU, 16GB RAM, EBS storage with 1000 Mbps max dedicated bandwidth) Amazon EC2 instances operated as the SP node and the TCP client.

    • Linux kernel 4.44, java version "1.8.0_131", JVM flags: -Xmx4g -Xms2g

  • db.m4.2xlarge (8 vCPU, 32 GB RAM, EBS-optimized storage with 100 Mbps max dedicated bandwidth) Amazon RDS instance with MySQL Community Edition version 5.7 operated as the database node.

  • Customized TCP client operated as the data publisher (TCP producer found in samples).

  • Experiments were carried out using MySQL Community Server 5.7.19.

Scenario 1: Insert Query - Persisting 12.2 million process monitoring events in MySQL

This test involved persisting process monitoring events of approximately 180 bytes each. The test injected 12.2 million events into WSO2 Stream Processor with a publishing throughput of 3,400 events per second over a period of one hour.

Throughput Graph

[Graph removed]

Latency Graph

[Graph removed]

Summary Results

...

MySQL Upper Limit

After about 12.2 million events are published, a sudden drop can be observed in the receiver performance. This number can be considered the upper limit of the MySQL event store with default settings. To continue receiving events without major performance degradation, data should be purged periodically from the event store before it reaches this upper limit.
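Periodic purging can be expressed in Siddhi itself. The sketch below is hypothetical: it reuses the placeholder MonitoringTable from the earlier sketches, and the trigger interval and retention period are arbitrary.

-- Fire a purge event every 15 minutes.
define trigger PurgeTrigger at every 15 min;

-- Delete rows older than one hour (3600000 ms) from the event store.
from PurgeTrigger
delete MonitoringTable
    on MonitoringTable.ts < (triggered_time - 3600000L);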

Scenario 2: Update Query - Updating 100K events in the MySQL data store

The test injected 100,000 events into a properly indexed MySQL database. The events were process monitoring events of approximately 180 bytes each. Three million update queries were performed with a publishing throughput of 500 events per second over a period of one hour.

Throughput Graph

[Graph removed]

Latency Graph

[Graph removed]

Summary Results

...

Conclusion

The performance test results indicate that the event persistence performance of WSO2 Stream Processor is primarily determined by the performance of the event store database.

The following sections present the results of the latest performance tests carried out for WSO2 Stream Processor.

Note: These performance statistics were taken while the load average was below 3.8 on the 4-core instance.



Consuming events using a Kafka source

Specifications for EC2 instances

  • Stream Processor: c5.xlarge

  • Kafka server: c5.xlarge

  • Kafka publisher: c5.xlarge

Siddhi application used

@App:name("HelloKafka")

@App:description('Consume events from a Kafka Topic and publish to a different Kafka Topic')

@source(type='kafka',
    	topic.list='kafka_topic',
    	partition.no.list='0',
    	threading.option='single.thread',
    	group.id="group",
    	bootstrap.servers='172.31.0.135:9092',
    	@map(type='json'))
define stream SweetProductionStream (name string, amount double);

@sink(type='log')
define stream KafkaSourceThroughputStream(count long);

from SweetProductionStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into KafkaSourceThroughputStream;

Results

  • Average Publishing TPS to Kafka: 1.1M

  • Average Consuming TPS from Kafka: 180K

Consuming messages from an HTTP source

Specifications for EC2 instances

  • Stream Processor: c5.xlarge

  • JMeter: c5.xlarge

Siddhi application used

@App:name("HttpSource")

@App:description('Consume events from http clients')

@source(type='http', worker.count='20', receiver.url='http://172.31.2.99:8081/service',
@map(type='json'))
define stream SweetProductionStream (name string, amount double);

@sink(type='log')
define stream HttpSourceThroughputStream(count long);

-- Log the average consuming rate (events per second) over 5-second batches.
from SweetProductionStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into HttpSourceThroughputStream;

Results

  • Average Publishing TPS to the HTTP source: 30K

  • Average Consuming TPS from the HTTP source: 30K

Sending HTTP requests and consuming the responses

Specifications for EC2 instances

  • Stream Processor: c5.xlarge

  • JMeter: c5.xlarge

  • Web server: c5.xlarge

Siddhi application used

@App:name("HttpRequestResponse")

@App:description('Consume events from an HTTP source, send requests to a web server, and consume the responses')

@source(type='http', worker.count='20', receiver.url='http://172.31.2.99:8081/service',
@map(type='json'))
define stream SweetProductionStream (name string, amount double);

@sink(type='http-request', sink.id='production-request', publisher.url='http://172.17.0.1:8688/netty_echo_server', @map(type='json'))
define stream HttpRequestStream (batchNumber double, lowTotal double);

@source(type='http-response' , sink.id='production-request', http.status.code='200',
@map(type='json'))
define stream HttpResponseStream(batchNumber double, lowTotal double);

@sink(type='log')
define stream FinalThroughputStream(count long);

@sink(type='log')
define stream InputThroughputStream(count long);

-- For each incoming event, send a fixed request payload to the web server.
from SweetProductionStream
select 1D as batchNumber, 1200D as lowTotal
insert into HttpRequestStream;

-- Log the average incoming rate (events per second) over 5-second batches.
from SweetProductionStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into InputThroughputStream;

-- Log the average response-consuming rate over 5-second batches.
from HttpResponseStream#window.timeBatch(5 sec)
select count(*)/5 as count
insert into FinalThroughputStream;


Results

  • Average Publishing TPS to the HTTP source: 29K

  • Average Publishing TPS from the HTTP request sink: 29K

  • Average Consuming TPS from the HTTP response source: 29K