The properties of the above event stream definition are described below.
| Property | Description |
|---|---|
| Event Stream Name | Name of the event stream. |
| Event Stream Version | Version of the event stream. (The default value is 1.0.0.) |
| Event Stream Description | Description of the event stream. (Optional.) |
| Event Stream Nick-Name | Nick-names of the event stream, separated by commas. (Optional.) |
Stream attributes contain the data of the event. The data is divided into the following three logical categories for maintainability and usability. Attributes are not required for all three categories, but at least one category must have at least one attribute defined. Attribute names must be unique within each category.
For example, the following attributes exist in a single event and can be logically categorized as follows.
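The three categories appear directly in a stream definition. As a minimal sketch (the stream name and attributes here are hypothetical), a WSO2 CEP/DAS stream definition in JSON groups attributes under metaData, correlationData, and payloadData:

```json
{
  "name": "org.sample.stock.quotes",
  "version": "1.0.0",
  "nickName": "StockQuotes",
  "description": "Stock quote events",
  "metaData": [
    { "name": "timestamp", "type": "LONG" }
  ],
  "correlationData": [
    { "name": "activityId", "type": "STRING" }
  ],
  "payloadData": [
    { "name": "symbol", "type": "STRING" },
    { "name": "price", "type": "FLOAT" }
  ]
}
```

Note that each category may be empty or omitted, as long as at least one attribute is defined overall and names are unique within each category.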
Adding an event stream
WSO2 CEP/DAS supports the following default and custom event formats.
Default event formats
By default, WSO2 CEP/DAS represents an event as a WSO2Event object. It also supports events in XML, JSON, Text, and Map formats. The default XML, JSON, Text, and Map representations of the following sample event stream definition are as follows.
Sample event stream definition
Default XML format
Default JSON format
Default text format
Default map format
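For reference, the default JSON representation nests attribute values under the three attribute categories. The following is an illustrative sketch for a hypothetical stream with one meta attribute, one correlation attribute, and two payload attributes (the attribute names and values are not from the sample definition above):

```json
{
  "event": {
    "metaData": { "timestamp": 1439468145264 },
    "correlationData": { "activityId": "abc-123" },
    "payloadData": { "symbol": "WSO2", "price": 55.6 }
  }
}
```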
Custom event formats
If you receive or publish events in a format other than the default format, you need to provide appropriate mappings for the system to interpret the events.
@from(eventtable = 'analytics.table', table.name = <analytics_table_name>, primary.keys = <primary_keys>, indices = <indices>, wait.for.indexing = <wait_for_indexing_flag>, merge.schema = <merge_schema_flag>) define table <EventTableName> (<schema>);
| Field Name | Description | Required | Default Value |
|---|---|---|---|
| table.name | The name of the analytics table. This can be an existing table; otherwise, a new table is created. | Yes | |
| primary.keys | The comma-separated list of fields to be used as the primary keys of the table. This is useful when lookup operations are done only with primary key values, which is the most efficient way to execute them. | | |
| indices | The comma-separated list of index fields. Each entry has the format "<index_column_name> -sp", where "-sp" is an optional property indicating that the index column should be treated as a score parameter. | | |
| wait.for.indexing | Indexing operations on analytics tables happen asynchronously. If the event table data changed by events arriving from a specific stream needs to be finalized, setting this flag to 'true' waits until the background indexing finishes before continuing with the execution of the flow. | | |
| merge.schema | When an existing table is given to the analytics event table and this flag is set to 'true', the existing schema and the given schema are merged, i.e., their columns and indexing information are combined. If set to 'false', the schema given by the analytics event table overwrites the existing one. | | |
| caching | Enables caching for the analytics table. Data looked up from the analytics table is cached, keeping the most recently accessed entries within the capacity and timeout limits of the cache. | | |
| cache.timeout.seconds | The timeout of the cache entries, in seconds. | No | 10 |
| cache.size.bytes | The maximum capacity of the cache, in bytes. | No | 10485760 |
@from(eventtable = 'analytics.table', table.name = 'stocks', primary.keys = 'symbol', indices = 'price, volume -sp', wait.for.indexing = 'true', merge.schema = 'false') define table StockTable (symbol string, price float, volume long);
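An event table defined this way can then be queried from the same execution plan. As an illustrative sketch (the stream names here are hypothetical), a Siddhi query that joins an incoming stream against StockTable to look up the stored price and volume for a symbol could look like:

```
define stream StockCheckStream (symbol string);

from StockCheckStream join StockTable
    on StockCheckStream.symbol == StockTable.symbol
select StockCheckStream.symbol, StockTable.price, StockTable.volume
insert into EnrichedStockStream;
```

Because 'symbol' is declared as a primary key above, lookups on it are the most efficient way to query the table.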