WSO2 DAS SiddhiQL: Dynamic event table / persist event stream

I would like to know if WSO2 Data Analytics Server allows you to define dynamic event tables or dynamic streams.
For example, imagine an event represents a car, and one of its attributes is the 'brand' of the car (Ford, Mercedes, Audi ...).
I would like to add a column to my table each time a new, different brand appears.
So, if I receive an event with the brand 'Toyota', a 'Toyota' column would be added to my table.
Considering that I don't know in advance the number of different brands I will receive, I need this to be dynamic.

Dynamically changing the schema of an event table is not possible.
This is because the schema of the event table is defined when the Siddhi execution plan is deployed. Once it is deployed, the schema cannot be changed.
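For illustration, here is a minimal sketch of how an event table is typically declared in an execution plan (the stream, table and attribute names below are placeholders, not taken from your scenario). The columns of CarTable are fixed by the define table statement at deployment time, and there is no ALTER-style statement that could add a 'Toyota' column afterwards:

/* the schema is fixed once the execution plan is deployed */
define stream CarStream (brand string, model string, price double);
define table CarTable (brand string, model string, price double);

/* store every incoming car event in the table */
from CarStream
insert into CarTable;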
On the other hand, it looks like what you need here is not an event table.
Perhaps what you need to do is update the schema of a table (an RDBMS table) when a certain event happens (for example, when a car event arrives with a new brand). Do you use this updated table in your Siddhi execution plan? If you do not use it, then you do not need an event table.
Please correct me if I have misunderstood your requirement.
If your requirement is to update the schema of a table when a certain event happens, then you might need to write a custom event publisher to do that. If so, please refer to the documentation on the subject: Building Custom Event Publishers.

Related

How does Google BigQuery handle table updates with missing fields?

I'm interested in using a streaming pipeline from Google Pub/Sub to BigQuery, and I would like to know how it would handle a case where an updated JSON object is sent with missing fields/branches that are already present in the BigQuery table/schema. For example, will it set the value in the table to empty/null, retain what's in the table and update only the fields/branches that are present, or simply fail because the sent object does not match the schema one-to-one?

Best practice of using Dynamo table when it needs to be periodically updated

In my use case, I need to periodically update a Dynamo table (say, once per day). Considering that lots of entries need to be inserted, deleted or modified, I plan to drop the old table and create a new one.
How could I keep the table queryable while I recreate it? Which API should I use? It's fine if queries keep hitting the old table in the meantime, so that customers won't experience any outage.
Is it possible to have something like a version number for the table so that I could roll back quickly?
I would suggest using table names with a common base plus a varying suffix (some people use a date, others use a version number).
Store the usable DynamoDB table name in a configuration store (if you are not already using one, you could use Secrets Manager, SSM Parameter Store, another DynamoDB table, a Redis cluster or a third party solution such as Consul).
Automate the creation of a new DynamoDB table and the insertion of data into it. Then update the config store with the name of the newly created DynamoDB table. Allow enough time for the switchover, then remove the previous DynamoDB table.
You could do the final part by using Step Functions to automate the workflow, with a Wait state of a few hours to ensure that nothing is still using the old table; in fact, you could even add a Lambda function that validates whether any traffic is still hitting the old DynamoDB table.

WSO2 CEP Event Tables - How to see the records on an event table

I am trying to check the events inside of an event table without joining it with an incoming stream of data.
Is this even possible in WSO2 CEP?
The following is not possible:
from event_table select * insert into print_output_stream;
Is it possible to check the records in a WSO2 event table, perhaps via something like a file, or a tool like SQL Server Management Studio?
To my knowledge, it is not possible to read an (in-memory) event table without a JOIN, because:
When it comes to event processing, an action is taken upon the arrival of an event. In other words, a query is written to be executed when an event arrives.
Therefore, an action (in this case, reading the event table) can only be taken upon the arrival of an event.
Hence, a query which does not get triggered by an event arrival cannot exist.
As such, you will need a stream which triggers the action of reading from the event table (say trigger_stream).
When an event arrives at trigger_stream, you can read the event table by joining the event against the records in the event table unconditionally. In other words, you can omit the ON condition of the JOIN statement. By doing this, you will get all rows from the event table.
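As a minimal sketch, reusing the stream and table names from your question (the attributes of event_table and the dummy attribute of trigger_stream are assumptions for illustration), the execution plan could look like this:

define stream trigger_stream (dummy string);
define table event_table (brand string, model string);

/* no ON condition, so every trigger event pulls back all rows of the table */
from trigger_stream join event_table
select event_table.brand, event_table.model
insert into print_output_stream;

You could then, for example, attach a logger event publisher to print_output_stream to see the rows in the server console. If you do not want to send the trigger events yourself, Siddhi also supports periodic triggers (define trigger trigger_stream at every 5 sec) in place of the stream definition.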
Reading the event table for debugging purposes:
If your intention in reading the event table is to debug your Siddhi script, you can remote-debug Siddhi while running the WSO2 CEP server.

Replay events from Akka.net Persistence Journal

I'm implementing a CQRS/ES solution with Akka.Net and Akka.Net.Persistence with a SQL Server Journal. So far everything seems to work great with the default sql-server plugin.
The last thing to verify was the ability to reload/replay events from a specific AR, e.g. to rebuild a read model or to fill a newly implemented projection for a read model. The way I would go about this is reading the events from the DB and putting them on the event bus or directly into the mailbox of the "projection actor".
I can't seem to find any examples of manually reloading events, and besides querying the Journal table myself (executing a SQL query) and using the built-in serializer, I'm basically stuck with this.
Is there anyone trying to do, more or less, the same thing?
Depending on your needs, there are a few ways:
Using PersistentView - it's a dedicated actor, which is correlated with one specific persistent actor and is able to receive its events in order to build some different state from them. It's read-only. Pros: it keeps itself up to date with events produced by the persistent actor (although some delay between updates applies). Cons: it's tied to a single actor; it cannot be used to aggregate events from many of them.
Using a journal query (SQL journals only) - it allows you to query the journal using some specific filters. Pros: it can be used across multiple aggregates. Cons: it's not automatically kept up to date; you need to send subsequent queries to get updates. I'm not sure if it has official documentation, but the flow itself is described here.
PS: some of the journal implementations have their own dedicated serializers, but not the SQL-based ones. And believe me, you never want to rely on the default serializer for persisting events.

Multiple SqlDependencies firing the same OnChange Event

I have a product catalog with a few hundred categories in it, and I am dynamically creating an SqlDependency for each category in the catalog. The SqlCommands that these dependencies are based on differ only in the categoryID. The problem is that I want all these dependencies to perform different actions depending on the SqlDependency that fired them. How can I do that? Do I have to create a different OnChange event handler for each SqlDependency? Is there a way for all these dependencies to fire the same OnChange event, with the event knowing which dependency fired it or receiving a parameter passed in during the dependency's creation?
This problem arose while trying to create a SqlDependency mechanism that will work with AppFabric Cache.
Thank you in advance.
See if you can look into the cache tables that ASP.NET is creating for you and the triggers that are being created on the original tables. Once you see what is going on, you can create the tables and triggers yourself and implement the caching through your own ASP.NET code. It really is not that hard. Then, instead of reacting when the whole table is updated (as with SqlDependency), you can react when relevant rows in that table are updated, refreshing the relevant cache or running whatever unique actions you want. You are better off doing it yourself once you learn how!