The following link provides a WSO2 CEP sample:
https://docs.wso2.com/display/CEP310/Getting+Started+with+CEP
I worked through the document step by step and had no problems.
But I have a question about the following Siddhi queries:
define table pizza_deliveries (deliveredTime long, order_id string);
from deliveryStream
select time, orderNo
insert into pizza_deliveries;
from orderStream#window.time(30 seconds)
insert into overdueDeliveries for expired-events;
from overdueDeliveries as overdueStream unidirectional join pizza_deliveries
on pizza_deliveries.order_id == overdueStream.orderNo
select count(overdueStream.orderNo) as sumOrderId, overdueStream.customerName
insert into deliveredOrders;
In this execution plan, pizza_deliveries is defined as a table.
orderStream, deliveryStream, and deliveredOrders are defined in the document.
I can't find where or when "overdueDeliveries" is defined, but it works.
My questions are:
When or where is overdueDeliveries defined? Is it automatically generated?
And is overdueDeliveries a stream or a table?
overdueDeliveries is a stream, and it is defined implicitly by the Siddhi engine.
If you look at this query:
from orderStream#window.time(30 seconds)
insert into overdueDeliveries for expired-events;
In this query, all attributes coming through orderStream are added to the overdueDeliveries stream, and the Siddhi engine defines the stream with those attributes.
Similarly, if you write a query like the following:
from orderStream
select orderNo
insert into orderNumbersStream;
In this case the Siddhi engine will define a stream named orderNumbersStream with only the attribute 'orderNo', since it is explicitly selected. If there is no select statement, all attributes are added to the stream by default.
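For illustration, the implicitly defined streams above would be roughly equivalent to declaring them explicitly as below. This is only a sketch: the exact attribute list of overdueDeliveries depends on how orderStream is defined (not shown in the question), so the attributes here are assumptions based on the queries.
define stream orderNumbersStream (orderNo string);
define stream overdueDeliveries (orderNo string, customerName string);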
Also, orderStream, deliveryStream and deliveredOrders are streams. In Siddhi, events flow through 'streams'; you can think of a stream as a way to pass events from one query to one or more other queries.
Regarding tables: when you use a table, you have to define it explicitly with a define table statement, as done for pizza_deliveries in this execution plan.
I'm using QuestDB and SQL for the first time, and I stumbled upon the LATEST ON syntax used in QuestDB. Can someone explain its usage and where to use it?
Quoted from the docs:
For scenarios where multiple time series are stored in the same table, it is relatively difficult to identify the latest items of these time series with standard SQL syntax. QuestDB introduces LATEST ON clause for a SELECT statement to remove boilerplate clutter and splice the table with relative ease.
For more information, visit the official documentation.
LATEST ON is used to find the latest record for each unique time series in a table. See this page for some examples: https://questdb.io/docs/reference/sql/latest-on/
It gives you the latest available record for each combination of the PARTITION BY values, according to the ON timestamp.
It may be easier to understand with an example. If you go to https://demo.questdb.io you can execute this query:
select * from trades latest on timestamp
partition by symbol, side
It will then show you the latest existing row for each combination of Symbol and Side. If you wanted to do this using standard SQL, you would probably have to use a window function, something like this:
select * from
  (select *,
          ROW_NUMBER() over (partition by Symbol, Side
                             order by timestamp DESC) AS RowNumber
   from trades
   where timestamp > '2022-10-01') t
where t.RowNumber = 1
LATEST ON retrieves the latest entry by timestamp for a given key or combination of keys, for scenarios where multiple time series are stored in the same table.
Check this link for some examples: https://questdb.io/docs/reference/sql/latest-on/
I have a Target insert transformation where I'd like to do a delete on the row before insertion (a weird niche case that may pop up).
I know the update override allows :TU.xyz to point at incoming data, but Pre/Post SQL doesn't have the same configuration menu.
How would I accomplish this correctly?
From what I recall, Pre- and Post-SQL use a separate connection, so there is no way to refer to the incoming data.
One thing you could do is flag/store the key somewhere and use that flag/instance in the Post SQL query, for example.
Maciejg is correct: there is no dynamic use of Pre and Post SQL.
I would normally recommend an Upsert approach.
But I found that, with an MS SQL target, IICS has a bug with doing Insert and Update off a Router. The workaround of using a data-driven operation removes batch loading on your insert, so I now recommend a full data load approach.
From a target with the operation set to Insert, I do batch deletes with Pre SQL.
I found this faster and more cost-effective than doing delete/insert/update operations individually.
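For example, the Pre SQL delete can look roughly like this. This is only a sketch: TARGET_TABLE, STAGE_TABLE and KEY_COL are placeholder names, and it assumes the incoming keys were first landed in a staging table so the Pre SQL can reference them (in line with the flag/store-the-key suggestion above).
-- placeholders: TARGET_TABLE, STAGE_TABLE, KEY_COL
DELETE FROM TARGET_TABLE
WHERE KEY_COL IN (SELECT KEY_COL FROM STAGE_TABLE);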
I am reading JSON files from GCS and I have to load the data into different BigQuery tables. These files may have multiple records for the same customer with different timestamps. I have to pick the latest among them for each customer. I am planning to achieve this as below:
Read files
Group by customer id
Apply a DoFn to compare the timestamps of the records in each group and keep only the latest one
Flatten it, convert to table rows, and insert into BQ.
But I am unable to proceed past step 1: I see GroupByKey.create() but am unable to make it use the customer id as the key.
I am implementing this in Java. Any suggestions would be of great help. Thank you.
Before you apply GroupByKey, you need to have your dataset in key-value pairs. It would be good if you had shown some of your code, but without knowing much, you'd do something like the following:
// Pseudocode: read the files and parse each line into a JsonObject
PCollection<JsonObject> objects = p.apply(FileIO.read(....)).apply(FormatData...)

// Once we have the data in JsonObjects, we key by customer ID and group:
PCollection<KV<String, Iterable<JsonObject>>> groupedData =
    objects
        .apply(MapElements
            .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptor.of(JsonObject.class)))
            .via(elm -> KV.of(elm.getString("customerId"), elm)))
        .apply(GroupByKey.create());
Once that's done, you can check timestamps and discard all but the most recent, as you were thinking.
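For example, a simple DoFn along these lines keeps only the latest record per customer. This is just a sketch: it assumes each record has a numeric "timestamp" field (your schema wasn't shown), and it uses the same JsonObject type as above.
// Sketch: keep only the record with the largest "timestamp" per customer
PCollection<KV<String, JsonObject>> latestPerCustomer = groupedData.apply(
    ParDo.of(new DoFn<KV<String, Iterable<JsonObject>>, KV<String, JsonObject>>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        JsonObject latest = null;
        for (JsonObject record : c.element().getValue()) {
          long ts = record.getJsonNumber("timestamp").longValue();  // assumed field name
          if (latest == null || ts > latest.getJsonNumber("timestamp").longValue()) {
            latest = record;
          }
        }
        if (latest != null) {
          c.output(KV.of(c.element().getKey(), latest));
        }
      }
    }));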
Note that you will need to set coders, etc. If you get stuck with that, we can iterate.
As a hint/tip, you can consider this example of a Json Coder.
I have an existing HANA warehouse which was built without create/update timestamps. I need to generate a number of nightly batch delta files to send to another platform. My problem is how to detect which records are new or changed so that I can capture those records within the replication process.
Is there a way to use HANA's built-in features to detect new/changed records?
SAP HANA does not provide a general change data capture interface for tables (up to the current version, HANA 2 SPS 02).
That means that, to detect "changed records since a given point in time", some other approach has to be taken.
Depending on the information in the tables, different options can be used:
if a table explicitly contains a reference to the last change time, this can be used
if a table has guaranteed update characteristics (e.g. no in-place updates and monotonically increasing ID values), this can be used: e.g. read all records whose ID is larger than the last processed ID (see the SQL sketch below)
if the table does not provide intrinsic information about change time, then one could maintain a copy of the table that contains only the records processed so far. This copy can then be used to compare against the current table and compute the difference. SAP HANA's Smart Data Integration (SDI) flowgraphs support this approach.
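For the ID-based option, the nightly delta extraction could look roughly like the following. This is only a sketch: MY_TABLE, DELTA_CONTROL and the column names are made-up placeholders for your own objects, and the control table has to be updated with the maximum extracted ID after each run.
-- placeholders: MY_TABLE, ID, DELTA_CONTROL, TABLE_NAME, LAST_PROCESSED_ID
SELECT *
  FROM MY_TABLE
 WHERE ID > (SELECT LAST_PROCESSED_ID
               FROM DELTA_CONTROL
              WHERE TABLE_NAME = 'MY_TABLE');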
In my experience, efforts to try to "save time and money" on this seemingly simple problem of a delta load usually turn out to be more complex, time-consuming and expensive than using the corresponding features of ETL tools.
It is possible to create a log table with columns organized according to your needs, so that triggers on your database tables write log records with timestamp values. Then you can query your log table to determine which records were inserted, updated or deleted in your source tables.
For example, the following is from one of my test triggers:
CREATE TRIGGER "A00077387"."SALARY_A_UPD"
AFTER UPDATE ON "A00077387"."SALARY"
REFERENCING OLD ROW MYOLDROW, NEW ROW MYNEWROW
FOR EACH ROW
BEGIN
    INSERT INTO SalaryLog (Employee, Salary, Operation, DateTime)
    VALUES (:mynewrow.Employee, :mynewrow.Salary, 'U', CURRENT_DATE);
END;
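The trigger above assumes a log table roughly like the following. This is only a sketch: the column types are guesses and should be adjusted to match the source table.
-- sketch of the SalaryLog table used by the trigger above; types are assumptions
CREATE COLUMN TABLE SalaryLog (
    Employee  NVARCHAR(20),
    Salary    DECIMAL(15,2),
    Operation NVARCHAR(1),  -- 'I', 'U' or 'D'
    DateTime  DATE          -- matches CURRENT_DATE in the trigger; use TIMESTAMP and CURRENT_TIMESTAMP if you need the time of day
);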
You can create AFTER INSERT and AFTER DELETE triggers as well, similar to the AFTER UPDATE one.
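For example, a matching AFTER DELETE trigger could look roughly like this (a sketch based on the UPDATE trigger above; the trigger name is illustrative, and note that it reads from the OLD row):
CREATE TRIGGER "A00077387"."SALARY_A_DEL"
AFTER DELETE ON "A00077387"."SALARY"
REFERENCING OLD ROW MYOLDROW
FOR EACH ROW
BEGIN
    INSERT INTO SalaryLog (Employee, Salary, Operation, DateTime)
    VALUES (:myoldrow.Employee, :myoldrow.Salary, 'D', CURRENT_DATE);
END;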
You can organize your log table so that it can track more than one source table if you wish, just by keeping the table name, PK fields and values, operation type, timestamp values, etc.
But it is better and easier to use separate log tables for each table.
I want to implement the scenario below without using a PL/SQL procedure or a trigger.
I have a table called emp_details with columns (empno, ename, salary, emp_status, flag, date1).
If someone updates the columns so that emp_status='abc' and flag='y', Informatica WF 1, running continuously, should check for records with emp_status value "ABC".
If it finds a record or records, it should query all of them and invoke WF 2.
WF 1 will pass the values ename, salary and date1 to WF 2 (WF 2 will insert the records into the table emp_details2).
How can I do this using an Informatica approach instead of PL/SQL or a trigger?
If you want to achieve this in real time, write the output of WF1 to a message queue and have the second workflow, WF2, subscribe to the message queue produced by WF1.
If you have a batch process in place, produce an output file from WF1 and use this output file in WF2. You can easily set up this dependency using job schedulers.
I don't understand why you need two workflows in the first place. Why not accomplish the emp_details2 table updates with the very same workflow that is looking for differences?
Anyway, this can be done using an indicator file:
WF1, running continuously, should create a file if any changes have been found.
WF2 should be running continuously with an EventWait set to wait for the indicator file specified above. Once the file is found, it should use the Assignment Task to rename/delete the file, then fetch the desired data from the source and populate the emp_details2 table.
If you need it this way, you can pass the data through the indicator file.
You can do this in a single workflow: create a dummy session which checks for the flag in the table, and after it divide the flow into two based on the link conditions below:
Flow one: link condition Session.Status=SUCCEEDED and SOURCE_SUCCESS_ROWS (count) >= 1; then run your actual session, which will load the data.
Flow two: link condition Session.Status=SUCCEEDED and SOURCE_SUCCESS_ROWS=0; connect this to a Control task and mark the workflow as complete.
Make sure you schedule the workflow at the Informatica level to run continuously.
Cheers