WSO2 DAS SiddhiQL: Initialize an event table and update it after

I have some related questions:
First, I want to initialize an event table with default values and 100 rows, like in this picture:
Second, once the initialization is done, I would like to update this table. How can I, within the same execution plan, execute query2 once query1 has finished?
Finally, I have an event with an 'altitude' attribute. In my execution plan, for each event, I want to increment count1 in every row of my event table where the num column is smaller than the altitude. I tried the following, but it doesn't increment the count of all rows.
FROM inputStream JOIN counterTable
SELECT count1+1 as count1, altitude as tempNum
update counterTable on counterTable.count1 < tempNum;
FROM inputStream JOIN counterTable
SELECT counterTable.num as theAltitude, counterTable.count1 as countAltitude
INSERT INTO outputStream;

If you want to initialize each time the execution plan gets deployed, you should use an in-memory event table (as shown below). Otherwise, you can simply use an RDBMS event table, where it's already been initialized.
Queries will run in the order they are defined, but that process occurs per event, not as a batch - i.e., if there are two queries that consume from inputStream, when event 1 comes into inputStream it goes to query 1, then to query 2, and only then will event 2 get consumed.
Refer to the snippet below:
/* Define the trigger to be used with initialization */
define trigger triggerStream at 'start';
/* Define streams */
define stream inputStream (altitude int);
define stream outputStream (theAltitude long, countAltitude int);
/* Define table */
define table counterTable (num long, count1 int, count2 int, tempNum int);
/* Iterate and generate 100 events */
from triggerStream[count() < 100]
insert into triggerStream;
/* Using above 100 events, initialize the event table */
from triggerStream
select count() as num, 0 as count1, 0 as count2, 0 as tempNum
insert into counterTable;
/* Perform the update logic here */
from inputStream as i join counterTable as c
on c.count1 < i.altitude
select c.num, (c.count1 + 1) as count1, c.count2, altitude as tempNum
insert into updateStream;
from updateStream
insert overwrite counterTable
on counterTable.num == num;
/* Join the table and get the updated results */
from inputStream join counterTable as c
select c.num as theAltitude, c.count1 as countAltitude
insert into outputStream;
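The per-event ordering described above (each event flows through every query in definition order before the next event is consumed) can be illustrated with a plain Python sketch - this is pseudocode for the scheduling behaviour, not Siddhi:

```python
events = [1, 2, 3]
log = []

def query1(e):
    log.append(("q1", e))

def query2(e):
    log.append(("q2", e))

# Siddhi consumes one event at a time; each event runs through
# every query (in definition order) before the next event starts.
for e in events:
    for q in (query1, query2):
        q(e)

# log -> [('q1', 1), ('q2', 1), ('q1', 2), ('q2', 2), ('q1', 3), ('q2', 3)]
```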

Table values can be initialized as follows.
#info(name='initialize table values')
from inputStream[count()==1]
select 1 as id, 0 as counter1, 0 as counter2, 0 as counter3
insert into counterTable;

Related

MariaDB: how to find the longest sequence of consecutive numbers

I have a table describing members and when they were on runs e.g.:
memberid(varchar), RunNo(integer)
"1017",1868
"1017",1875
"1017",1877
"1017",1878
"1017",1879
"1017",1880
"1017",1882
"1017",1884
"1017",1885
"1017",1886
"1017",1887
"1017",1889
"1017",1894
"1017",1895
"1017",1896
"1017",1897
"1017",1902
"1017",1903
"1017",1904
"1017",1906
"1017",1907
"1017",1909
"1017",1910
"1017",1911
"1017",1929
"1017",1930
"1017",1931
"1017",1934
"1017",1935
"1079",1840
"1079",1844
"1079",1846
"1079",1847
"1079",1850
"1079",1854
"1079",1857
"1079",1859
"1079",1861
"1079",1863
"1079",1865
"1079",1866
"1079",1869
"1079",1870
"1079",1871
"1079",1872
"1079",1873
"1079",1874
"1079",1875
"1079",1876
"1079",1877
"1079",1878
"1079",1879
"1079",1880
"1079",1882
"1079",1884
"1079",1885
"1079",1886
"1079",1889
"1079",1890
"1079",1891
"1079",1893
"1079",1895
"1079",1897
"1079",1902
"1079",1903
"1079",1904
"1079",1905
"1079",1907
"1079",1908
"1079",1910
"1079",1911
"1079",1923
For each memberid, I would like to find the longest consecutive sequence of run numbers; if there are several sequences of the same length, I want the latest one (the RunNos are in date order).
For example 1017 has a maximum of 4 runs in a row and 1079 has a maximum of 12.
There should be a way of solving this but I have not been able to find a solution.
I am using MariaDB v10.4.22 on Windows 10.
This is a problem that can be solved with recursive CTEs like below:
WITH RECURSIVE runlength AS (
    SELECT memberId AS id,
           RunNo + 1 AS next,
           1 AS length
    FROM members
    UNION ALL
    SELECT members.memberId,
           RunNo + 1,
           length + 1
    FROM members
    JOIN runlength ON members.memberId = runlength.id
                  AND members.runNo = next
)
SELECT id,
       max(length)
FROM runlength
GROUP BY id;
The anchor populates runlength with one row per input row, each with length 1 and next, the run number that would extend the sequence:
SELECT memberId AS id,
RunNo + 1 AS next,
1 AS length
FROM members
The recursive part looks for the next part of the sequence and increases the length:
SELECT members.memberId,
RunNo+1,
length+1
FROM members
JOIN runlength ON members.memberId = runlength.id
AND members.runNo = next
Finally, as this will have far too many short sequences, we only want the maximum per user:
SELECT id,
max(length)
FROM runlength
GROUP BY id;
Note: Because of the large number of generated rows this is by no means an efficient query and should only be used for small datasets.
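The logic the CTE implements can be cross-checked with a short procedural sketch (Python, using the sample data from the question):

```python
def longest_run(nums):
    """Length of the longest run of consecutive integers (nums already in order)."""
    best = cur = 1
    for prev, nxt in zip(nums, nums[1:]):
        cur = cur + 1 if nxt == prev + 1 else 1
        best = max(best, cur)
    return best

runs = {
    "1017": [1868, 1875, 1877, 1878, 1879, 1880, 1882, 1884, 1885, 1886, 1887,
             1889, 1894, 1895, 1896, 1897, 1902, 1903, 1904, 1906, 1907, 1909,
             1910, 1911, 1929, 1930, 1931, 1934, 1935],
    "1079": [1840, 1844, 1846, 1847, 1850, 1854, 1857, 1859, 1861, 1863, 1865,
             1866, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878,
             1879, 1880, 1882, 1884, 1885, 1886, 1889, 1890, 1891, 1893, 1895,
             1897, 1902, 1903, 1904, 1905, 1907, 1908, 1910, 1911, 1923],
}
result = {m: longest_run(r) for m, r in runs.items()}
# result -> {'1017': 4, '1079': 12}
```

This matches the expected answers stated in the question (4 for 1017, 12 for 1079).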

Coldfusion/Lucee - Loop over 3D array to insert into database using multiple inserts using one query

I know the title is a mouthful - sorry about that, but I'm trying to be specific here.
DB: MySQL (technically MariaDB)
ColdFusion (technically Lucee 5.x)
The array looks like the following:
NOTE: the outermost array only shows part of 2 and could continue into the 30's.
I'm looking to loop over the array and insert the elements marked as "string" in the image into the database using one query. The query has been trimmed for the sake of clarity and conciseness:
for (outer = 1; outer <= ArrayLen(myArray); outer++) {
    local.currentrow = local.currentrow + 1;
    for (inner = 1; inner <= ArrayLen(myArray[outer]); inner++) {
        local.sql = "
            INSERT INTO table (uuid, typeID, menuId, activityID, userID)
            VALUES (
                '#local.uuid#',
                #myArray[outer][inner][1]#,
                #myArray[outer][inner][2]#,
                #myArray[outer][inner][3]#,
                #arguments.formDataStruct.userID#
            )";
        queryExecute(local.sql);
    }
}
I'm looking for something along this line but as written, it isn't working:
local.sql = "
INSERT INTO table (uuid, typeID, menuId, activityID, userID)
VALUES (
if (local.currentrow gt 1) {
,
}
for (outer = 1; outer <= ArrayLen(myArray); outer++) {
local.currentrow = local.currentrow + 1;
for (inner = 1; inner <= ArrayLen(myArray[outer]); inner++) {
'#local.uuid#',
'#myArray[outer][inner][1]#',
'#myArray[outer][inner][2]#',
'#myArray[outer][inner][3]#',
'#arguments.formDataStruct.userID#'
}
})
";
queryExecute(local.sql);
The error message I'm getting is
Element at position [1] doesn't exist in array
but if I perform a writeDump of, e.g., [1][3][3], I get the value 24.
I would recommend against looping over an INSERT statement and rather just loop over VALUES to generate a single INSERT statement. A single INSERT will perform significantly faster, plus it will minimize the connections to your database.
Build out the list of values with something like:
for (var outer in arguments.inArray) {
    for (var inner in outer) {
        // Concat elements of inner array to a SQL VALUE string. If UUID is supposed to be a unique identity for the row,
        // use Maria's uuid() instead of CF (or skip the UUID insert and let Maria do it).
        // inArray elements and inUserID should be sanitized.
        local.values &= "( uuid(), '" & inner[1] & "','" & inner[2] & "','" & inner[3] & "'," & local.userID & ")," ;
    }
}
local.values = left(local.values,len(local.values)-1) ; // Get rid of the last comma.
local.sql = "INSERT INTO table (uuid, typeID, menuId, activityID, userID) VALUES " & local.values ;
After you've built up the SQL INSERT string, execute the query to INSERT the records. (You would probably build the above function differently to handle building the query string and parameters and then executing it all in one place.)
Don't forget to sanitize your array and other inputs. Does the array come from a source you control or is it user input?
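The same idea in a language-neutral sketch (Python here; the table and column names come from the question, but the nested array is made-up sample data). Using bound parameters instead of string concatenation also addresses the sanitization concern:

```python
# Hypothetical nested array shaped like myArray: outer groups of
# [typeID, menuId, activityID] rows.
my_array = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9]]]
user_id = 42

placeholders, params = [], []
for outer in my_array:
    for inner in outer:
        # uuid() is generated server-side; the rest are bound parameters.
        placeholders.append("(uuid(), ?, ?, ?, ?)")
        params.extend([inner[0], inner[1], inner[2], user_id])

# One INSERT statement covering every row.
sql = ("INSERT INTO table_name (uuid, typeID, menuId, activityID, userID) VALUES "
       + ", ".join(placeholders))
```

You would then execute `sql` once with `params` as the bound values - one round trip instead of one INSERT per row.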
https://trycf.com/gist/7ad6af1e84906b601834b0cc5ff5a392/lucee5?theme=monokai
http://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=d11f45f30723ba910c58a1e3f7a7c30b

Hash Join not behaving as required

I am using a hash join on some sample data to join a small table on a larger one. In this example '_1080544_27_08_2016' is the larger table and '_2015_2016_playerlistlookup' the smaller one. Here is my code:
data both(drop=rc);
declare Hash Plan
(dataset: 'work._2015_2016_playerlistlookup'); /* declare the name Plan for hash */
rc = plan.DefineKey ('Player_ID'); /* identify fields to use as keys */
rc = plan.DefineData ('Player_Full_Name',
'Player_First_Name', 'Player_Last_Name',
'Player_ID2'); /* identify fields to use as data */
rc = plan.DefineDone (); /* complete hash table definition */
do until (eof1) ; /* loop to read records from _1080544_27_08_2016 */
set _1080544_27_08_2016 end = eof1;
rc = plan.add (); /* add each record to the hash table */
end;
do until (eof2) ; /* loop to read records from _2015_2016_playerlistlookup */
set _2015_2016_playerlistlookup end = eof2;
call missing(Player_Full_Name,
Player_First_Name, Player_Last_Name); /* initialize the variable we intend to fill */
rc = plan.find (); /* lookup each plan_id in hash Plan */
output; /* write record to Both */
end;
stop;
run;
This produces a table with the same number of rows as the smaller lookup table. What I would like to see is a table the same size as the larger one, with the additional fields from the lookup table joined on via the primary key.
The larger table has repeating primary keys. That is to say the primary key is not unique (based on row number for example).
Can someone please tell me what I need to amend in the code?
Thanks
You are loading both datasets into your hash object - the small one when you declare it, and then the large one as well in your first do-loop. This makes no sense to me, unless you have lookup values already populated for some but not all of the rows in your large dataset, and you are trying to carry them over between rows.
You are then looping through the lookup dataset and producing 1 output row for each row of that dataset.
It is unclear exactly what you are trying to do here, as this is not a standard use case for hash objects.
Here's my best guess - if this isn't what you're trying to do, please post sample input and intended output datasets.
data want;
    set _1080544_27_08_2016;
    if 0 then set _2015_2016_playerlistlookup;
    if _n_ = 1 then do;
        declare Hash Plan(dataset: 'work._2015_2016_playerlistlookup');
        rc = plan.DefineKey ('Player_ID');
        rc = plan.DefineData ('Player_Full_Name', 'Player_First_Name', 'Player_Last_Name', 'Player_ID2');
        rc = plan.DefineDone ();
    end;
    call missing(Player_Full_Name, Player_First_Name, Player_Last_Name);
    rc = plan.find();
    drop rc;
run;
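For intuition, the corrected pattern is just a keyed in-memory lookup driven off the large table. A rough Python analogue (with made-up sample rows standing in for the two datasets) looks like this:

```python
# Hypothetical rows standing in for the small SAS lookup dataset.
lookup_rows = [
    {"Player_ID": 101, "Player_Full_Name": "Ann Smith",
     "Player_First_Name": "Ann", "Player_Last_Name": "Smith"},
]
# Hypothetical rows standing in for the large dataset (Player_ID repeats).
large_rows = [
    {"Player_ID": 101, "RowNo": 1},
    {"Player_ID": 101, "RowNo": 2},
    {"Player_ID": 999, "RowNo": 3},
]

# Load only the SMALL table into the hash (a dict), keyed by Player_ID...
plan = {row["Player_ID"]: row for row in lookup_rows}

missing = {"Player_Full_Name": None, "Player_First_Name": None,
           "Player_Last_Name": None}
# ...then drive the step off the LARGE table: one output row per input row,
# with lookup fields attached when plan.find() succeeds and blank otherwise.
want = [{**row, **plan.get(row["Player_ID"], missing)} for row in large_rows]
```

Note the output has as many rows as the large table, and duplicate keys in the large table are fine, which is exactly what the question asks for.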

CDaoRecordSet select all from column

How can I store all the records of a column in a CDaoRecordSet? I've tried this, but it only returns the first record of that column:
rs.Open(dbOpenSnapshot, "SELECT Numar_inmatriculare FROM Masini");
short nFields = rs.GetFieldCount();//returns 1
If I run "SELECT count(*) AS Numar_inmatriculare FROM Masini" and use rs.GetFieldValue(0), it returns 13, the correct number of records.
GetFieldCount returns the number of columns in your resultset.
To iterate through the records (=rows), you have to call MoveNext until IsEOF() returns true.
rs.Open(dbOpenSnapshot, "SELECT Numar_inmatriculare FROM Masini");
while (!rs.IsEOF())
{
    COleVariant value = rs.GetFieldValue(0); // Numar_inmatriculare for the current row
    rs.MoveNext();
}

sqlite: How to get the column id of the selected column?

Given:
an sqlite database with a table T
the table T contains 10 columns - C0, C1 ... C9.
an sqlite3_stmt pointer corresponding to select C3,C2 from T
OK, so I can fetch the selected column values using the sqlite3_column_XXX family of methods (http://www.sqlite.org/capi3ref.html#sqlite3_column_blob), like this:
sqlite3_stmt *s;
sqlite3_prepare_v2(db, query, sizeof(query), &s, NULL);
while ((result = sqlite3_step(s)) == SQLITE_ROW)
{
const char *v3 = reinterpret_cast<const char *>(sqlite3_column_text(s, 0));
const char *v2 = reinterpret_cast<const char *>(sqlite3_column_text(s, 1));
}
What I need is the real index of the selected columns, i.e. 3 for v3 and 2 for v2.
Motivation: I want to be able to parse the returned string value into the real column type. Indeed, my schema says that c3 is a datetime, which sqlite treats as TEXT. So, sqlite3_column_type(s, 0) returns SQLITE3_TEXT, but the table metadata (available from pragma table_info(T)) retains the string datetime, which is the intended type of the column. Knowing it, I can parse the returned string into the respective unix time since the epoch, for instance.
But how can I map the query column index to the table column index:
query column 0 -> table column 3
query column 1 -> table column 2
Thanks.
You could use the SQLite C function sqlite3_column_decltype to get the declared column type straight from the result statement. It doesn't specifically answer your question (getting the original column's index), but it could be an alternative way to achieve what you need.
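If the query selects plain table columns (no expressions or aliases), you can also build the query-column to table-column mapping yourself from the table metadata. A sketch using Python's sqlite3 module - the same idea carries over to the C API via sqlite3_column_name plus pragma table_info:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (C0 INT, C1 INT, C2 INT, C3 DATETIME)")
cur = con.execute("SELECT C3, C2 FROM T")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# map table column name -> (table index, declared type).
meta = {name: (cid, decltype)
        for cid, name, decltype, *_ in con.execute("PRAGMA table_info(T)")}

# cursor.description lists result columns in query order, name first.
mapping = [(i, *meta[d[0]]) for i, d in enumerate(cur.description)]
# mapping -> [(0, 3, 'DATETIME'), (1, 2, 'INT')]
```

Here query column 0 maps to table column 3 with declared type DATETIME, and query column 1 to table column 2, which is exactly the lookup needed to parse the text value into the intended type.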