I have a userStream with the attributes name, address, and status.
I want to store these details in UserDetailsTable (an in-memory table).
The initial contents of UserDetailsTable are below:
"Jose", "address1","false"
"Rockey","address2","false"
"sibin", "address3","false"
I have another triggerStream with the attributes name and triggerStatus:
"Rockey","delete"
"Jose" ,"update"
Case 1)
When a triggerStream event arrives for "Rockey", I want to join the triggerStream with UserDetailsTable according to (name and triggerStatus) and delete the row from UserDetailsTable.
Case 2)
When a triggerStream event arrives for "Jose", I want to join the triggerStream with UserDetailsTable according to (name and triggerStatus) and update the status to "true" in UserDetailsTable.
The final state of UserDetailsTable should be:
"Jose", "address1","true"
"sibin", "address3","false"
How can I do this with WSO2 CEP?
This assumes you have already defined the streams and the table, plus an insert query that populates the in-memory table.
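For reference, a minimal sketch of those definitions in SiddhiQL (attribute types are assumed; adjust to your actual schema):
define stream userStream (name string, address string, status string);
define stream triggerStream (name string, triggerStatus string);
define table userDetailsTable (name string, address string, status string);

from userStream
select name, address, status
insert into userDetailsTable;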
For case 1, you can use a delete query with a condition. Note that the table's status attribute holds "true"/"false" while triggerStatus holds "delete"/"update", so joining on triggerStatus == status would never match; instead, match on name and check the trigger status in a filter:
from triggerStream[triggerStatus == 'delete']
delete userDetailsTable
on name == userDetailsTable.name;
If you only want to delete specific names such as 'Rockey', you can extend the filter:
from triggerStream[name == 'Rockey' and triggerStatus == 'delete']
delete userDetailsTable
on name == userDetailsTable.name;
For case 2, you can use an update query with a filter for 'Jose':
from triggerStream[name == 'Jose' and triggerStatus == 'update']
select name, 'true' as status
update userDetailsTable on name == userDetailsTable.name;
In this query, we project the constant 'true' aliased as 'status' so that it matches the table's attribute name and overwrites the old value.
I am using Postman and NetSuite's SuiteQL to query some tables. I would like to write two queries. The first returns all items (fulfillment items) for a given sales order. The second returns all sales orders that contain a given item. I am not sure what tables to use.
I can return the sales order with something like this:
"q": "SELECT * FROM transaction WHERE Type = 'SalesOrd' and id = '12345'"
I can get the item with this:
"q": "SELECT * FROM item WHERE id = 1122"
I can join transaction and transactionline for the sales order, but not the items:
"q": "SELECT * from transactionline tl join transaction t on tl.transaction = t.id where t.id in ('12345')"
The best reference I have found is the Analytics Browser, https://system.netsuite.com/help/helpcenter/en_US/srbrowser/Browser2021_1/analytics/record/transaction.html, but it does not show relationships the way an ERD diagram would.
What tables do I need to join so that, given this item id 1122, I get back all sales orders (transactions) that contain this item?
You are looking for TransactionLine.item. That will allow you to query transaction lines whose item is whatever internal id you specify.
{
"q": "SELECT Transaction.ID FROM Transaction INNER JOIN TransactionLine ON TransactionLine.Transaction = Transaction.ID WHERE type = 'SalesOrd' AND TransactionLine.item = 1122"
}
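For the first query in the question (all line items for a given sales order), the same join can be read in the other direction. A sketch, assuming TransactionLine.item references item.id and that the item table exposes its name as ItemID:
{
"q": "SELECT TransactionLine.Item, Item.ItemID FROM TransactionLine INNER JOIN Item ON Item.ID = TransactionLine.Item WHERE TransactionLine.Transaction = '12345'"
}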
If you are serious about getting all available tables to query, take a look at the metadata catalog. It's not technically meant for learning SuiteQL (it is intended to make the normal REST API calls easier to navigate), but I've found that the catalog endpoints match the SuiteQL tables for the most part.
https://{{YOUR_ACCOUNT_ID}}.suitetalk.api.netsuite.com/services/rest/record/v1/metadata-catalog/
Header:
Accept: application/schema+json
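To fetch the schema of a single record type, append the record name to the URL; for example (salesOrder used here as an illustration, with the same header):
https://{{YOUR_ACCOUNT_ID}}.suitetalk.api.netsuite.com/services/rest/record/v1/metadata-catalog/salesOrder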
You can review all the available records, fields and joins in the Record Catalog page (Customization > Record Catalog).
I have found lots of examples for appending to or overwriting a table in SQL from an Azure Databricks notebook, but no way to directly update or insert data using a query or otherwise.
For example, I want to update all rows where the (identity column) ID = 1143, so the steps I need to take are:
val srMaster = "(SELECT ID, userid,statusid,bloburl,changedby FROM SRMaster WHERE ID = 1143) srMaster"
val srMasterTable = spark.read.jdbc(url=jdbcUrl, table=srMaster,
properties=connectionProperties)
srMasterTable.createOrReplaceTempView("srMasterTable")
val srMasterTableUpdated = spark.sql("SELECT userid,statusid,bloburl,140 AS changedby FROM srMasterTable")
import org.apache.spark.sql.SaveMode
srMasterTableUpdated.write.mode(SaveMode.Overwrite)
.jdbc(jdbcUrl, "[dbo].[SRMaster]", connectionProperties)
Is there any other suitable way to achieve the same?
Note: the above code does not work either; it fails with SQLServerException: Could not drop object 'dbo.SRMaster' because it is referenced by a FOREIGN KEY constraint. So it looks like it drops and recreates the table, which is not a solution at all.
You can use INSERT with a FROM clause.
For example, to insert values selected from another table where a column matches:
INSERT INTO srMaster
FROM srMasterTable SELECT userid, statusid, bloburl, 140 WHERE ID = 1143;
or update values in existing rows where a column value matches:
UPDATE srMaster SET userid = 1, statusid = 2, bloburl = 'https://url', changedby ='user' WHERE ID = '1143'
or simply insert multiple values:
INSERT INTO srMaster VALUES
(1, 10, 'https://url1','user1'),
(2, 11, 'https://url2','user2');
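If the change needs to land in the SQL Server table itself rather than a Spark-managed table, one option (a sketch, reusing the jdbcUrl and connectionProperties from the question) is to issue the UPDATE directly over JDBC, which avoids the drop-and-recreate behaviour of SaveMode.Overwrite:
import java.sql.DriverManager

// Open a plain JDBC connection using the same URL and properties
// that were passed to spark.read.jdbc in the question.
val conn = DriverManager.getConnection(jdbcUrl, connectionProperties)
try {
  val stmt = conn.prepareStatement(
    "UPDATE [dbo].[SRMaster] SET changedby = ? WHERE ID = ?")
  stmt.setInt(1, 140)   // new changedby value
  stmt.setInt(2, 1143)  // target row's identity ID
  val rowsUpdated = stmt.executeUpdate()
  println(s"Rows updated: $rowsUpdated")
} finally {
  conn.close()
}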
In SQL Server, you cannot drop a table while it is referenced by a FOREIGN KEY constraint. You have to either drop the child tables before removing the parent table, or remove the foreign key constraints.
For a parent table, you can use the query below to get the foreign key constraint names and the referencing table names:
SELECT name AS 'Foreign Key Constraint Name',
OBJECT_SCHEMA_NAME(parent_object_id) + '.' + OBJECT_NAME(parent_object_id) AS 'Child Table'
FROM sys.foreign_keys
WHERE OBJECT_SCHEMA_NAME(referenced_object_id) = 'dbo' AND
OBJECT_NAME(referenced_object_id) = 'PARENT_TABLE'
Then you can alter the child table and drop the constraint by its name using the below statement:
ALTER TABLE dbo.childtable DROP CONSTRAINT FK_NAME;
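Putting it together for the SRMaster table from the earlier question (the child table and constraint names below are hypothetical):
-- Hypothetical names: SRDetail references SRMaster via FK_SRDetail_SRMaster
ALTER TABLE dbo.SRDetail DROP CONSTRAINT FK_SRDetail_SRMaster;
DROP TABLE dbo.SRMaster;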
I'm learning APEX 5.
I have a control named X_CONTROL whose content I want to populate with an SQL query.
To do that, I need the ID primary key from a table, which should be the ID of the row selected in a Select List control named MY_LIST_CONTROL.
MY_LIST_CONTROL has a list of values taken from a column of the table "MyTable", and that column is not the ID primary key.
I tried to populate X_CONTROL with this SQL:
Select ID from MyTable where ColumnName=:MY_LIST_CONTROL
It doesn't work, and should not work, because ColumnName is not unique the way ID is.
So the question is: how do I retrieve, with SQL, the ID of the row that corresponds to the selected value in MY_LIST_CONTROL?
It has to be SQL, because APEX 5 requires an SQL query to populate X_CONTROL.
I have set up a simple example here on apex.oracle.com:
Whenever a Department is selected (item P32_DEPTNO), its Location is copied into the second item (P32_LOC).
This is done by a dynamic action on P32_DEPTNO defined as follows:
Event: Change
Selection Type: Item(s)
Item(s): P32_DEPTNO
TRUE Action:
Action: Set Value
Set Type: SQL Statement
SQL Statement:
select loc
from dept
where deptno = :P32_DEPTNO
Items to Submit: P32_DEPTNO
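Translated to the names in the question: rather than looking the ID up by ColumnName (which is not unique), you can define MY_LIST_CONTROL's List of Values so that it displays ColumnName but returns ID. A sketch of such an LOV query (table and column names from the question):
select ColumnName d, ID r
from MyTable
order by 1
:MY_LIST_CONTROL then already holds the ID, and the dynamic action's Set Value SQL for X_CONTROL can use it directly.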
I have a use case where I need to query DynamoDB programmatically using a DynamoDB query expression.
For example, suppose there are two attributes A and B, and I want a filter expression like (A = 'Test' OR A = 'Test1') AND B = 'test2'.
I searched for this but didn't find a useful resource.
I am new to DynamoDB.
That's how you do it in Java:
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;

Table table = dynamoDB.getTable(tableName);
QuerySpec spec = new QuerySpec()
        // A Query always needs a key condition; uncomment and adapt this line,
        // or use a Scan with the same filter expression if there is no key to query on.
        // .withKeyConditionExpression("partitionKey = :id and sortKey > :range")
        // .withConsistentRead(true) // uncomment for strongly consistent reads
        .withFilterExpression("(A = :a1 or A = :a2) and B = :b")
        .withValueMap(new ValueMap()
                // .withString(":id", "Partition key value")
                // .withNumber(":range", 100)
                .withString(":a1", "Test")
                .withString(":a2", "Test1")
                .withString(":b", "test2"));
ItemCollection<QueryOutcome> items = table.query(spec);
If A and B are key attributes, you specify them in the KeyConditionExpression; otherwise everything goes in the FilterExpression.
The main difference is that key condition expressions are applied to key attributes, as the name suggests, and you are charged for the records they fetch, while a filter expression comes free and is applied after those records are fetched, returning only the records that match the filter condition.
To understand more, read:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#FilteringResults
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ExpressionPlaceholders.html
Is it possible to add a PostgreSQL HStoreField (Django >= 1.8) to a model where the values in the hstore are unique?
Keys are obviously unique, but can values be unique as well? I suppose custom validators could be added to the model, but I am curious to know whether this can be done at the database level.
A single hstore value can contain multiple key => value pairs, making a solution based on a unique index impossible. Additionally, your new hstore value can also have multiple key => value pairs. The only viable alternative is then a BEFORE INSERT OR UPDATE trigger on the table:
CREATE FUNCTION trf_uniq_hstore_values() RETURNS trigger AS $$
DECLARE
  dups text;
BEGIN
  -- Collect every value of the incoming row's hstore that already exists
  -- among the values stored anywhere in the table.
  SELECT string_agg(x, ',') INTO dups
  FROM (SELECT svals(hstorefield) AS x FROM my_table) sub
  JOIN (SELECT svals(NEW.hstorefield) AS x) vals USING (x);

  IF dups IS NOT NULL THEN
    RAISE NOTICE 'Value(s) % violate(s) uniqueness constraint. Operation aborted.', dups;
    RETURN NULL;  -- returning NULL from a BEFORE trigger skips the row operation
  ELSE
    RETURN NEW;
  END IF;
END; $$ LANGUAGE plpgsql;
CREATE TRIGGER tr_uniq_hstore_values
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE trf_uniq_hstore_values();
Note that this will not trap existing duplicates in the table.
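For illustration, a hypothetical session assuming my_table is defined as (id serial, hstorefield hstore):
INSERT INTO my_table (hstorefield) VALUES ('a => 1, b => 2');  -- inserted
INSERT INTO my_table (hstorefield) VALUES ('c => 2');          -- skipped with a notice: value 2 already exists
INSERT INTO my_table (hstorefield) VALUES ('c => 3');          -- inserted: value 3 is new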