Teiid execute immediate gives a parsing error when executing long queries

I'm using virtual procedures to expose a REST API with Teiid. In my virtual procedure I am using EXECUTE IMMEDIATE to run SQL queries that take input parameters from the virtual procedure as filters in the WHERE clause (a dynamic WHERE clause). This works fine for small SELECT queries, but once the query exceeds a certain length it gives a parsing error.
Is there any solution for this problem?
Is there any alternative way of implementing dynamic where clauses in my SQL query?
Let's assume that the following query is around 4,000 characters long. This works fine:
CREATE VIRTUAL PROCEDURE GetVals(IN filters string) RETURNS (json clob) OPTIONS (UPDATECOUNT 0, "REST:METHOD" 'GET', "REST:URI" 'get_vals')
AS
BEGIN execute immediate
'SELECT JSONOBJECT(JSONARRAY_AGG(JSONOBJECT(
col1,
col2,
col3,
col4,
col5,
col6,
....
....
)) as "data"
) as json FROM(
SELECT SUM((CASE
WHEN ((CASE
.....
....
.....
FROM ex_table AS ex
JOIN table1
ON ...
.....
WHERE a=b AND ' || filters || '
GROUP BY col)
) AB';
END
But as soon as I add more lines to the SQL query above, it gives a parsing error at an arbitrary line. There is nothing wrong with the syntax of my query; the only change I make is to lengthen the query by adding more lines (e.g. if I select one more column in my SELECT statement, I get a parsing error). This happens only when I use EXECUTE IMMEDIATE to execute queries.

What version of Teiid are you using? And what is your parsing exception?
If it is due to truncation, then you'll need to use a 9.1 or later release, which allows for longer evaluated SQL strings - https://issues.jboss.org/browse/TEIID-4376

Related

Is it possible to update a piece of data in a pre-existing table using the output of data from a CTE?

I need to correct a datapoint in a pre-existing table. I am using multiple CTEs to find the bad value and the corresponding good value. I am having trouble working out how to overwrite the value in the table using the output of the CTE. Here is what I am trying:
with [extra CTEs here]....
,CTE3 AS (
SELECT c1.FIELD_1, c1.FIELD_2 AS GOOD, c2.FIELD_3 AS BAD
FROM CTE1 c1
JOIN CTE2 c2 ON c1.FIELD_1 = c2.FIELD_1
)
update TABLE1
set TABLE1.FIELD_3 = CTE3.GOOD
from CTE3
INNER JOIN TABLE1 ON CTE3.BAD = TABLE1.FIELD_3
Is it even possible to achieve this?
If so, how should I change my logic to get it to work?
Trying the above logic is throwing the following error:
SQL Error [42601]: An unexpected token "WITH CTE1 AS ( SELECT
FIELD_1" was found following "BEGIN-OF-STATEMENT". Expected tokens
may include: "<update>".. SQLCODE=-104, SQLSTATE=42601,
DRIVER=4.27.25
Table designs and expected output:

Python cx_Oracle; How Can I Execute a SQL Insert using a list as a parameter

I generate a list of ID numbers. I want to execute an insert statement that grabs all records from one table where the ID value is in my list and insert those records into another table.
Instead of running through multiple execute statements (as I know is possible), I found this cx_Oracle function that supposedly can execute everything with a single statement and a list parameter. (It also avoids the clunky formatting of the SQL statement before passing in the parameters.) But I think I need to alter my list before passing it in as a parameter; I'm just not sure how.
I referenced this web page:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
ids = getIDs()
print(ids)
[('12345',),('24567',),('78945',),('65423',)]
sql = """insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)"""
cursor.prepare(sql)
cursor.executemany(None, ids)
I expected the SQL statement to execute as follows:
Insert into scheme.newtable
select id, data1, data2, data3 from scheme.oldtable where id in ('12345','24567','78945','65423')
Instead I get the following error:
ORA-01036: illegal variable name/number
Edit:
I found this Stack Overflow question: How can I do a batch insert into an Oracle database using Python?
I updated my code to prepare the statement beforehand and updated the list items to tuples, and I'm still getting the same error.
You use executemany() for batch DML, e.g. when you want to insert a large number of values into a table as an efficient equivalent of running multiple insert statements. There are cx_Oracle examples discussed in https://blogs.oracle.com/opal/efficient-and-scalable-batch-statement-execution-in-python-cx_oracle
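For illustration, here is a minimal sketch of that batch-DML pattern. The table demo_tab and the connect string are hypothetical placeholders:
import cx_Oracle

connection = cx_Oracle.connect("user/password@dsn")  # placeholder credentials
cursor = connection.cursor()
rows = [('12345', 'a'), ('24567', 'b'), ('78945', 'c')]
# One INSERT statement, executed once per row of bind data.
cursor.executemany(
    "insert into demo_tab (id, data1) values (:1, :2)",
    rows)
connection.commit()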
However what you are doing with
insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)
is a different thing - you are trying to execute one INSERT statement using multiple values in an IN clause. You would use a normal execute() for this.
Since Oracle keeps bind data distinct from SQL, you can't pass multiple values to a single bind parameter, because the data is treated as a single SQL entity, not a list of values. You could use the %s string-substitution syntax you already have, but this is open to SQL injection attacks.
There are various generic techniques that are common to Oracle language interfaces, see https://oracle.github.io/node-oracledb/doc/api.html#sqlwherein for solutions that you can rewrite to Python syntax.
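For instance, a minimal sketch of one such technique in Python - generating one bind placeholder per value and then issuing a single execute(). The names match the question; unwrapping the 1-tuples returned by getIDs() is an assumption:
ids = [('12345',), ('24567',), ('78945',), ('65423',)]
flat_ids = [t[0] for t in ids]  # unwrap the 1-tuples
# Build ":0,:1,:2,:3" - one bind variable per id, bound by position.
binds = ','.join(':%d' % i for i in range(len(flat_ids)))
sql = """insert into scheme.newtable
         select id, data1, data2, data3
         from scheme.oldtable
         where id in (%s)""" % binds
cursor.execute(sql, flat_ids)
Keep in mind that an Oracle IN list is limited to 1,000 elements, which is one reason the linked page also describes temporary-table approaches for larger sets.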
Using a temporary table to save the IDs (batch insert):
cursor.prepare('insert into temp_table values (:1)')
cursor.executemany(None, ids)  # ids is already a list of 1-tuples
Then insert the selected rows into the new table:
sql = """insert into scheme.newtable
         select o.id, o.data1, o.data2, o.data3
         from scheme.oldtable o
         inner join temp_table t on o.id = t.id"""
cursor.execute(sql)
The script to create the temporary table in Oracle:
CREATE GLOBAL TEMPORARY TABLE temp_table
(
ID number
);
commit
I hope this is useful.

Special character to query from latest timestamp sharded table in BigQuery

From
https://cloud.google.com/bigquery/docs/partitioned-tables:
you can shard tables using a time-based naming approach such as [PREFIX]_YYYYMMDD
This enables me to do:
SELECT count(*) FROM `xxx.xxx.xxx_*`
and query across all the shards. Is there a special notation that queries only the latest shard? For example, say I had:
xxx_20180726
xxx_20180801
could I do something along the lines of
SELECT count(*) FROM `xxx.xxx.xxx_{{ latest }}`
to query xxx_20180801?
SINGLE QUERY INSPIRED BY Mikhail Berlyant:
SELECT count(*) as c
FROM `XXX.PREFIX_*`
WHERE _TABLE_SUFFIX IN (
    SELECT SUBSTR(MAX(table_id), LENGTH('PREFIX_') + 2)
    FROM `XXX.__TABLES_SUMMARY__`
    WHERE table_id LIKE 'PREFIX_%')
If you do care about cost (meaning how many tables will be scanned by your query), the only way is to do it in two steps, like below.
First query:
#standardSQL
SELECT SUBSTR(MAX(table_id), LENGTH('PREFIX') + 1)
FROM `xxx.xxx.__TABLES_SUMMARY__`
WHERE table_id LIKE 'PREFIX%'
Second query:
#standardSQL
SELECT COUNT(*)
FROM `xxx.xxx.PREFIX_*`
WHERE _TABLE_SUFFIX = '<result of first query>'
So, if the result of the first query is 20180801, the second query will obviously look like this:
#standardSQL
SELECT COUNT(*)
FROM `xxx.xxx.PREFIX_*`
WHERE _TABLE_SUFFIX = '20180801'
If you don't care about cost but just need the result, you can easily combine the above two queries into one. But again, remember: even though the result will come only from the last table, the cost will be as if you had queried all tables that match xxx.xxx.PREFIX_*.
Forgot to mention (even though it should be obvious): of course, when you have only COUNT(1) in your SELECT, the cost will be 0 (zero) for both options. But in reality you will most likely select something more valuable than just COUNT(1).
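If you are driving BigQuery from code rather than the console, the same two-step approach can be scripted. A minimal sketch, assuming the google-cloud-bigquery Python client and the PREFIX naming used above:
from google.cloud import bigquery

client = bigquery.Client()

# Step 1: find the suffix of the latest shard (mirrors the first query above).
first_sql = """
    SELECT SUBSTR(MAX(table_id), LENGTH('PREFIX') + 1) AS suffix
    FROM `xxx.xxx.__TABLES_SUMMARY__`
    WHERE table_id LIKE 'PREFIX%'
"""
suffix = list(client.query(first_sql).result())[0].suffix

# Step 2: scan only that shard, passing the suffix as a query parameter.
second_sql = """
    SELECT COUNT(*) AS c
    FROM `xxx.xxx.PREFIX_*`
    WHERE _TABLE_SUFFIX = @suffix
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("suffix", "STRING", suffix)])
count = list(client.query(second_sql, job_config=job_config).result())[0].c
print(count)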
I know this is kind of an old thread, but I was surprised that no one offered an answer using variables.
Héctor Neri already mentioned this in the comments, but I thought it might be better to have an actual answer with sample code posted.
#standardSQL
DECLARE SHARD_DATE STRING;
SET SHARD_DATE=(
SELECT MAX(REPLACE(table_name,'{TABLE}_',''))
FROM `{PRJ}.{DATASET}.INFORMATION_SCHEMA.TABLES`
WHERE table_name LIKE '{TABLE}_20%'
);
SELECT * FROM `{PRJ}.{DATASET}.{TABLE}_*`
WHERE _TABLE_SUFFIX = SHARD_DATE
Make sure to replace {PRJ}, {DATASET}, and {TABLE} values with your table location.
If you run this on BigQuery Web UI, you will see this message:
WARNING: Could not compute bytes processed estimate for script.
But you can see that the variable properly reduces the table scan to the latest shard and does not cause any extra cost after running the script.

Bulk inserts with Sitecore Rocks

Is it possible to do a bulk insert with Sitecore Rocks? Something along the lines of SQL's
INSERT INTO TABLE1 SELECT COL1, COL2 FROM TABLE2
If so, what is the syntax? I'd like to add an item under any other item of a given template type.
I've tried using this syntax:
insert into (
##itemname,
##templateitem,
##path,
[etc.]
)
select
'Bulk-Add-Item',
//*[##id='{B2477E15-F54E-4DA1-B09D-825FF4D13F1D}'],
Path + '/Item',
[etc.]
To this, Query Analyzer responds:
"values" expected at position 440.
Please note that I have not found a working concatenation operator. For example,
Select ##item + '/value' from //sitecore/content/home/*
just returns '/value'. I've also tried ||, &&, and CONCATENATE without success.
There is apparently a way of doing bulk updates with CSV, but being able to do bulk updates directly from the Sitecore Query Analyzer would be very useful.
Currently you cannot do bulk inserts, but it is a really nice idea. I'll see what I can do.
Regarding the concatenation operator, this following works in the Query Analyzer:
select #Text + "/Value" from /sitecore/content/Home
This returns "Welcome to Sitecore/Value".
The ##item just returns empty, because it is not a valid system attribute.

C++: how to make two-statement SQL work with OLEDB?

I have the following SQL statement:
USE "ws_results_db_2011_09_11_09_06_24";SELECT table_name FROM INFORMATION_SCHEMA.Tables WHERE table_name like 'NET_%_STAT' order by table_name
I am using the following C++ code to execute it:
IDBCreateCommandPtr spDBCreateCommand = GetTheDBCreateCommandPointer();
ICommandTextPtr spCommandText;
spDBCreateCommand->CreateCommand(NULL, IID_ICommandText, reinterpret_cast<IUnknown **>(&spCommandText));
spCommandText->SetCommandText(DBGUID_SQL, GetTheQueryText());
IRowsetPtr spRowset;
spCommandText->Execute(NULL, IID_IRowset, NULL, NULL, reinterpret_cast<IUnknown **>(&spRowset));
RowHandles hRows(spRowset, 0);
ULONG rowCount;
ULONG maxRowCount = 1;
spRowset->GetNextRows(DB_NULL_HCHAPTER, 0, maxRowCount, &rowCount, hRows.get_addr());
Two notes:
Error handling is omitted for brevity
RowHandles implements the RAII concept for HROW *
Anyway, I fail to execute the two SQL statements. What happens is that spCommandText->Execute returns S_OK, but sets spRowset to NULL.
If I execute the same spCommandText->Execute the second time (by moving back the instruction pointer during the debugging session), then a valid IRowset pointer is returned - I successfully obtain the correct column information using it. But spRowset->GetNextRows sets rowCount to 0 and returns DB_S_ENDOFROWSET - no luck.
The code is working fine when I execute a single SQL statement.
What am I doing wrong?
Thanks.
It is up to the client to split the SQL commands - isql, for example, splits on the ';'. That is, you are asking for two commands: the USE and the SELECT.
So the fix is to run the two commands as two separate CreateCommand/Execute sequences.
Also note that in this case you can do it as a single SQL statement:
SELECT table_name FROM ws_results_db_2011_09_11_09_06_24.INFORMATION_SCHEMA.Tables
WHERE table_name like 'NET_%_STAT'
order by table_name