Teiid query engine modifies the optimized version of my query to an un-optimized query when executing - teiid

I am executing fairly large SQL SELECT queries against a Redshift DB using Teiid. I have optimized my query for better response time by avoiding inner queries and inner SELECT statements. However, when I execute the query, the Teiid query engine rewrites it into a different version that uses inner queries and inner SELECT statements. Is there any way to bypass this behavior and directly use the query that I provide?
Here is the original Teiid query that I execute:
CREATE VIRTUAL PROCEDURE GetTop() RETURNS (json clob) OPTIONS (UPDATECOUNT 0, "REST:METHOD" 'GET', "REST:URI" 'GetTop')
AS
/*+ cache(pref_mem ttl:14400000) */
BEGIN
execute immediate
'SELECT JSONOBJECT(
JSONARRAY_AGG(
JSONOBJECT(
total_purchases,
total_invoice,
total_records,
period
)
)
AS "dd"
) as json FROM(
SELECT SUM((CASE
GROUP BY period
The Teiid query engine converts the query above into the version below, which has an inner SELECT statement:
SELECT SUM(v_0.c_1),
COUNT(DISTINCT v_0.c_2),
COUNT(v_0.c_2),
v_0.c_0
FROM (SELECT CASE
GROUP BY v_0.c_0
How can I bypass this behavior and execute my original query?

Teiid will only create inline views for specific purposes - it generally removes them when possible. You'll need to provide more context - the user query, the processor plan, or the debug plan - so that we can understand why the inline views are needed.
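If you are connecting through a JDBC client, Teiid's SET/SHOW session statements are one way to capture the plans the answer asks for (a sketch; verify the syntax against your Teiid version's documentation):

```sql
-- Turn on debug plan collection for this session
SET SHOWPLAN DEBUG;

-- Execute the problematic query here

-- Retrieve the processor plan and debug log of the last executed statement
SHOW PLAN;
```

The resulting debug log shows each rewrite the optimizer applied, which is what is needed to explain why the inline views were introduced.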

Related

AWS QuickSight: How to use a parameter value inside SQL (data set) to render dynamic data on a dashboard?

There is a provision to pass a value for QuickSight parameters via the URL. But how can I use the value of the parameter inside the SQL (data set) to get dynamic data on the dashboard?
For example:
QUERY as of now:
select * from CITYLIST;
Dashboard:
CITYLIST
city_name | cost_of_living
AAAAAAAAA | 20000
BBBBBBBBB | 25000
CCCCCCCCC | 30000
Parameter Created : cityName
URL Triggered : https://aws-------------------/dashboard/abcd123456xyz#p.cityName=AAAAAAAAA
Somehow I need to use the value passed in the URL inside the SQL so that I can write a dynamic query like this:
select * from CITYLIST where city_name = SomeHowNeedAccessOfParameterValue;
QuickSight doesn't provide a way to access parameters via SQL directly.
Instead you should create a filter from the parameter to accomplish your use-case.
This is effectively QuickSight's way of creating the WHERE clause for you.
This design decision makes sense to me. Though it takes filtering out of your control in the SQL, it makes your data sets more reusable (what would happen in the SQL if the parameter weren't provided?)
Create a parameter, then a control, and then a filter ("Custom filter" -> "Use parameters").
If you select the Direct query option and a custom SQL query for the data set, then the SQL query will be executed on each visual change/update.
The final query on DB side will look like [custom SQL query] + WHERE clause.
For example:
Visual side:
For control Control_1, the selected values are "A", "B", "C";
DB side:
[Custom SQL from data set] + 'WHERE column in ("A", "B", "C")'
QuickSight builds the query for you and runs it on the DB side.
This reduces the data sent over the network.
Yes, QuickSight now provides a SQL editor, and you can use it for the same purpose.
For full details, see the reference below:
https://docs.aws.amazon.com/quicksight/latest/user/adding-a-SQL-query.html

Pentaho PDI get SQL SUM() with conditions

I'm using Pentaho PDI 7.1. I'm trying to convert data from MySQL to MySQL, changing the structure of the data.
I'm reading the source table (customers), and for each row I have to run another query to calculate the balance.
I was trying to use a Database value lookup step to accomplish this, but maybe it is not the best way.
I have to run a query like this to get the balance:
SELECT
SUM(
CASE WHEN direzione='ENTRATA' THEN -importo ELSE +importo END
)
FROM Movimento WHERE contoFidelizzato_id = ?
I should set the parameter, taking it from the previous step. Any advice?
The Database value lookup may be a good idea, especially if you are used to database reasoning, but it may result in many queries, which may not be the most efficient approach.
A more PDI-ish style would be to make the query like:
SELECT contoFidelizzato_id
, SUM(CASE WHEN direzione='ENTRATA' THEN -importo ELSE +importo END)
FROM Movimento
GROUP BY contoFidelizzato_id
and use it as the info source of a Stream lookup step.
An even more PDI-ish style would be to split the source table (customers) into two flows: one in which you keep the source rows, and one which you group by contoFidelizzato_id. Of course, you need a Formula step, or some JavaScript, or a formula in the SQL of the Table input step to change the sign when needed.
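Where the sign change is pushed into the SQL of the Table input step, the query might look like this (a sketch; the table and column names come from the question, and importo_signed is a made-up alias):

```sql
-- Emit each movement with its sign already applied,
-- so a downstream Group By step only needs a plain SUM
SELECT contoFidelizzato_id,
       CASE WHEN direzione = 'ENTRATA' THEN -importo ELSE importo END AS importo_signed
FROM Movimento
```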
Test to find out which strategy is better in your case. You'll soon discover that PDI is very good at handling large data volumes.

In Qt, how can I run two queries successfully?

How can I run two queries so that if the first query executes successfully the second query is also executed, and otherwise the changes made by the first query are reverted?
QSqlQuery *query=new QSqlQuery(connector.db);
QSqlQuery *query2=new QSqlQuery(connector.db);
Here both query and query2 should run, or neither should run.
You will need a transaction. Using the following code, you can ensure that either both of them or none of them will be executed:
QSqlDatabase db = QSqlDatabase::database();
db.transaction();
if (query->exec() && query2->exec())  // execute your queries here
    db.commit();
else
    db.rollback();
Documentation:
If the underlying database engine supports transactions, QSqlDriver::hasFeature(QSqlDriver::Transactions) will return true. You can use QSqlDatabase::transaction() to initiate a transaction, followed by the SQL commands you want to execute within the context of the transaction, and then either QSqlDatabase::commit() or QSqlDatabase::rollback(). When using transactions you must start the transaction before you create your query.
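Conceptually, the driver issues ordinary transaction statements on your behalf; the sequence sent to the database is equivalent to the following (a sketch with placeholder statements standing in for your two queries):

```sql
BEGIN;    -- QSqlDatabase::transaction()
-- first query, e.g. an UPDATE
-- second query, e.g. an INSERT
COMMIT;   -- QSqlDatabase::commit(); on failure, ROLLBACK via QSqlDatabase::rollback()
```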

Using QSqlQuery with indexes

I have my own data store mechanism for storing data, but I want to implement a standard data manipulation and query interface for end users, so I thought Qt SQL might be suitable for my case.
However, I still cannot understand how to involve my indexes in an SQL query.
Let's say, for example,
I have a table with columns A(int), B(int), C(int), D(int), and column A is indexed. Assume I execute a query like select * from Foo where A = 10;
How do I involve my index in searching for the results?
You have written your own storage system and want to manipulate it using an SQL-like syntax? I don't think Qt SQL is the right tool for that job. It offers connectivity to various SQL servers and is not meant for parsing SQL statements. Qt expects to "pass through" the queries and then parse the result set and transform it into a Qt-friendly representation.
So if you only want a Qt-friendly representation, I see no reason to go through the indirection of SQL.
But regarding your problem:
In SQL, indexes are usually not stated in queries but during the creation of the table schema. But SQL Server has a way to "hint" indexes; is that what you are looking for?
SELECT column_list FROM table_name WITH (INDEX (index_name) [, ...]);
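Applied to the table from the question, the two major dialects would look roughly like this (a sketch; ix_foo_a is a hypothetical name for the index on column A):

```sql
-- SQL Server: table hint naming the index to use
SELECT A, B, C, D FROM Foo WITH (INDEX (ix_foo_a)) WHERE A = 10;

-- Oracle: the equivalent is an optimizer hint inside a comment
SELECT /*+ INDEX(Foo ix_foo_a) */ A, B, C, D FROM Foo WHERE A = 10;
```

In both cases the hint only steers the optimizer; without it, the engine normally picks the index on A by itself for an equality predicate like A = 10.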

Concat strings in SQL Server and Oracle with the same unmodified query

I have a program that must support both Oracle and SQL Server for its database.
At some point I must execute a query where I want to concatenate 2 columns in the SELECT statement.
In SQL Server, this is done with the + operator:
select column1 + ' - ' + column2 from mytable
And in Oracle this is done with CONCAT:
select concat(concat(column1, ' - '), column2) from mytable
I'm looking for a way to leverage them both, so my code has a single SQL query literal string for both databases and I can avoid ugly constructs where I need to check which DBMS I'm connected to.
My first instinct was to encapsulate the different queries in a stored procedure, so each DBMS could have its own implementation of the query, but I was unable to create a procedure in Oracle that returns the record set the same way SQL Server does.
Update: Creating a concat function in SQL Server doesn't make the query compatible with Oracle, because SQL Server requires the owner to be specified when calling the function:
select dbo.concat(dbo.concat(column1, ' - '), column2) from mytable
It took me a while to figure this out after creating my own concat function in SQL Server.
On the other hand, it looks like a function in Oracle returning a SYS_REFCURSOR can't be called with a simple
exec myfunction
and return the table as in SQL Server.
In the end, the solution was to create a view with the same name on both RDBMSs but with different implementations; then I can do a simple SELECT on the view.
If you want to go down the path of creating a stored procedure, whatever framework you're using should be able to more or less transparently handle an Oracle stored procedure with an OUT parameter that is a SYS_REFCURSOR and call that as you would a SQL Server stored procedure that just does a SELECT statement.
CREATE OR REPLACE PROCEDURE some_procedure( p_rc OUT sys_refcursor )
AS
BEGIN
-- You could use the CONCAT function rather than Oracle's string concatenation
-- operator || but I would prefer the double pipes.
OPEN p_rc
FOR SELECT column1 || ' - ' || column2
FROM myTable;
END;
Alternatively, you could define your own CONCAT function in SQL Server.
Nope, sorry.
As you've noted, string concatenation is implemented in SQL Server with + and in Oracle with concat or ||.
I would avoid doing nasty string manipulation in stored procedures; simply create your own concatenation function in one instance or the other so the same syntax works in both. Probably in SQL Server, so you can use concat.
The alternative is to pass + or || depending on what RDBMS you're connected to.
Apparently in SQL Server 2012 they have included a CONCAT() function:
http://msdn.microsoft.com/en-us/library/hh231515.aspx
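Since Oracle's CONCAT takes exactly two arguments, the nested form from the question becomes a single literal that should run unmodified on both Oracle and SQL Server 2012+ (a sketch reusing the question's table; untested against either server):

```sql
-- CONCAT is built into Oracle and was added in SQL Server 2012,
-- so this one string works for both connections
select concat(concat(column1, ' - '), column2) from mytable
```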
If you are trying to create a database-agnostic application, you should either:
Stick to very basic SQL and do anything like this in your application, or
Create different abstractions for different databases. If you hope to get any kind of scale out of your application, this is the path you'll likely need to take.
I wouldn't go down the stored procedure path. You can probably get it to work, but next week you'll find out you need to support "database X", and then you'll need to rewrite your stored proc in that database as well. It's a recipe for pain.