I have a program that must support both Oracle and SQL Server for its database.
At some point I must execute a query where I want to concatenate 2 columns in the select statement.
In SQL Server, this is done with the + operator:
select column1 + ' - ' + column2 from mytable
And in Oracle this is done with concat:
select concat(concat(column1, ' - '), column2) from mytable
I'm looking for a way to handle both, so my code has a single SQL query string for both databases and I can avoid ugly constructs where I need to check which DBMS I'm connected to.
My first instinct was to encapsulate the different queries in a stored procedure, so each DBMS can have its own implementation of the query, but I was unable to create a procedure in Oracle that returns the record set in the same way SQL Server does.
Update: Creating a concat function in SQL Server doesn't make the query compatible with Oracle because SQL Server requires the owner to be specified when calling the function as:
select dbo.concat(dbo.concat(column1, ' - '), column2) from mytable
It took me a while to figure it out after creating my own concat function in SQL Server.
On the other hand, it looks like a function in Oracle that returns a SYS_REFCURSOR can't be called with a simple
exec myfunction
and return the table as SQL Server does.
In the end, the solution was to create a view with the same name on both RDBMSs but with different implementations; then I can do a simple select on the view.
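Something along these lines (the view name and column alias are just for illustration):

-- SQL Server
CREATE VIEW v_mytable_concat AS
SELECT column1 + ' - ' + column2 AS combined
FROM mytable;

-- Oracle
CREATE VIEW v_mytable_concat AS
SELECT column1 || ' - ' || column2 AS combined
FROM mytable;

The application then runs the same select combined from v_mytable_concat against either database.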
If you want to go down the path of creating a stored procedure, whatever framework you're using should be able to more or less transparently handle an Oracle stored procedure with an OUT parameter that is a SYS_REFCURSOR and call that as you would a SQL Server stored procedure that just does a SELECT statement.
CREATE OR REPLACE PROCEDURE some_procedure( p_rc OUT sys_refcursor )
AS
BEGIN
  -- You could use the CONCAT function rather than Oracle's string concatenation
  -- operator ||, but I would prefer the double pipes.
  OPEN p_rc
    FOR SELECT column1 || ' - ' || column2
          FROM myTable;
END;
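For comparison, the SQL Server side could be a plain procedure that just selects (a sketch; the procedure name is assumed to match the Oracle one):

CREATE PROCEDURE some_procedure
AS
BEGIN
  SELECT column1 + ' - ' + column2
  FROM myTable;
END;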
Alternatively, you could define your own CONCAT function in SQL Server.
Nope, sorry.
As you've noted, string concatenation is implemented in SQL Server with + and in Oracle with concat or ||.
I would avoid doing nasty string manipulation in stored procedures and simply create your own concatenation function in one instance or the other so that both use the same syntax. Probably SQL Server, so you can use concat.
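A minimal sketch of such a function in SQL Server (parameter types are an assumption; as the question's update notes, it still has to be called as dbo.concat):

CREATE FUNCTION dbo.concat (@a NVARCHAR(4000), @b NVARCHAR(4000))
RETURNS NVARCHAR(4000)
AS
BEGIN
  RETURN @a + @b;
END;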
The alternative is to pass + or || depending on what RDBMS you're connected to.
Apparently in SQL Server 2012 they have included a CONCAT() function:
http://msdn.microsoft.com/en-us/library/hh231515.aspx
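On SQL Server 2012 or later that allows something like the following; note that Oracle's CONCAT accepts only two arguments, so this multi-argument form is still SQL Server specific:

SELECT CONCAT(column1, ' - ', column2) FROM mytable;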
If you are trying to create a database-agnostic application, you should do one of the following:
Stick to very basic SQL and do anything like this in your application.
Create different abstractions for different databases. If you hope to get any kind of scale out of your application, this is the path you'll likely need to take.
I wouldn't go down the stored procedure path. You can probably get it to work, but next week you'll find out you need to support "database X", and then you'll need to rewrite your stored proc in that database as well. It's a recipe for pain.
There is a provision to pass values for QuickSight parameters via URL. But how can I use the value of the parameter inside the SQL (data set) to get dynamic data on the dashboard?
For example:
QUERY as of now:
select * from CITYLIST;
Dashboard:
CITYLIST
city_name | cost_of_living
AAAAAAAAA | 20000
BBBBBBBBB | 25000
CCCCCCCCC | 30000
Parameter Created : cityName
URL Triggered : https://aws-------------------/dashboard/abcd123456xyz#p.cityName=AAAAAAAAA
Somehow I need to use the value passed in the URL inside the SQL so that I can write a dynamic query like the one below:
select * from CITYLIST where city_name = SomeHowNeedAccessOfParameterValue;
QuickSight doesn't provide a way to access parameters via SQL directly.
Instead you should create a filter from the parameter to accomplish your use-case.
This is effectively QuickSight's way of creating the WHERE clause for you.
This design decision makes sense to me. Though it takes filtering out of your control in the SQL, it makes your data sets more reusable (what would happen in the SQL if the parameter weren't provided?)
Create a parameter, then a control, and then a filter ("Custom filter" -> "Use parameters").
If you select the Direct query option and a custom SQL query for the data set, the SQL query will be executed on each visual change/update.
The final query on DB side will look like [custom SQL query] + WHERE clause.
For example:
Visual side:
For control Control_1 the selected values are "A", "B", "C";
DB side:
[Custom SQL from data set] + 'WHERE column in ("A", "B", "C")'
QuickSight builds a query for you and runs it on the DB side.
This reduces the amount of data sent over the network.
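Applied to the question's data set, the query that ends up on the DB side would look roughly like this (the exact wrapping QuickSight generates may differ):

SELECT *
FROM (select * from CITYLIST) AS custom_sql
WHERE city_name = 'AAAAAAAAA';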
QuickSight now provides a SQL editor, and you can use it for the same purpose.
For full details, please see the reference below:
https://docs.aws.amazon.com/quicksight/latest/user/adding-a-SQL-query.html
I'm using Pentaho PDI 7.1. I'm trying to convert data from MySQL to MySQL, changing the structure of the data.
I'm reading the source table (customers), and for each row I have to run another query to calculate the balance.
I was trying to use the Database lookup step to accomplish this, but maybe it is not the best way.
I have to run a query like this to get the balance:
SELECT SUM(CASE WHEN direzione = 'ENTRATA' THEN -importo ELSE importo END)
FROM Movimento
WHERE contoFidelizzato_id = ?
I need to set the parameter, taking it from the previous step. Any advice?
The Database lookup step may be a good idea, especially if you are used to database reasoning, but it may result in many queries, which may not be the most efficient approach.
A more PDI-ish style would be to make the query like:
SELECT contoFidelizzato_id
, SUM(CASE WHEN direzione='ENTRATA' THEN -importo ELSE +importo END)
FROM Movimento
GROUP BY contoFidelizzato_id
and use it as the info source of a Stream lookup step.
An even more PDI-ish style would be to split the source table (customer) into two flows: one in which you keep the source rows, and one that you group by contoFidelizzato_id. Of course, you need a Formula step, a JavaScript step, or a formula in the SQL of the Table input to change the sign when needed, as sketched below.
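For example, the Table input feeding the grouped flow could already carry the sign flip (column names taken from the question); a Group By step summing importo_signed per contoFidelizzato_id then gives the balance:

SELECT contoFidelizzato_id
     , CASE WHEN direzione = 'ENTRATA' THEN -importo ELSE importo END AS importo_signed
FROM Movimento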
Test to find out which strategy is better in your case. You'll soon discover that PDI is very good at handling large data sets.
I have this PostgreSQL PL/pgSQL function:
CREATE OR REPLACE FUNCTION get_people()
RETURNS SETOF people AS $$
BEGIN
RETURN QUERY SELECT * FROM people;
END;
$$ LANGUAGE plpgsql;
Then I try to read the data in an application using SOCI, with this code:
session sql {"postgresql://dbname=postgres"};
row person {};
procedure proc = (sql.prepare << "get_people()", into(person));
proc.execute(true);
I would expect person to have the data of the first person, but it contains only one column with the name of the stored procedure (i.e., "get_people").
So I don't know what I am doing wrong here, or not doing. Is it the PL/pgSQL code or the SOCI code? Maybe SOCI does not support dynamic binding for stored procedures. Also, this method would let me read only the first row, but what about the rest of the rows? I know SOCI comes with the rowset class for reading result sets, but the documentation says it only works with queries. Please help.
SELECT get_people() will return a single column, of type people, named after the procedure.
SELECT * FROM get_people() will give you the expected behaviour, decomposing the people records into their constituent fields.
Judging from the source, it looks like the SOCI procedure class (or at least, its Postgres implementation) is hard-wired to run procedures as SELECT ... rather than SELECT * FROM ....
I guess this means you'll need to write your own query, i.e.:
statement stmt = (sql.prepare << "SELECT * FROM get_people()", into(person));
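As for reading every row rather than just the first one, a rowset over the same statement should do it. A minimal sketch, assuming the PostgreSQL backend is available and that the first column of people is text (include paths depend on the SOCI version):

#include <soci/soci.h>
#include <iostream>
#include <string>

int main()
{
    soci::session sql("postgresql://dbname=postgres");

    // rowset executes the query and exposes iterators over all returned rows
    soci::rowset<soci::row> rs = (sql.prepare << "SELECT * FROM get_people()");

    for (soci::rowset<soci::row>::const_iterator it = rs.begin(); it != rs.end(); ++it)
    {
        const soci::row& r = *it;
        // columns can be read by position or by name; the type passed to get<> is an assumption
        std::cout << r.get<std::string>(0) << '\n';
    }
    return 0;
}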
I am trying to use Django's initial SQL data functionality to create an SQL function. The docs state I can do this:
https://docs.djangoproject.com/en/1.6/howto/initial-data/#providing-initial-sql-data
Django provides a hook for passing the database arbitrary SQL that’s executed just after the CREATE TABLE statements when you run migrate. You can use this hook to populate default records, or you could also create SQL functions, views, triggers, etc.
After some googling I found that Django's custom SQL code splits any SQL files and runs them line by line, producing this error:
Failed to install custom SQL for myapp.somemodel model: unterminated dollar-quoted string at or near "$$ BEGIN;"
Is there an accepted workaround for this? Or a better way to load custom SQL functions?
Yeah, I've seen this problem before. If you stick a multi-line function in your app's sql/<modelname>.sql like so:
CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$
BEGIN
RETURN i + 1;
END;
$$ LANGUAGE plpgsql;
you'll get the error you saw, namely something like:
Failed to install custom SQL for mysite.Poll model: unterminated dollar-quoted string at or near "$$ BEGIN RETURN i + 1;"
LINE 1: ... FUNCTION increment(i integer) RETURNS integer AS $$ BEGIN R...
I think you should be able to work around the problem by squeezing the function definition all onto one line, e.g.
CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$BEGIN RETURN i + 1; END; $$ LANGUAGE plpgsql;
It looks like this bug affects any multi-line function (both dollar-quoted and single-quoted). I tested on Django 1.6; no idea whether it has been fixed since.
I have my own data store mechanism for storing data, but I want to implement a standard data manipulation and query interface for end users, so I thought Qt SQL would be suitable for my case.
But I still cannot understand how to involve my indexes in an SQL query.
Let's say, for example,
I have a table with columns A(int), B(int), C(int), D(int), and column A is indexed. Assume I execute a query like select * from Foo where A = 10;
How do I involve my index in searching for the results?
You have written your own storage system and want to manipulate it using an SQL-like syntax? I don't think Qt SQL is the right tool for that job. It offers connectivity to various SQL servers and is not meant for parsing SQL statements. Qt expects to "pass through" the queries and then somehow parse the result set and transform it into a Qt-friendly representation.
So if you only want a Qt-friendly representation, I don't see a reason to go through the indirection of SQL.
But regarding your problem:
In SQL, indexes are usually not stated in the queries but declared when the table schema is created. SQL Server does, however, have a way to "hint" indexes; is that what you are looking for?
SELECT column_list FROM table_name WITH (INDEX (index_name) [, ...]);
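For contrast, in a regular SQL database the index from your example would just be declared once when the schema is created, and the optimizer picks it up on its own for that kind of query (the index name is illustrative):

CREATE INDEX idx_foo_a ON Foo (A);

-- the optimizer decides by itself to use idx_foo_a here
SELECT * FROM Foo WHERE A = 10;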