I'm seeing a problem when querying Sybase IQ with a prepared statement. The query works fine when I type the entire query as text and then call prepareStatement on it with no parameters. But when I put in one parameter, I get back errors, even though my SQL is correct. Any idea why?
This code works perfectly fine and runs my query:
errorquery<<"SELECT 1 as foobar \
, (SUM(1) over (partition by foobar) ) as myColumn \
FROM spgxCube.LPCache lpcache \
WHERE lpcache.CIG_OrigYear = 2001 ";
odbc::Connection* connQuery= SpgxDBConnectionPool::getInstance().getConnection("MyServer");
PreparedStatementPtr pPrepStatement(connQuery->prepareStatement(errorquery.str()));
pPrepStatement->executeQuery();
But this is the exact same thing, except instead of typing "2001" directly in the code, I insert it as a parameter:
errorquery<<"SELECT 1 as foobar \
, (SUM(1) over (partition by foobar) ) as myColumn \
FROM spgxCube.LPCache lpcache \
WHERE lpcache.CIG_OrigYear = ? ";
odbc::Connection* connQuery = SpgxDBConnectionPool::getInstance().getConnection("MyServer");
PreparedStatementPtr pPrepStatement(connQuery->prepareStatement(errorquery.str()));
int intVal = 2001;
pPrepStatement->setInt(1, intVal);
pPrepStatement->executeQuery();
That yields this error:
[Sybase][ODBC Driver][Adaptive Server Anywhere]Invalid expression near '(SUM(1) over(partition by foobar)) as myColumn'
Any idea why the first one works while the second one fails? Are you not allowed to use "partition by" with inserted SQL parameters, or something like that?
The Sybase Adaptive Server Anywhere (ASA) error is fine; there is a Sybase ASA instance included in the IQ database, used for the SYSTEM space.
I do not know if partition by is supported, or fully supported, in versions prior to Sybase IQ v12.7. I recall having problems with it under v12.6. Under v12.7 or better it should be fine, and otherwise your command looks good to me.
I know very little about Adaptive Server Anywhere, but you're using the Adaptive Server Anywhere driver to query Sybase IQ.
Is that really what you want?
I want to translate the following snippet of code for use in Proc SQL for SAS:
SUM( IIF( INLIST( a.Department, DptGI, DptOncology, DptSurgery ), 1, 0 )) as TotalApts,;
However IIF() is not recognized by PROC SQL.
Can I implement an if/else or some kind of CASE statement?
None of that seems to be working for me.
The SAS version of IIF is IFN or IFC, depending on whether you want a numeric or character return value (it doesn't matter what the input columns are, just what you want back). As long as you're using it in native SAS PROC SQL (and not pass-through SQL), this should work.
However, INLIST is also not a SAS function; you'd have to rewrite that to a normal IN operator as well.
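For example, a minimal sketch of the combined rewrite; the table name Appointments is made up, and I'm assuming the Dpt names stand for character constants:
proc sql;
  /* Appointments is a hypothetical stand-in for the real source table */
  select SUM( ifn( a.Department in ('GI','Oncology','Surgery'), 1, 0 ) ) as TotalApts
  from Appointments as a;
quit;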
You should be able to use CASE WHEN statements as well; if you would like help with those, post the code you've tried.
IIF() is not a standard SQL function. It might be implemented as an extension in some SQL dialects, most likely those created by Microsoft.
I am not sure what INLIST() is either. I will assume it is some type of test of whether the first argument is among the other arguments. If so, you could replace it with one of the SAS functions WHICHN() or WHICHC(), depending on whether the values in the variables are numbers or strings.
The normal SQL syntax for that type of operation is CASE.
SUM( case when ( a.Department= DptGI) then 1
when ( a.Department=DptOncology) then 1
when ( a.Department=DptSurgery ) then 1
else 0 end
) as TotalApts
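If you go the WHICHN() route mentioned above, the same CASE collapses to a single test (this assumes the values are numeric; for character values switch to WHICHC):
SUM( case when whichn(a.Department, DptGI, DptOncology, DptSurgery) > 0
          then 1 else 0 end
   ) as TotalApts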
If you are just trying to get a result of 1 or 0, and you are OK with using SAS-specific syntax, then you can just sum the result of the boolean expression.
SUM( (a.Department=DptGI) or (a.Department=DptOncology)
or (a.Department=DptSurgery)) as TotalApts
Now, if DPTGI and the other values are constants and not variables, then you could use the IN operator.
SUM(a.Department in ('GI','Oncology','Surgery')) as TotalApts
After going through the documentation of utPLSQL 3.0.2, I couldn't find any reference to the assertion API available in the older versions. Please let me know whether there is an equivalent of assertions like utassert.eqtable in the newer versions.
I have just recently gone through the same pain. Most utPLSQL examples out there are for utPLSQL v2. It appears that the assertions have been deprecated and replaced by "Expects". I found a great blog post by Jacek Gebal that describes this. I've tried to put this and other useful links on a page about how unit testing fits into Redgate's Oracle DevOps pipeline (I work for Redgate, and we often get asked how best to implement automated unit testing for Oracle).
I don't think you can compare tables straight away, but you can compare cursors, which is quite flexible because you can, for instance, set up a cursor with test data based on a dual query, and then check that against the actual data in the table, something like this:
procedure TestCursorExample is
v_Expected sys_refcursor;
v_Actual sys_refcursor;
begin
-- Arrange (Nothing really to arrange, except setting the expectation).
open v_Expected for
select 'me#example.com' as Email
from dual;
-- Act
SomeUpsertProc('me', 'me#example.com');
-- Assert
open v_Actual for
select Email
from Tbl_User
where UserName = 'me';
ut.expect(v_Actual).to_equal(v_Expected);
end;
Also, the example above works in Oracle 11, but if you're on 12c, apparently things get even easier, because you can use the TABLE operator with locally defined types.
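A minimal sketch of that 12c variant; the type names (t_UserRec, t_UserTab) are made up and would be declared in the test package spec:
-- In the package spec (hypothetical):
-- type t_UserRec is record (Email varchar2(200));
-- type t_UserTab is table of t_UserRec;
procedure TestWithTableOperator is
  v_Expected sys_refcursor;
  v_Actual   sys_refcursor;
  v_Rows     t_UserTab := t_UserTab();
begin
  -- Arrange: build the expected row in a local collection.
  v_Rows.extend;
  v_Rows(1).Email := 'me#example.com';
  -- 12c allows TABLE() over a collection of a package-spec type.
  open v_Expected for
    select Email from table(v_Rows);
  -- Act
  SomeUpsertProc('me', 'me#example.com');
  -- Assert
  open v_Actual for
    select Email from Tbl_User where UserName = 'me';
  ut.expect(v_Actual).to_equal(v_Expected);
end;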
I've used a similar solution to verify that certain columns of a row were updated while others were not. You can easily open a cursor for the original data, with some columns replaced by the new fixed values. Then do the update. Then open a cursor with the new actual data of all columns. You still have to write the queries, but it's way more compact than querying everything into variables and comparing those individually.
And, because you can open the 'expected' cursor before doing the actual 'act' step of the test, you can be sure that the query with 'expected' data is not affected by the test itself, and you can even base that cursor on the data you are going to modify.
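For example, a sketch of that column-check pattern; SomeUpdateProc and the CreatedAt column are made-up names:
procedure TestOnlyEmailUpdated is
  v_Expected sys_refcursor;
  v_Actual   sys_refcursor;
begin
  -- Expected: the current row, but with Email replaced by the new fixed value.
  open v_Expected for
    select UserName, 'new#example.com' as Email, CreatedAt
    from Tbl_User
    where UserName = 'me';
  -- Act: the update under test; CreatedAt should remain untouched.
  SomeUpdateProc('me', 'new#example.com');
  -- Actual: the same columns after the update.
  open v_Actual for
    select UserName, Email, CreatedAt
    from Tbl_User
    where UserName = 'me';
  ut.expect(v_Actual).to_equal(v_Expected);
end;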
For comparing the data, the cursors are serialized to XML. This may have some side effects. In the test example above, my act step didn't actually do anything, so I got this difference, showing the count as well as the missing data.
If your cursors have more columns and multiple differences, it can sometimes take a few seconds to spot the differences between the XML tags. Also, there are currently some edge-case issues with this, I think because of how trimming works in XML.
1) testcursorexample
Actual: refcursor [ count = 0 ] was expected to equal: refcursor [ count = 1 ]
Diff:
Rows: [ 1 differences ]
Row No. 1 - Missing: <EMAIL>me#example.com</EMAIL>
at "MySchema.MyTestPackage", line 410 ut.expect(v_Actual).to_equal(v_Expected);
See also: 'comparing cursors' from utPLSQL 3 concepts
Long-time reader, first-time questioner.
Using SAS Data Integration Studio, when you create a summary transformation, the table options advanced tab lets you add a WHERE statement to your code automatically. Unfortunately, it adds some code that makes this resolve incorrectly. Putting the following in the WHERE text box:
TESTFIELD = "TESTVALUE"
creates
%let _INPUT_options = %nrquote(WHERE = %(TESTFIELD = %"TESTVALUE%"%));
In the code, it is used like this:
proc tabulate data = &_INPUT (&_INPUT_options)
But it resolves to:
WHERE = (TESTFIELD = "TESTVALUE")
_
22
ERROR: Syntax error while parsing WHERE clause.
ERROR 22-322: Syntax error, expecting one of the following: a name, a quoted string, a numeric constant, a datetime constant, a missing value, (, *, +, -, :, INPUT, NOT, PUT, ^, ~.
My question is this: is there a way to add a function to the WHERE statement box that would allow this quotation mark to be properly added here?
Note that all functions get the preceding % when added to the WHERE statement automatically, and I have no control over that. This seems like something that should be relatively easy to fix, but I haven't found a simple way yet.
The % are simply escaping the " and () characters; they're perfectly harmless, really. The bigger problem is the %NRQUOTE "quotes" (which are nonprinting characters that tell SAS this is macro-quoted); they mess up the WHERE processing.
Use %UNQUOTE( ... ) to remove these.
Example:
data have;
testfield="TESTVALUE";
output;
testfield="AMBASDF";
output;
run;
%let _INPUT_options = %nrquote(WHERE = %(TESTFIELD = %"TESTVALUE%"%));
%put &=_input_options;
data want;
set have(%unquote(&_INPUT_options.));
run;
Thank you all for your responses. Long story short, I ended up creating a SAS Troubleshooting ticket. The analyst told me that they have now documented the issue, which should be resolved in a future iteration of DI.
The temporary solution was to create a new transformation with a slight alteration, adding an UNQUOTE (as mentioned above by Joe) to the source code before the input options:
proc tabulate data = &_INPUT (%unquote(&_INPUT_options)) %unquote(&procOptions);
For those interested, you will need to create the transformation in a public subfolder of your project so others can use it. Not what I was hoping for, but a workable solution while waiting for the version update.
The query has around 40k rows, taken generally from a cached query. For whatever reason the QoQ (ColdFusion query of queries) is just slow. I have tried removing most of the logic (DISTINCT, grouping, etc.) to no avail, which leads me to believe something is wrong in the settings. Anybody have an idea about what is going on and how to speed this up?
subcats (Datasource=, Time=42979ms, Records=14)
SELECT
DISTINCT(SNGP.subtyp1) AS cat,
MIN(SNGP.sortposition) AS sortposition,
MIN(taxonomy.web_url) AS url
FROM
SNGP,
taxonomy
WHERE
SNGP.typ > ''
AND UPPER(SNGP.typ) <> 'EMPTY'
AND UPPER(SNGP.DEPT) = 'SHOES' AND UPPER(SNGP.TYP) = 'FASHION' AND SNGP.SUBTYP1 <> 'EMPTY'
GROUP BY SNGP.subtyp1
ORDER BY SNGP.sortposition ASC
Do you have to do a QoQ? Could your original query be amended to give you the data you need? Could you even cache all the possible QoQs you're doing, on a schedule?
You're selecting from two tables (SNGP, taxonomy), but I don't see a join between them; see the sketch after these points.
web_url sounds like a string; why are you doing a MIN() on it?
In your WHERE clause, move the most restrictive parts first, e.g. if typ > '' restricts the results to 1000 rows but UPPER(SNGP.typ) <> 'EMPTY' would restrict them to just 100 rows, then put the latter first. This is general SQL advice; I'm not sure how well it works with QoQ.
40k rows to then select just 14 results sounds like quite a data mismatch; is there any other way you'd be able to get the data more restricted before you try your QoQ?
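On the join point, here is a sketch of what I mean. QoQ only supports the old comma-style join via the WHERE clause, and the join column (subtyp1 on both sides) is a guess, since I don't know your schemas:
SELECT
    SNGP.subtyp1 AS cat,
    MIN(SNGP.sortposition) AS sortposition,
    MIN(taxonomy.web_url) AS url
FROM
    SNGP,
    taxonomy
WHERE
    SNGP.subtyp1 = taxonomy.subtyp1
    AND SNGP.typ > ''
    AND UPPER(SNGP.typ) <> 'EMPTY'
    AND UPPER(SNGP.DEPT) = 'SHOES'
    AND UPPER(SNGP.TYP) = 'FASHION'
    AND SNGP.SUBTYP1 <> 'EMPTY'
GROUP BY SNGP.subtyp1
ORDER BY sortposition
Note that DISTINCT becomes redundant once you GROUP BY subtyp1, and without a join condition you were getting a cartesian product of the two tables, which alone could explain the 42 seconds.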
I am using the libodbc++ ODBC wrapper for C++, designed similar to JDBC. I have a prepared statement "INSERT INTO t1 (col1) VALUES (?)", where t1.col1 is defined as VARCHAR(500).
When I call statement->setString(1, s), the value of s is truncated to 255. I suspect the libodbc++ library, but as I am not very familiar with ODBC I'd like to be sure that the wrapper doesn't just expose a restriction of the underlying ODBC. The ODBC API reference is too complicated to be skimmed quickly and frankly I really don't want to do that, so pardon me for asking a basic question.
NOTE: an un-prepared and un-parameterized insert statement via the same library inserts a long value OK, so it isn't a problem with the MySQL DB.
For long strings, use PreparedStatement::setAsciiStream() instead of PreparedStatement::setString().
But when using a stream, I often encounter the error "HY104 Invalid precision value", which is annoying because I have no idea how to tackle it head-on. However, I work around it with the following steps:
1. Order the columns in the SQL statement so that non-stream columns go first.
2. If that doesn't work, split the statement into multiple statements, updating or querying a single column per statement.
But (again), in order to insert a row first and then update some columns in a streaming manner, one may have to get the last insert ID, which turns out to be another challenge that I have so far failed to tackle head-on...
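A minimal sketch of the stream approach, assuming libodbc++'s JDBC-style setAsciiStream(int, std::istream*, int) signature, the libodbc++ headers, and the t1/col1 table and PreparedStatementPtr wrapper from the question:
#include <sstream>
#include <string>
// Insert a string longer than 255 characters via an ASCII stream.
void insertLongValue(odbc::Connection* conn, const std::string& s)
{
    PreparedStatementPtr pStmt(
        conn->prepareStatement("INSERT INTO t1 (col1) VALUES (?)"));
    std::istringstream in(s);
    // The length argument tells the driver how many bytes to read from
    // the stream, so the value is not capped at 255 like setString().
    pStmt->setAsciiStream(1, &in, static_cast<int>(s.size()));
    pStmt->executeUpdate();
}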
I don't know libodbc++, but prepared statements used via the ODBC API can store more characters; I use them with a Delphi/Kylix ODBC wrapper.
Maybe there is some configuration in libodbc++ to set the value length limit? I have such a setting in my Delphi library. If you use a prepared statement, you can allocate a big chunk of memory, divide it into fields, and tell ODBC where the block for each column starts and how long it is via the SQLBindParameter() function.
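For reference, a sketch of the raw ODBC call (standard ODBC API, independent of libodbc++); the column size of 500 matches the VARCHAR(500) from the question, and hstmt is assumed to be an allocated statement handle on an open connection:
#include <sql.h>
#include <sqlext.h>
#include <cstring>
// Bind and insert a string of up to 500 characters through raw ODBC.
void insertWithBindParameter(SQLHSTMT hstmt, const char* value)
{
    SQLCHAR buf[501];
    std::strncpy(reinterpret_cast<char*>(buf), value, 500);
    buf[500] = '\0';
    SQLLEN ind = SQL_NTS;  // value is null-terminated

    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     500,          // column size: VARCHAR(500)
                     0,            // decimal digits: n/a for strings
                     buf,          // buffer holding the parameter value
                     sizeof(buf),  // buffer length in bytes
                     &ind);
    SQLPrepare(hstmt, (SQLCHAR*)"INSERT INTO t1 (col1) VALUES (?)", SQL_NTS);
    SQLExecute(hstmt);
}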