The TOP keyword in the generated SQL wraps the number in brackets (I presume for SQL Server Compact support), but this fails on my SQL Server 2000 box, which doesn't accept the brackets.
Example C# Code:
var doc = Logic.Document.All().FirstOrDefault(d => d.Guid == Request.QueryString["guid"]);
Produces the following SQL error:
Line 1: Incorrect syntax near '('.
as it generates the following SQL:
exec sp_executesql N'SELECT TOP (1) .....'
If I execute the same SQL manually without the brackets, it runs just fine.
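For reference, the bracketed form of TOP was only introduced in SQL Server 2005, which matches what I'm seeing (a quick sketch against a hypothetical Document table):
-- Works on SQL Server 2000 and later:
SELECT TOP 1 * FROM Document WHERE Guid = 'some-guid'
-- Works on SQL Server 2005 and later, but fails on SQL Server 2000
-- with "Incorrect syntax near '('":
SELECT TOP (1) * FROM Document WHERE Guid = 'some-guid'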
Is this a bug?
After further digging around the SubSonic source code, I posted a resolution here:
SubSonic3: Method "FirstOrDefault" throws exception with SQL Server 2000
I am trying to run this simple code in SSMS connected to Azure SQL DW, and it fails. I have tried a few different variations, but none of them seems to work.
BEGIN
PRINT 'Hello ';
WAITFOR DELAY '00:00:02'
PRINT 'Another';
END
Msg 103010, Level 16, State 1, Line 47
Parse error at line: 2, column: 16: Incorrect syntax near ';'.
A bloody workaround until we have that simple built-in function:
1- Create a proc named "spWait" as follows:
CREATE PROC spWait @Seconds INT
AS
BEGIN
    DECLARE @BEGIN DATETIME
    DECLARE @END DATETIME
    SET @BEGIN = GETDATE()
    SET @END = DATEADD(SECOND, @Seconds, @BEGIN)
    -- Busy-wait until @Seconds have elapsed
    WHILE (@BEGIN < @END)
    BEGIN
        SET @BEGIN = GETDATE()
    END
END
2- Call this between your commands
--Do this
EXEC spWait 3
--Do that
Correct. At the moment the WAITFOR statement isn't supported in Azure SQL DW. Note that the documentation for this statement states near the top which products it "applies to", and Azure SQL DW isn't among them.
Please vote for this feature suggestion to help Microsoft prioritize this enhancement.
It may not help you much, but you can connect to the master database under a separate connection and run the WAITFOR statement.
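For example, something like this on that separate master connection (a sketch; adjust the delay to whatever you need):
-- WAITFOR runs fine when connected to the master database:
WAITFOR DELAY '00:00:02';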
After going through the documentation of utPLSQL 3.0.2, I couldn't find any reference to the assertion API that was available in the older versions. Please let me know whether there is an equivalent of an assertion like utassert.eqtable in the newer versions.
I have just recently gone through the same pain. Most utPLSQL examples out there are for utPLSQL v2. It appears that the assertions have been deprecated and replaced by "expects". I found a great blog post by Jacek Gebal that describes this. I've tried to put this and other useful links on a page about how unit testing fits into Redgate's Oracle DevOps pipeline (I work for Redgate, and we often get asked how best to implement automated unit testing for Oracle).
I don't think you can compare tables straight away, but you can compare cursors, which is quite flexible. You can, for instance, set up a cursor with test data based on a query against dual, and then check that against the actual data in the table, something like this:
procedure TestCursorExample is
  v_Expected sys_refcursor;
  v_Actual   sys_refcursor;
begin
  -- Arrange (nothing really to arrange, except setting the expectation).
  open v_Expected for
    select 'me@example.com' as Email
    from dual;
  -- Act
  SomeUpsertProc('me', 'me@example.com');
  -- Assert
  open v_Actual for
    select Email
    from Tbl_User
    where UserName = 'me';
  ut.expect(v_Actual).to_equal(v_Expected);
end;
Also, the example above works in Oracle 11, but if you're on 12c, things apparently got even easier, because you can use the table operator with locally defined types.
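I haven't tried it myself, but a 12c variant might look roughly like this (a sketch; the local collection type is hypothetical, and SomeUpsertProc/Tbl_User are the same placeholder names as above):
procedure TestTableOperatorExample is
  -- 12c allows the table operator on locally defined collection types.
  type t_email_tab is table of varchar2(100);
  v_ExpectedData t_email_tab := t_email_tab('me@example.com');
  v_Expected sys_refcursor;
  v_Actual   sys_refcursor;
begin
  -- Arrange: build the expectation from the local collection.
  open v_Expected for
    select column_value as Email
    from table(v_ExpectedData);
  -- Act
  SomeUpsertProc('me', 'me@example.com');
  -- Assert
  open v_Actual for
    select Email
    from Tbl_User
    where UserName = 'me';
  ut.expect(v_Actual).to_equal(v_Expected);
end;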
I've used a similar solution to verify that certain columns of a row were updated while others were not. You can easily open a cursor for the original data, with some columns replaced by the new fixed values. Then do the update. Then open a cursor with the new actual data of all columns. You still have to write the queries, but it's far more compact than querying everything into variables and comparing those individually.
And, because you can open the 'expected' cursor before doing the actual 'act' step of the test, you can be sure that the query with 'expected' data is not affected by the test itself, and can even base that cursor on the data you are going to modify.
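A sketch of that pattern, reusing the hypothetical Tbl_User from above (the column names are placeholders):
procedure TestColumnUpdateExample is
  v_Expected sys_refcursor;
  v_Actual   sys_refcursor;
begin
  -- Arrange: the expected row is the original row, with only the columns
  -- that should change replaced by their fixed new values. Opened before
  -- the act step, so it isn't affected by the update itself.
  open v_Expected for
    select UserName, 'new@example.com' as Email, CreatedDate
    from Tbl_User
    where UserName = 'me';
  -- Act
  SomeUpsertProc('me', 'new@example.com');
  -- Assert: compare all columns, updated and untouched alike.
  open v_Actual for
    select UserName, Email, CreatedDate
    from Tbl_User
    where UserName = 'me';
  ut.expect(v_Actual).to_equal(v_Expected);
end;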
For comparing the data, the cursors are serialized to XML. This may have some side effects. In the test example above, my act step didn't actually do anything, so I got the difference below, showing the row count as well as the missing data.
If your cursors have more columns and multiple differences, it can sometimes take a few seconds to spot the differences between the XML tags. Also, there are currently some edge-case issues with this, I think because of how trimming works in XML.
1) testcursorexample
Actual: refcursor [ count = 0 ] was expected to equal: refcursor [ count = 1 ]
Diff:
Rows: [ 1 differences ]
Row No. 1 - Missing: <EMAIL>me@example.com</EMAIL>
at "MySchema.MyTestPackage", line 410 ut.expect(v_Actual).to_equal(v_Expected);
See also: 'comparing cursors' from utPLSQL 3 concepts
I have the following code running inside a macro. When it is run in interactive mode, it runs absolutely fine, with no errors or warnings. That has been the case for the last two years.
The same code has now been deployed in batch mode, where it generates the warning WARNING: Apparent symbolic reference FIRSTRECCOUNT not resolved. and no value is assigned to the macro variable.
My question is, does anyone have any ideas why batch mode and interactive mode would behave differently?
Here is some more information:
The dataset is being created, and it is in the work library.
The dataset does get opened by the data step.
`firstreccount` doesn't get initialised anywhere else in the program.
I have searched the SAS community. There is a topic here, but I don't have the same errors in batch initialisation as described in the answer.
There is detailed information on the warning, but it doesn't explain why it would work in interactive mode and not in batch mode.
1735 %LET FIRSTSET = work.dataset1;
1744 DATA _NULL_;
1745 IF 0 THEN
1746 SET &FIRSTSET NOBS=X;
1747 CALL SYMPUT('FIRSTRECCOUNT' ,X);
1748 STOP;
1749 RUN;
1755 DATA _NULL_;
1756 IF 0 THEN
1757 SET &SECONDSET NOBS=X;
1758 CALL SYMPUT('SECONDRECOUNT' ,X);
1759 STOP;
1760 RUN;
WARNING: Apparent symbolic reference FIRSTRECCOUNT not resolved.
Update:
So I have attempted to replicate the error by copying the code that produces the warning into a separate scheduled flow, but it didn't cause any errors at all.
By the way, the original job was deployed from SAS DI Studio. I have checked all lines in the user-written code nodes and made sure their length was within 80 characters, as recommended by @RawFocus and @RobertPentridge, but it didn't solve the issue.
As recommended by @data_null_, I have checked VALIDVARNAME; it was different between interactive mode (value "any") and batch mode (value "V7"), but changing it made no difference.
I have rewritten the logic to get the number of observations by calling ATTRN on an open dataset, as sketched below. This eliminated the warning, but the program would still fail, with warnings popping up in different places, which made me think Robert Pentridge was correct. At the same time, I got an error about a macro not being resolved. The macro had been inserted by DI Studio to collect performance MI, even though the job wasn't meant to be collecting MI. This made me think that SAS DI Studio was not generating the deployed code correctly, so I manually edited the deployed code to remove the offending macro call. I also spotted one line with an MD5 function call that was too long because of the number of parameters being passed to it, so I inserted some line breaks. And finally the problem was fixed!!
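For reference, the ATTRN-based lookup looks roughly like this (a sketch; &FIRSTSET as in the original log):
%LET FIRSTSET = work.dataset1;
/* Open the dataset, read its logical observation count, then close it */
%LET DSID = %SYSFUNC(OPEN(&FIRSTSET));
%LET FIRSTRECCOUNT = %SYSFUNC(ATTRN(&DSID, NLOBS));
%LET RC = %SYSFUNC(CLOSE(&DSID));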
I still need to do something about the job, because when it gets redeployed from SAS DI Studio it will generate the same errors again. I don't have time to look into this further at the moment.
Conclusion: what you write in SAS DI Studio and what actually gets deployed can differ slightly, which can cause the syntax parser to throw errors in random places. I will mark Robert's answer as correct because it got me closer to solving the problem than any other answer.
The problem could be happening above the code snippet you pasted. The parser got into a funk earlier and ended up issuing a warning about code that is perfectly fine.
Check to make sure that no code within a macro is longer than ~160 characters on a single line. I try to keep my code well below that, but long lines of code can run fine interactively and fail in batch, particularly when inside a macro.
I expect your program has some small error above this point that does not cause SAS to go into syntax-check mode when run interactively, but does cause SAS to set OBS to 0 and enter syntax-check mode when run in batch.
One possibility is the limit (in batch mode) of the length of a line in your submitted SAS program:
See: http://support.sas.com/kb/15/883.html
Which version of SAS are you running?
I'm working on a project that uses the TransposeTupleToBag UDF from LinkedIn's datafu UDF collection, found here: https://github.com/linkedin/datafu/tree/master/src/java/datafu/pig/util. I execute the following commands in the grunt shell:
REGISTER jar-file;
DEFINE Transpose datafu.pig.util.TransposeTupleToBag();
a = LOAD 'file' USING PigStorage(',') AS (schema);
b = FOREACH a GENERATE select_columns_from_schema;
c = FOREACH b GENERATE col1, col2, Transpose(col3, col4, ..., coln);
When I execute the last line, I get this error:
[main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: Instance name is null.
This should not happen unless UDFContextSignature was not set.
What am I doing wrong, and how can I avoid it? I have not changed any of their code. I'm only using TransposeTupleToBag, FieldNotFound and AliasableEvalFunc, as those were the classes required to run Transpose successfully. I even tried with all the classes loaded, and it still gave me the same error. What's going on? Please help. Thanks!
TransposeTupleToBag requires a feature in Pig 0.11 where setUDFContextSignature is called. This is used to distinguish each invocation of the UDF. This method doesn't exist in Pig 0.10.
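To illustrate the mechanism, here is a minimal hypothetical UDF (not datafu's actual code): Pig 0.11 calls setUDFContextSignature on each EvalFunc so the UDF can key its per-invocation state, while Pig 0.10 never calls it.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

public class ExampleFunc extends EvalFunc<DataBag> {
    private String signature;

    // Pig 0.11+ invokes this with a unique signature per UDF invocation;
    // under Pig 0.10 it is never called, so the signature stays null.
    @Override
    public void setUDFContextSignature(String signature) {
        this.signature = signature;
    }

    @Override
    public DataBag exec(Tuple input) throws IOException {
        // A real UDF would use this.signature to look up per-invocation
        // properties in UDFContext; omitted in this sketch.
        return null;
    }
}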
It turns out that LinkedIn's datafu is tested on Pig 0.11.1 and nothing else. I was running Pig 0.10, so it wouldn't work, as some property is probably not being set in Pig 0.10 but is in Pig 0.11.1.
I'm seeing a problem when querying Sybase IQ with a prepared statement. The query works fine when I type the entire query as text and then call prepareStatement on it with no parameters. But when I stick in one parameter, I get back errors, even though my SQL is correct. Any idea why?
This code works perfectly fine and runs my query:
errorquery<<"SELECT 1 as foobar \
, (SUM(1) over (partition by foobar) ) as myColumn \
FROM spgxCube.LPCache lpcache \
WHERE lpcache.CIG_OrigYear = 2001 ";
odbc::Connection* connQuery= SpgxDBConnectionPool::getInstance().getConnection("MyServer");
PreparedStatementPtr pPrepStatement(connQuery->prepareStatement(errorquery.str()));
pPrepStatement->executeQuery();
But this is the exact same thing, except that instead of typing "2001" directly in the code, I insert it with a parameter:
errorquery<<"SELECT 1 as foobar \
, (SUM(1) over (partition by foobar) ) as myColumn \
FROM spgxCube.LPCache lpcache \
WHERE lpcache.CIG_OrigYear = ? ";
odbc::Connection* connQuery = SpgxDBConnectionPool::getInstance().getConnection("MyServer");
PreparedStatementPtr pPrepStatement(connQuery->prepareStatement(errorquery.str()));
int intVal = 2001;
pPrepStatement->setInt(1, intVal);
pPrepStatement->executeQuery();
That yields this error:
[Sybase][ODBC Driver][Adaptive Server Anywhere]Invalid expression near '(SUM(1) over(partition by foobar)) as myColumn'
Any idea why the first one works while the second one fails? Are you not allowed to use "partition by" with inserted SQL parameters, or something like that?
The Sybase (ASA) Adaptive Server Anywhere error is fine in itself; there is a Sybase ASA instance included in the IQ DB, used for the SYSTEM space.
I do not know if partition by is supported / fully supported in versions prior to Sybase IQ v12.7. I recall having problems with it under v12.6. Under v12.7 or later it should be fine, and otherwise your command looks good to me.
I know very little about Adaptive Server Anywhere, but you're using the Adaptive Server Anywhere driver to query Sybase IQ.
Is that really what you want?
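If you want to confirm which engine is actually answering that connection, a quick probe like this should work (a sketch; both ASA and IQ expose @@version as far as I know):
-- Reports the engine name and version serving the current connection:
SELECT @@version;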