Stack depth limit exceeded in Postgres when I cast

INSERT INTO table2 (column1)
SELECT cast(column1 AS INTEGER)
FROM table1
LIMIT 100;
I want to cast data from one table while inserting it into another, but when I try to do this I get this error:
ERROR: stack depth limit exceeded
HINT: Increase the configuration parameter "max_stack_depth" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.
CONTEXT: SQL function "toint" during inlining
SQL function "toint" during startup
SQL function "toint" statement 1
SQL function "toint" statement 1
SQL function "toint" statement 1
SQL function "toint" statement 1
SQL function "toint" statement 1
SQL function "toint" statement 1
SQL function "toint" statement 1
What can I do to solve this problem?
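The CONTEXT lines show that the cast is not using a built-in conversion at all: it is handled by a user-defined SQL function named toint, and that function keeps re-entering itself until the stack is exhausted. A hypothetical definition that would reproduce this (the text argument type is only a guess; check the real definitions with \df toint and \dC in psql):
CREATE FUNCTION toint(text) RETURNS integer AS $$
    -- The body performs the very cast this function implements,
    -- so every call inlines another call to toint().
    SELECT CAST($1 AS integer);
$$ LANGUAGE sql;
-- Registering it as the cast closes the loop.
CREATE CAST (text AS integer) WITH FUNCTION toint(text);
If something like that matches your setup, raising max_stack_depth will not help; the fix is to drop or rewrite the custom cast/function so the conversion no longer invokes itself.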

Related

Get the table name from another table and query on it in Redshift

I want to select all data from a table whose name needs to be extracted from another table each time.
select * from (select max(table_name) from tableA)
I am trying to write a stored procedure for it.
Below is the syntax.
CREATE OR REPLACE PROCEDURE db.stproced() AS $$
declare tablename VARCHAR(200);
BEGIN
select max(table_name) into tableA from easybuy.current_portfolio_table_name;
EXECUTE 'select * from ' || tablename || ' ;'
RETURN;
END;
$$ LANGUAGE plpgsql;
CALL easybuy.stproced();
The above code executes fine, but it does not print the records that should come from the EXECUTE statement.
This cannot be done in plain SQL like your example, because the query is compiled and the target table needs to be known at compile time.
This means you need 2 SQL statements to perform this action. As you mention, a stored procedure can run a number of SQL statements, so that is a good idea. In many cases this will work, except that a stored procedure cannot return data over the JDBC/ODBC connection (AFAIK). A stored procedure can fill a table with the results or fill a cursor, but in both cases you will need a SELECT or FETCH to see them in your bench. Again you are back to needing 2 statements: executing the stored procedure and grabbing the results (SELECT or FETCH).
You could set up a wrapper around Redshift that takes some "special" command and maps it to the 2 SQL statements, and otherwise just passes SQL through. This can work, and there are tools available that work like this.
Some benches can be configured with macros that you could map to perform the 2 statements in question. This could be a route to look into.
If you explain the overarching problem you are trying to solve, there may be other routes to achieve this goal.
==============================================================
Adding a stored procedure example that will perform the desired operation.
First let's set up some dummy tables:
create table test1 (tname varchar(16));
insert into test1 values
('test2'),
('b123'),
('c123');
create table test2 (UUID varchar(16), Key varchar(16), Value varchar(16));
insert into test2 values
('a123', 'Key1', 'Val1'),
('b123', 'Key2', 'Val2'),
('c123', 'Key3', 'Val3');
Next we create the stored procedure:
CREATE OR REPLACE procedure indirect_table(curs1 INOUT refcursor)
AS
$$
DECLARE
row record;
BEGIN
select into row max(tname) as tname from test1;
OPEN curs1 for EXECUTE 'select * from ' || row.tname || ';';
END;
$$ LANGUAGE plpgsql;
A quick explainer: this procedure takes the name of a cursor as an argument and declares a record for storing the result of a query. The maximum table name from test1 is stored in this record, and the record's value is used to build a query that puts it in the FROM clause. This constructed query is opened into a cursor, where the results wait for a FETCH request.
So the last step is to call the procedure and fetch the results. These are the only steps needed in your script once the procedure is saved (committed).
call indirect_table('mycursor');
fetch all mycursor;
This will produce the desired output in the user's bench. (Note that "fetch all" is not supported on a single-node cluster; "fetch 1000" will be needed in that case.)
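With the dummy data above, max(tname) resolves to 'test2' (it sorts after 'b123' and 'c123'), so the fetch returns the three rows that were inserted into test2.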

SAS - How to know if previous PROC SQL modified a Database Table?

With PROC SQL it's possible to connect to a Database (Db2 in my case) and execute Inserts, Deletes, etc.
If one such process causes no modifications to the target table, you will see a note like this in the log:
NOTE: No data found/modified.
So it's clear that SAS checks for this after every such step.
Can I access this information during the execution of a program, other than by parsing the log on the fly?
Perhaps there is some sort of automatic macro variable/dataset that stores the status of the last step?
EDIT: I'm using Pass Thru SQL with EXECUTE-Statements.
Check the automatic macro variables PROC SQL creates after remote execution.
SQLXMSG contains descriptive information and the DBMS-specific return code for the error that is returned by the pass-through facility.
Note: Because the value of the SQLXMSG macro variable can contain special characters (such as &, %, /, *, and ;), use the %SUPERQ macro function when printing it: %put %superq(sqlxmsg); For information about the %SUPERQ function, see SAS Macro Language: Reference.
SQLXRC contains the DBMS-specific return code that is returned by the pass-through facility.
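A minimal sketch of how that looks in practice (the DB2 connection options and the schema/table names below are placeholders, not from your program):
proc sql;
  connect to db2 (ssid=mydb2);          /* placeholder connection options */
  execute (
    delete from myschema.mytable        /* placeholder schema and table */
    where load_dt < current date - 30 days
  ) by db2;
  %put Pass-through return code: &sqlxrc;
  %put %superq(sqlxmsg);
  disconnect from db2;
quit;
In DB2, a DELETE or UPDATE that touches no rows typically comes back with SQLCODE +100, so testing whether &sqlxrc is 100 should give you the "No data found/modified" check without parsing the log.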

Why doesn't Snowflake support CTE scope (any workaround?)

I'm a Business Intelligence (BI) consultant and I'm running into an issue where Snowflake doesn't support CTE scope.
In BI, it's incredibly useful to redefine bits of SQL. However, if I define a CTE called revenue_calculations, then put something new in the WHERE clause and re-declare revenue_calculations as a new CTE further down in the script (or nested within another CTE declaration), Snowflake only reads revenue_calculations one time and uses the first CTE declaration throughout the script.
Most other databases (Bigquery for example) and programming languages have scope for objects. Is there any workaround to this? Will this be changing?
***Updated to include code sample
with cte_in_question as (select 1),
cte2 as (
with cte_in_question as (select 2)
select * from cte_in_question
)
SELECT * FROM cte2
Snowflake evaluates this to 1 and BQ to 2. 2 seems much more correct to me. Thoughts?
It turns out that in Snowflake, by default, the data from the outer CTE is returned. But this behaviour can be altered: you need to contact Snowflake support and request that they change this behaviour (at your account level) so that the data from the inner CTE is returned.
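If waiting on an account-level change isn't practical, a simple workaround (my suggestion, not something Snowflake documents) is to give the inner CTE a distinct name so the reference is unambiguous on every engine; a sketch:
with cte_in_question as (select 1),
cte2 as (
    -- renamed so only one definition of each name is in scope
    with cte_in_question_inner as (select 2)
    select * from cte_in_question_inner
)
select * from cte2;
This returns 2 on both Snowflake and BigQuery.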

PROC SQL throwing error in cast statement at 'as' and alias at 'as'

PROC SQL;
SELECT end_dt-start_dt as EXPOSURE,
(CASE WHEN (EXPOSURE)<0 THEN 0 ELSE CAST(TRUNC((EXPOSURE)/30+0.99) as INTEGER) END as bucket)
FROM TABLE
This statement works fine in database SQL but throws an error in PROC SQL at both occurrences of 'as'.
CAST is not a valid SAS SQL function. Use the appropriate SAS SQL function, in this case likely INT(), to convert the calculation to an integer value.
If you'd like to use your database's SQL, you need to use SAS SQL Pass-Through, which will pass the code directly to your database, but then the entire query must be valid on that database.
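If you go the pass-through route, a minimal sketch might look like this (the ODBC alias, DSN, and table name are placeholders; the inner query is shipped to the database untouched, so it can keep CAST and TRUNC as long as that database accepts them):
proc sql;
  connect to odbc as db (dsn=mydsn);    /* placeholder connection */
  create table work.buckets as
    select * from connection to db
    ( select end_dt - start_dt as exposure,
             case when end_dt - start_dt < 0 then 0
                  else cast(trunc((end_dt - start_dt)/30 + 0.99) as integer)
             end as bucket
      from table1 );                    /* placeholder table name */
  disconnect from db;
quit;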
SAS has attributes for every field, such as Length, Format, and Informat. They control how a value is stored, read in, and displayed.
Your PROC SQL does not require a type cast; instead, attach a format to the column.
PROC SQL; SELECT end_dt-start_dt as EXPOSURE, CASE WHEN CALCULATED EXPOSURE < 0 THEN 0 ELSE INT(CALCULATED EXPOSURE/30 + 0.99) END as bucket format=8. FROM TABLE; QUIT;
I'm not sure of the syntax of the whole statement, as I couldn't test it, although the overall idea holds true.

invoke function in SAS macro

I have the following SAS program with bugs.
proc sql
create table sigmav_n as
select std(V) as sigmav_new from bb
quit
...
sysfunc(abs(-sigmac_new))<0.01
Here V is a column name in table bb, and the program throws the following error:
Argument 1 to function ABS referenced by %SYSFUNC or %QSYSFUNC macro function is not a number.
Does anybody know the root cause of this?
Try this:
proc sql;
select std(V) into :sigmav_new from bb;
quit;
...
%sysfunc(abs(-&sigmav_new.))<0.01
This takes the std dev of V and puts it into a macro variable (holding a numeric value) that can then be used by abs() in the %sysfunc call.
You'll probably need a %let statement or something else going on, or you're going to perform the abs() and the result will just 'float off' into memory.
It's hard to tell exactly what you want to do with the value without seeing a bit more of your code.
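For example, a sketch (the macro variable names here are made up) that captures the comparison result instead of letting it evaporate:
%let abs_sigma   = %sysfunc(abs(&sigmav_new.));
%let sigma_small = %sysevalf(&abs_sigma < 0.01);   /* 1 if below the threshold, else 0 */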
A good SUGI paper on %sysfunc: %SYSFUNC: Extending the SAS Macro Language
Firstly, why would you negate a value before passing it to abs()?
Nonetheless, try this:
proc sql ;
select case when (abs(std(v)) <0.01) then 1 else 0 end into :sigmav_new from bb ;
quit ;
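And a possible way to act on that 0/1 flag afterwards (hypothetical macro, just to show the pattern):
%macro check_sigma;
  %if &sigmav_new = 1 %then %put NOTE: std(V) is below 0.01;
  %else %put NOTE: std(V) is 0.01 or more;
%mend check_sigma;
%check_sigma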