I created a stored procedure in Redshift and it runs successfully, but it is not working as expected.
For example, I'd like to delete data in the period set by the arguments.
-- Stored Procedure
CREATE OR REPLACE PROCEDURE sp_test(parm0 varchar(100), parm1 date, parm2 date)
AS '
BEGIN
EXECUTE
$_$ DELETE FROM test_table_b
WHERE $_$|| parm0 ||$_$
between $_$|| parm1 ||$_$ and $_$|| parm2 ||$_$ $_$;
end;
' language plpgsql;
-- Run Stored procedure
Begin;
Call sp_test('opsdt', '2021-01-16', '2021-01-17');
Commit;
-- Result
BEGIN executed successfully
Execution time: 0.07s
Statement 1 of 3 finished
0 rows affected
Call executed successfully
Execution time: 0.18s
Statement 2 of 3 finished
COMMIT executed successfully
Execution time: 0.13s
Statement 3 of 3 finished
Script execution finished
Total script execution time: 0.38s
The script ran successfully, but the records for '2021-01-16' and '2021-01-17' still remain in the table.
Any advice would be appreciated. Thanks in advance.
Thanks to @John Rotenstein, I can now run the stored procedure as expected. The original version failed because the date arguments were concatenated into the statement without surrounding quotes, so the BETWEEN bounds were evaluated as integer arithmetic (2021-01-16 = 2004) instead of as date literals, and matched nothing.
Here is a simple example for anyone who has the same issue.
-- Revised Procedure
CREATE OR REPLACE PROCEDURE sp_del_test(tbl_name varchar(50), col_name varchar(50), start_dt date, end_dt date)
AS $PROC$
DECLARE
sql VARCHAR(MAX) := '';
BEGIN
sql := 'DELETE FROM ' || tbl_name || ' WHERE ' || col_name || ' BETWEEN ''' || start_dt || ''' AND ''' || end_dt || '''';
RAISE INFO '%', sql;
EXECUTE sql;
END;
$PROC$ language plpgsql;
-- Executed Commands
Begin;
Call sp_del_test('test_table_b', 'opsdt', '2021-01-23', '2021-01-24');
Commit;
-- Return Message
BEGIN executed successfully
Execution time: 0.05s
Statement 1 of 3 finished
Warnings:
DELETE FROM test_table_b WHERE opsdt BETWEEN '2021-01-23' AND '2021-01-24'
0 rows affected
Call executed successfully
Execution time: 0.2s
Statement 2 of 3 finished
COMMIT executed successfully
Execution time: 0.12s
Statement 3 of 3 finished
Script execution finished
Total script execution time: 0.38s
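As a side note, the same dynamic statement can be built a bit more defensively with the quote_ident() and quote_literal() functions. The following is only a sketch of such a variant (the procedure name sp_del_test_safe is illustrative, and the explicit ::varchar casts assume quote_literal expects a string):
-- Defensive variant (sketch)
CREATE OR REPLACE PROCEDURE sp_del_test_safe(tbl_name varchar(50), col_name varchar(50), start_dt date, end_dt date)
AS $PROC$
DECLARE
sql VARCHAR(MAX) := '';
BEGIN
-- quote_ident() double-quotes identifiers and quote_literal() single-quotes values,
-- so stray quote characters in the arguments cannot break the generated statement
sql := 'DELETE FROM ' || quote_ident(tbl_name)
    || ' WHERE ' || quote_ident(col_name)
    || ' BETWEEN ' || quote_literal(start_dt::varchar)
    || ' AND ' || quote_literal(end_dt::varchar);
RAISE INFO '%', sql;
EXECUTE sql;
END;
$PROC$ language plpgsql;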
I want to use the ORACLE DBMS_SCHEDULER on my AWS RDS ORACLE
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Scheduler.html
to do the following command every minute:
delete from MYTABLE.RECEIVED_TOKEN where EXPIRY_DATE < systimestamp and rownum <= 1;
commit;
exit
Can I do that with this scheduler? I want to avoid using a Lambda if possible.
I don't understand much about how it works or whether I can schedule something like that.
I don't know AWS.
As this is an Oracle database, use its scheduling capabilities. How? "Convert" that delete statement into a stored procedure, which can then be scheduled by the (older and somewhat simpler) DBMS_JOB package or the (modern, improved and more complex) DBMS_SCHEDULER package.
Here's an example.
Procedure:
SQL> CREATE OR REPLACE PROCEDURE p_del_rt
2 IS
3 BEGIN
4 DELETE FROM received_token
5 WHERE expiry_date < SYSTIMESTAMP
6 AND ROWNUM <= 1;
7
8 COMMIT;
9 END;
10 /
Procedure created.
Daily job which runs at 02:00 (2 AM):
SQL> BEGIN
2 DBMS_SCHEDULER.CREATE_JOB (
3 job_name => 'delete_received_token',
4 job_type => 'PLSQL_BLOCK',
5 job_action => 'BEGIN p_del_rt; end;',
6 start_date =>
7 TO_TIMESTAMP_TZ ('10.01.2023 02:00 Europe/Zagreb',
8 'dd.mm.yyyy hh24:mi TZR'),
9 repeat_interval =>
10 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN; BYHOUR=2; BYMINUTE=0',
11 enabled => TRUE,
12 comments => 'Delete rows whose expiry date is less than "right now"');
13 END;
14 /
PL/SQL procedure successfully completed.
What is it set to?
SQL> SELECT job_action,
2 TO_CHAR (next_run_date, 'dd.mm.yyyy hh24:mi:ss') next_run_date
3 FROM USER_SCHEDULER_JOBS
4 WHERE job_name = 'DELETE_RECEIVED_TOKEN';
JOB_ACTION NEXT_RUN_DATE
-------------------- -------------------
BEGIN p_del_rt; end; 11.02.2023 02:00:00
SQL>
So that we wouldn't wait until tomorrow, I'll run the job manually. This is table contents before (dates are in DD.MM.YYYY format) (today is 10.02.2023, which means that ID = 1 and ID = 2 have expiry_date less than today):
SQL> SELECT * FROM received_token;
ID EXPIRY_DATE
---------- ------------
1 23.12.2022
2 28.01.2023
3 13.08.2023
SQL> BEGIN
2 DBMS_SCHEDULER.run_job ('delete_received_token');
3 END;
4 /
PL/SQL procedure successfully completed.
Table contents after:
SQL> SELECT * FROM received_token;
ID EXPIRY_DATE
---------- ------------
2 28.01.2023
3 13.08.2023
SQL>
Apparently, it works. Though, I'm not sure what you meant by the following condition:
and rownum <= 1
Why do you want to restrict the number of rows deleted to (at most) one? (It will be zero if no row's expiry_date is less than systimestamp.) Without that condition, both the ID = 1 and ID = 2 rows would have been deleted.
No problem with me, just saying.
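Side note: since your original requirement was to run every minute rather than daily, the same job can be created with a minutely repeat interval instead. A minimal sketch (the job name and immediate start date are illustrative):
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'delete_received_token_min',  -- illustrative name
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN p_del_rt; end;',
      start_date      => SYSTIMESTAMP,                 -- start right away
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=1',  -- run every minute
      enabled         => TRUE);
END;
/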
I'm trying to avoid the error:
ERROR: Teradata execute: Object 'MY_TABLE' does not exist.
when executing Teradata SQL from SAS.
This is the original SAS code I'm using:
proc sql;
connect to TERADATA (user='my_user' password=XXXXXXXXXX MODE=TERADATA TDPID='bdpr');
execute(database MY_DB) by TERADATA;
execute(Drop table MY_TABLE;) by TERADATA;
disconnect from TERADATA;
quit;
According to the documentation, .SET ERRORLEVEL 3807 SEVERITY 0 should fix my problem.
I tried inserting the following before my DROP TABLE statement:
execute(.SET ERRORLEVEL 3807 SEVERITY 0) by TERADATA;
execute(ECHO '.SET ERRORLEVEL 3807 SEVERITY 0') by TERADATA;
I tried combining both:
execute(ECHO '.SET ERRORLEVEL 3807 SEVERITY 0'; Drop table MY_TABLE;) by TERADATA;
Each attempt produced either a syntax error (for the calls without ECHO) or no effect on the error (for the ECHO variants).
The problem is that .SET ERRORLEVEL is not a SQL statement but a BTEQ command. According to the docs, it should be possible to execute BTEQ commands from standard Teradata SQL using the ECHO construct, but from SAS this doesn't seem to work.
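For reference, in a native BTEQ session (outside SAS) the sequence would look like this sketch:
.SET ERRORLEVEL 3807 SEVERITY 0
DROP TABLE MY_DB.MY_TABLE;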
I only need a solution to avoid the SAS error, both SAS side solutions as well as TERADATA solutions are ok for me.
Why not ask Teradata if the table exists and then have SAS conditionally run the drop? (The snippet below assumes it runs inside your existing proc sql block with the Teradata connection open; open-code %if requires SAS 9.4M5 or later, otherwise wrap it in a macro.)
%let tablekind=NONE;
select obj into :tablekind trimmed from connection to teradata
(select case when (tablekind in ('T','O')) then 'TABLE'
else 'VIEW' end as obj
from dbc.tablesv
where databasename = 'MY_DB' and tablename= 'MY_TABLE'
and tablekind in ('V','T','O')
)
;
%if &tablekind ne NONE %then %do;
execute(drop &tablekind. MY_DB.MY_TABLE;) by teradata;
%end;
I don't know the syntax to create an SP from SAS, but I doubt you will have the right to do so.
You might ask your DBA to install following SP, which drops any kind of table (including Volatile).
-- Drop a table without failing if the table doesn't exist:
REPLACE PROCEDURE drop_table_if_exists
(
/* database name, uses default database when NULL.
Must be calling user for Volatile Tables
*/
IN db_name VARCHAR(128) CHARACTER SET Unicode,
/* table name */
IN tbl_name VARCHAR(128) CHARACTER SET Unicode,
OUT msg VARCHAR(400) CHARACTER SET Unicode
) SQL SECURITY INVOKER -- check the rights of the calling user
BEGIN
DECLARE full_name VARCHAR(361) CHARACTER SET Unicode;
DECLARE sql_stmt VARCHAR(500) CHARACTER SET Unicode;
DECLARE exit HANDLER FOR SqlException
BEGIN
-- catch "table doesn't exist" error
IF SqlCode = 3807 THEN SET msg = full_name || ' doesn''t exist.';
ELSE
-- fail on any other error, e.g. missing access rights or wrong object type
RESIGNAL;
END IF;
END;
SET full_name = '"' || Coalesce(db_name,DATABASE) || '"."' || Coalesce(tbl_name,'') || '"';
SET sql_stmt = 'DROP TABLE ' || full_name || ';';
EXECUTE IMMEDIATE sql_stmt;
SET msg = full_name || ' dropped.';
END;
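A hypothetical call, e.g. from BTEQ or SQL Assistant (msg is just a placeholder name that receives the OUT message):
CALL drop_table_if_exists('MY_DB', 'MY_TABLE', msg);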
Maybe this could help you, if you run this as a proc call:
replace procedure drop_if_exists( in_object varchar(50))
begin
IF EXISTS(SELECT 1 FROM dbc.tables WHERE tablename = in_object
and databasename='<your database name>') THEN
CALL DBC.SysExecSQL('DROP TABLE ' || in_object);
END IF;
END;
And call it from SAS via:
execute (call drop_if_exists('MY_TABLE');) by TERADATA;
EDIT: SAS-invoked procedure creation
proc sql;
connect using DBCONN;
execute(
replace procedure drop_if_exists( in_object varchar(50))
begin
IF EXISTS(SELECT 1 FROM dbc.tables WHERE tablename = in_object
and databasename='<my database>') THEN
CALL DBC.SysExecSQL('DROP TABLE ' || in_object);
END IF;
END;
) by DBCONN;
disconnect from DBCONN;
quit;
I am facing unexpected behaviour when using the output every clause along with a table join clause.
I have a basic app with one input stream and 2 tables, which store different lists of values. There are also 2 queries:
The first query, query1, joins with table1 and, when there is a match, outputs the first event every 5 sec.
The second query, query2, does similarly: it joins with table2 and outputs the first value found every 5 sec.
The goal is that, every 5 seconds, when there is a value in the input stream that is contained in table 1 there will be a match, and if there is a value contained in table 2 there will be a different match, and both queries will keep silent until the next 5-second block.
The app is the following:
@App:name("delays_tables_join")
define stream input(value string);
define stream table_input(value string);
define table table1(value string);
define table table2(value string);
@sink(type='log')
define stream LogStream (value string);
-- fill table1
@info(name='insert table 1')
from table_input[value == '1']
insert into table1;
-- fill table2
@info(name='insert table 2')
from table_input[value == '2']
insert into table1;
-- query input join with table 1, output once every 5 sec
@info(name='query1')
from input join table1 on input.value == table1.value
select input.value
output first every 5 sec
insert into LogStream;
-- query input join with table 2, output once every 5 sec
@info(name='query2')
from input join table2 on input.value == table2.value
select input.value
output first every 5 sec
insert into LogStream;
When this app is run, the values 1 and 2 are first sent to table_input to fill both tables.
Then the values 1, 2, 1, 2, 1, 2... are sent repeatedly to the input stream.
The expectation is to see 2 values in LogStream every 5 seconds: the first appearance of value 1 and the first appearance of value 2.
Instead, only the first occurrence of value 1 appears every time, and value 2 never does:
[2020-04-02_18-55-16_498] INFO {io.siddhi.core.stream.output.sink.LogSink} - delays_tables_join : LogStream : Event{timestamp=1585846516098, data=[1], isExpired=false}
[2020-04-02_18-55-21_508] INFO {io.siddhi.core.stream.output.sink.LogSink} - delays_tables_join : LogStream : Event{timestamp=1585846521098, data=[1], isExpired=false}
Please note that when no table joins are involved, both queries work as expected. Example without joins:
@App:name("delays")
define stream Input(value string);
@sink(type='log')
define stream LogStream (value string);
@info(name='query1')
from Input[value == '1']
select value
output first every 5 sec
insert into LogStream;
@info(name='query2')
from Input[value == '2']
select value
output first every 5 sec
insert into LogStream;
this will produce the following output:
[2020-04-02_18-53-50_305] INFO {io.siddhi.core.stream.output.sink.LogSink} - delays : LogStream : Event{timestamp=1585846430304, data=[1], isExpired=false}
[2020-04-02_18-53-50_706] INFO {io.siddhi.core.stream.output.sink.LogSink} - delays : LogStream : Event{timestamp=1585846430305, data=[2], isExpired=false}
[2020-04-02_18-53-55_312] INFO {io.siddhi.core.stream.output.sink.LogSink} - delays : LogStream : Event{timestamp=1585846438305, data=[1], isExpired=false}
[2020-04-02_18-53-56_114] INFO {io.siddhi.core.stream.output.sink.LogSink} - delays : LogStream : Event{timestamp=1585846439305, data=[2], isExpired=false}
I was wondering whether this behaviour is expected, or whether there is an error in the design of the application.
Many thanks!
I was able to get the same results as in the "without join" version by fixing the "insert table 2" query: changing table1 to table2 in the insert into line. (With the typo, table2 stayed empty, so query2's join could never match.)
-- fill table2
@info(name='insert table 2')
from table_input[value == '2']
insert into table2;
I'm trying to abort/exit a query based on a conditional expression using a CASE statement:
If the table has 0 rows, the query should take the happy path.
If the table has > 0 rows, the query should abort/exit.
drop table if exists #dups_tracker ;
create table #dups_tracker
(
column1 varchar(10)
);
insert into #dups_tracker values ('John'),('Smith'),('Jack') ;
with c1 as
(select
0 as denominator__v
,count(*) as dups_cnt__v
from #dups_tracker
)
select
case
when dups_cnt__v > 0 THEN 1/denominator__v
else
1
end Ind__v
from c1
;
Here is the error message:
Amazon Invalid operation: division by zero; 1 statement failed.
There is no concept of aborting an SQL query. It either compiles into a query or it doesn't. If it does compile, the query runs.
The closest option would be to write a Stored Procedure, which can include IF logic. So, it could first query the contents of a table and, based on the result, decide whether it will perform another query.
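For illustration, here is a minimal sketch of that approach (the procedure name and message are hypothetical, and it assumes #dups_tracker exists in the session): the procedure raises an exception when the table has rows, which aborts the CALL and rolls back its work.
CREATE OR REPLACE PROCEDURE sp_dups_check()
AS $$
DECLARE
dups_cnt INT;
BEGIN
SELECT INTO dups_cnt count(*) FROM #dups_tracker;
IF dups_cnt > 0 THEN
-- abort: RAISE EXCEPTION stops the procedure and surfaces the message to the caller
RAISE EXCEPTION 'dups check failed: % duplicate row(s) found', dups_cnt;
END IF;
-- happy path: continue with the rest of the processing here
END;
$$ LANGUAGE plpgsql;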
Here is the logic I was able to write to abort a SQL script when the positive (duplicates found) case occurs:
/* Dummy Table to Abort Dups Check process if Positive */
--Dups Table
drop table if exists #dups;
create table #dups
(
dups_col varchar(1)
);
insert into #dups values('A');
--Dummy Table
drop table if exists #dummy ;
create table #dummy
(
dups_check decimal(1,0)
)
;
--When Table is not empty and has Dups
insert into #dummy
select
count(*) * 10
from #dups
;
/*
[Amazon](500310) Invalid operation: Numeric data overflow (result precision)
Details:
-----------------------------------------------
error: Numeric data overflow (result precision)
code: 1058
context: 64 bit overflow
query: 3246717
location: numeric.hpp:158
process: padbmaster [pid=6716]
-----------------------------------------------;
1 statement failed.
*/
--When Table is empty and doesn't have dups
truncate #dups ;
insert into #dummy
select
count(*) * 10
from #dups
;
drop table if exists temp_table;
create temp table temp_table (field_1 bool);
insert into temp_table
select case
when false -- or true
then 1
else 1 / 0
end as field_1;
This should compile, and fail when the condition isn't met.
Not sure why it's different from your example, though...
Edit: the above doesn't work querying against a table. Leaving it here for posterity.
How can I use a PostgreSQL count value in my program? A plain SELECT COUNT shows the value, but using the counts inside this query produces an error:
select
--poui.id::varchar || '/' || coalesce(poc.id::varchar,'') AS id,
poui.preorder_id as name,
poc.start_date as start_date,
poc.expire_date as end_date,
poui.state as status,
select count(*) as no_preorder_completed from preorder_user_input where state = 'done'
select count(*) as no_preorder_not_completed from preorder_user_input where state in ('draft','confirm')
count(*) as no_preorder_completed from preorder_user_input where state = 'done'
from
preorder_user_input poui
left join
preorder_config poc on (poui.preorder_id = poc.id)
group by
poc.id, poui.id, poui.preorder_id, poc.expire_date, poc.start_date, poui.state
Typically, if you want a count of only some of the rows in PostgreSQL 9.1, you need to use SUM and CASE instead, i.e.:
SUM(CASE WHEN mycolumn = desired THEN 1 ELSE 0 END)
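Applied to the query above, a sketch (it assumes the counts are wanted per preorder, so state is dropped from the select list and grouping so that both states can be counted within each group):
select
poui.preorder_id as name,
poc.start_date as start_date,
poc.expire_date as end_date,
sum(case when poui.state = 'done' then 1 else 0 end) as no_preorder_completed,
sum(case when poui.state in ('draft', 'confirm') then 1 else 0 end) as no_preorder_not_completed
from
preorder_user_input poui
left join
preorder_config poc on (poui.preorder_id = poc.id)
group by
poui.preorder_id, poc.start_date, poc.expire_date;
On PostgreSQL 9.4 and later the same thing can also be written with the FILTER clause, e.g. count(*) filter (where poui.state = 'done').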