I want to use Oracle DBMS_SCHEDULER on my AWS RDS Oracle instance
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Scheduler.html
to run the following commands every minute:
delete from MYTABLE.RECEIVED_TOKEN where EXPIRY_DATE < systimestamp and rownum <= 1;
commit;
exit
Can I do that with this scheduler? I want to avoid using a Lambda if possible.
I don't understand much about how it works or whether I can schedule something like that.
I don't know AWS.
As this is an Oracle database, use its scheduling capabilities. How? "Convert" that DELETE statement into a stored procedure, which can then be scheduled by the (older and somewhat simpler) DBMS_JOB package or the (modern, improved, and more complex) DBMS_SCHEDULER package.
Here's an example.
Procedure:
SQL> CREATE OR REPLACE PROCEDURE p_del_rt
2 IS
3 BEGIN
4 DELETE FROM received_token
5 WHERE expiry_date < SYSTIMESTAMP
6 AND ROWNUM <= 1;
7
8 COMMIT;
9 END;
10 /
Procedure created.
Daily job which runs at 02:00 (two hours past midnight); a per-minute variant is shown at the end of this answer:
SQL> BEGIN
2 DBMS_SCHEDULER.CREATE_JOB (
3 job_name => 'delete_received_token',
4 job_type => 'PLSQL_BLOCK',
5 job_action => 'BEGIN p_del_rt; end;',
6 start_date =>
7 TO_TIMESTAMP_TZ ('10.01.2023 02:00 Europe/Zagreb',
8 'dd.mm.yyyy hh24:mi TZR'),
9 repeat_interval =>
10 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN; BYHOUR=2; BYMINUTE=0',
11 enabled => TRUE,
12 comments => 'Delete rows whose expiry date is less than "right now"');
13 END;
14 /
PL/SQL procedure successfully completed.
What is it set to?
SQL> SELECT job_action,
2 TO_CHAR (next_run_date, 'dd.mm.yyyy hh24:mi:ss') next_run_date
3 FROM USER_SCHEDULER_JOBS
4 WHERE job_name = 'DELETE_RECEIVED_TOKEN';
JOB_ACTION NEXT_RUN_DATE
-------------------- -------------------
BEGIN p_del_rt; end; 11.02.2023 02:00:00
SQL>
So that we don't have to wait until tomorrow, I'll run the job manually. These are the table contents before (dates are in DD.MM.YYYY format; today is 10.02.2023, which means that the rows with ID = 1 and ID = 2 have an expiry_date less than today):
SQL> SELECT * FROM received_token;
ID EXPIRY_DATE
---------- ------------
1 23.12.2022
2 28.01.2023
3 13.08.2023
SQL> BEGIN
2 DBMS_SCHEDULER.run_job ('delete_received_token');
3 END;
4 /
PL/SQL procedure successfully completed.
Table contents after:
SQL> SELECT * FROM received_token;
ID EXPIRY_DATE
---------- ------------
2 28.01.2023
3 13.08.2023
SQL>
Apparently, it works. Though I'm not sure what you meant by using the following condition:
and rownum <= 1
Why do you want to restrict the number of rows deleted to (at most) one? (It'll be zero if no row's expiry_date is less than systimestamp.) Without that condition, both the ID = 1 and ID = 2 rows would have been deleted.
No problem with me, just saying.
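Since your actual requirement is to run the delete every minute (not daily), only the repeat_interval really changes. A minimal sketch, reusing the p_del_rt procedure from above (the job name here is arbitrary):

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'delete_received_token_minutely',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN p_del_rt; END;',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=1',
      enabled         => TRUE,
      comments        => 'Delete expired rows every minute');
END;
/

The repeat_interval uses standard DBMS_SCHEDULER calendaring syntax; per the AWS document you linked, the Oracle Scheduler is available on RDS as well.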
I can't see my error. Could you show me where I am going wrong?
CREATE OR REPLACE TRIGGER update_installments_info
AFTER INSERT OR UPDATE OF COLLECT_AMOUNT ON test_installments
FOR EACH ROW
DECLARE
v_id number;
BEGIN
UPDATE test_sales
SET REST_AMOUNT = REST_AMOUNT - :NEW.COLLECT_AMOUNT,
PAID_AMOUNT = PAID AMOUNT + :NEW.COLLECT_AMOUNT
WHERE SALES_ID = :NEW.SALES_ID;
END;
/
Error at line 4: PL/SQL: ORA-00933: SQL command not properly ended
The error is in the UPDATE's 3rd line; the column name isn't PAID AMOUNT (with a space) but PAID_AMOUNT. Once fixed:
SQL> CREATE OR REPLACE TRIGGER update_installments_info
2 AFTER INSERT OR UPDATE OF COLLECT_AMOUNT ON test_installments
3 FOR EACH ROW
4 DECLARE
5 v_id number;
6 BEGIN
7 UPDATE test_sales
8 SET REST_AMOUNT = REST_AMOUNT - :NEW.COLLECT_AMOUNT,
9 PAID_AMOUNT = PAID_AMOUNT + :NEW.COLLECT_AMOUNT --> here
10 WHERE SALES_ID = :NEW.SALES_ID;
11 END;
12 /
Trigger created.
SQL>
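A quick way to verify it - a sketch only; the column lists below are inferred from the trigger body, so adjust them to your actual table definitions and add any other mandatory columns:

-- fire the trigger with a hypothetical installment row
insert into test_installments (sales_id, collect_amount)
values (1, 100);

-- rest_amount should have decreased and paid_amount increased by 100
select sales_id, rest_amount, paid_amount
  from test_sales
 where sales_id = 1;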
I have an Apex application text item to enter email IDs with a comma separator.
Now I want to validate whether the email addresses entered in the application item are correct.
How do I achieve this?
I am using the code below, but it's not working.
declare
l_cnt varchar2(1000);
l_requestors_name varchar2(4000);
begin
select apex_item.text(1) Member
into l_requestors_name
from dual;
if not l_requestors_name not like '%@%' then
return true;
else
return false;
end if;
end;
I'd suggest you create a function which returns a Boolean or - as in my example - a character ('Y' - yes, it is valid; 'N' - no, it isn't). Why a character? You can use such a function in SQL; a Boolean works in PL/SQL, but my example is pure SQL.
I guess it isn't perfect, but it should be way better than just testing whether the string someone entered contains a monkey (@).
SQL> create or replace
2 function f_email_valid (par_email in varchar2)
3 return varchar2
4 is
5 begin
6 return
7 case when regexp_substr (
8 par_email,
9 '[a-zA-Z0-9._%-]+@[a-zA-Z0-9._%-]+\.[a-zA-Z]{2,4}')
10 is not null
11 or par_email is null then 'Y'
12 else 'N'
13 end;
14 end f_email_valid;
15 /
Function created.
SQL>
As the user can enter several e-mail addresses separated by a comma, you'll have to split them into rows and then check each of them. Have a look:
SQL> with test (text) as
2 -- sample data
3 (select 'littlefoot@gmail.com,bigfootyahoo.com,a@@hotmail.com,b123@freemail.hr' from dual),
4 split_emails as
5 -- split that long comma-separated values column into rows
6 (select regexp_substr(text, '[^,]+', 1, level) email
7 from test
8 connect by level <= regexp_count(text, ',') + 1
9 )
10 -- check every e-mail
11 select email, f_email_valid(email) is_valid
12 from split_emails;
EMAIL IS_VALID
------------------------------ --------------------
littlefoot@gmail.com Y
bigfootyahoo.com N
a@@hotmail.com N
b123@freemail.hr Y
SQL>
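To plug this into Apex itself, one option is a validation of type PL/SQL Function (returning Boolean) on the item. The sketch below assumes the item is called P1_EMAILS (a hypothetical name - substitute your own) and reuses the same split technique plus the f_email_valid function:

declare
   l_invalid number;
begin
   -- split the comma-separated list and count addresses the function rejects
   select count(*)
     into l_invalid
     from (select regexp_substr(:P1_EMAILS, '[^,]+', 1, level) email
             from dual
           connect by level <= regexp_count(:P1_EMAILS, ',') + 1)
    where f_email_valid(email) = 'N';

   -- valid only if no address was rejected
   return l_invalid = 0;
end;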
Google searches have been difficult for this. I have two categorical variables, age and month, with 7 levels each. For a few levels, say age = 7 and month = 7, there is no value, and when I use PROC SQL the intersections that do not have entries do not show, e.g.:
age month value
1 1 4
2 1 12
3 1 5
....
7 1 6
...
1 7 8
....
5 7 44
6 7 5
(the age = 7, month = 7 line doesn't show)
What I want:
age month value
1 1 4
2 1 12
3 1 5
....
7 1 6
...
1 7 8
....
5 7 44
6 7 5
7 7 0
This happens a few times in the data, where the last groups don't have a value so they don't show, but I'd like them to show for later purposes.
You have a few options available; they all work on the premise of creating the master data and then merging it in.
One is to use PRELOADFMT with FORMATs, or the CLASSDATA= option.
And the last - but possibly the easiest: if you have all months and all ages somewhere in the data set, use the SPARSE option within PROC FREQ. It creates all possible combinations.
proc freq data=have;
   table age*month / out=want SPARSE;
   weight value;
run;
First some sample data:
data test;
do age=1 to 7;
do month=1 to 12;
value = ceil(10*ranuni(1));
if ranuni(1) < .9 then
output;
end;
end;
run;
This leaves a few holes, notably (1,1).
I would use a series of SQL statements to get the levels, cross join those, and then left join the values on, doing a coalesce to put 0 when missing.
proc sql;
create table ages as
select distinct age from test;
create table months as
select distinct month from test;
create table want as
select a.age,
a.month,
coalesce(b.value,0) as value
from (
select age, month from ages, months
) as a
left join
test as b
on a.age = b.age
and a.month = b.month;
quit;
The group-independent crossing of the classification variables requires that a distinct selection of each level variable be cross joined with the others -- this forms a hull that can be left joined to the original data. For the case of an age*month crossing having more than one item, you need to determine whether you want
rows with repeated age and month and the original value, or
rows with distinct age and month with either
an aggregate function to summarize the values, or
an indication of too many values.
data have;
input age month value;
datalines;
1 1 4
2 1 12
3 1 5
7 1 6
1 7 8
5 7 44
6 7 5
8 8 1
8 8 11
run;
proc sql;
create table want1(label="Original class combos including duplicates and zeros for absent cross joins")
as
select
allAges.age
, allMonths.month
, coalesce(have.value,0) as value
from
(select distinct age from have) as allAges
cross join
(select distinct month from have) as allMonths
left join
have
on
have.age = allAges.age and have.month = allMonths.month
order by
allMonths.month, allAges.age
;
quit;
And a slight variation that marks duplicated class crossings:
proc format;
value S_V_V .t = 'Too many source values'; /* single valued value */
quit;
proc sql;
create table want2(label="Distinct class combos allowing only one contributor to value, or defaulting to zero when none")
as
select distinct
allAges.age
, allMonths.month
, case
when count(*) = 1 then coalesce(have.value,0)
else .t
end as value format=S_V_V.
, count(*) as dup_check
from
(select distinct age from have) as allAges
cross join
(select distinct month from have) as allMonths
left join
have
on
have.age = allAges.age and have.month = allMonths.month
group by
allMonths.month, allAges.age
order by
allMonths.month, allAges.age
;
quit;
This type of processing can also be done in Proc TABULATE using the CLASSDATA= option.
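For reference, a sketch of that CLASSDATA= idea; combos is an assumed helper data set that enumerates every age*month pair you want reported, whether or not it occurs in have:

/* helper data set with every combination that should appear */
data combos;
   do age = 1 to 7;
      do month = 1 to 7;
         output;
      end;
   end;
run;

/* CLASSDATA= supplies the class combinations that are absent from HAVE */
proc tabulate data=have classdata=combos;
   class age month;
   var value;
   table age*month, value*sum / misstext='0';
run;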
The data is basically the month-on-month price of a configuration. I want to get a trend of the AMOUNT: how the price behaves over a period of 12 months, for each configuration, and the overall trend.
PROC SQL doesn't support the DIF function, and the regular DO loop in a DATA step isn't really helpful here either.
So can anyone help me with this?
This code basically groups the data and gets a mean price for each configuration in each month.
proc sql;
create table c.price1 as
select
configuration,
month,
mean(retail_price) as amount format = dollar7.2
from c.price
where
configuration is not missing
and month is not missing
and retail_price is not missing
group by configuration, month;
quit;
Data:
Configuration Month Amount
1 1 $370.00
1 2 $365.00
1 3 $318.00
1 4 $355.00
1 5 $350.00
1 6 $317.40
1 7 $340.00
1 8 $335.00
1 9 $297.00
1 10 $325.00
1 11 $320.00
1 12 $286.65
2 1 $320.00
2 2 $315.00
2 3 $287.86
2 4 $305.00
2 5 $300.00
2 6 $263.76
.......and so on
Use the DIF function in conjunction with BY group processing.
data want;
   set have;
   by configuration;
   /* month-on-month change: dif(amount) = amount - lag(amount) */
   new_var = dif(amount);
   /* reset at the start of each configuration's series */
   if first.configuration then new_var = .;
run;
Make sure the data is sorted by configuration and month first; the GROUP BY in your PROC SQL step should already return it in that order.
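With the sample data above, the first few rows for configuration 1 would come out like this (New_var is simply the month-on-month change in Amount):
Configuration Month Amount New_var
1 1 $370.00 .
1 2 $365.00 -5.00
1 3 $318.00 -47.00
1 4 $355.00 37.00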
I am merging two SAS datasets by ID number and would like to remove all instances of duplicate IDs, i.e. if an ID number occurs twice in the merged dataset, then both observations with that ID should be deleted.
Web searches have suggested some SQL methods and NODUPKEY, but these don't work here because they are meant for typical duplicate cleansing, where one instance is kept and the extra copies are deleted.
Assuming you are using a DATA step with a BY id; statement, then adding:
if NOT (first.id and last.id) then delete;
should do it. If that doesn't work, please show your code.
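In context, that line sits inside the merge step itself; a minimal sketch, assuming both input datasets are already sorted by id:

data merged;
   merge a b;
   by id;
   /* an ID that occurs exactly once has both first.id and last.id true */
   if not (first.id and last.id) then delete;
run;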
I'm actually a fan of writing dropped records to a separate dataset so you can track how many records were dropped at different points. So I would code this something like:
data want
drop_dups
;
merge a b ;
by id ;
if first.id and last.id then output want ;
else output drop_dups ;
run ;
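Both input datasets have to be sorted by id before the merge; if they aren't already, run something like this first:

proc sort data=a;
   by id;
run;
proc sort data=b;
   by id;
run;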
Here is an SQL way to do it. You can use a LEFT/RIGHT/INNER join, whichever best suits your needs. Note that this works on a single dataset just as well.
proc sql;
create table singles as
select * from dataset1 a inner join dataset2 b
on a.ID = b.ID
group by a.ID
having count(*) = 1;
quit;
For example, from
ID x
5 2
5 4
1 6
2 7
3 6
You will select
ID x
1 6
2 7
3 6
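For the single-dataset case in that example, the same idea looks like this (a sketch; have and want are assumed dataset names). Note that PROC SQL remerges the summary count from the HAVING clause back with the detail rows, which is why select * works with only ID in the GROUP BY:

proc sql;
   create table want as
   select *
   from have
   group by ID
   having count(*) = 1;
quit;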