I have a case with four conditions, but I still do not understand how to apply the IF conditions.
First,
if d_user.nik is not null and d_user.full_name is not null and d_user.email is not null then
update user
else
insert user
and second,
if d_user.masa_kerja is not null and d_user.masa_kerja > to_date(sysdate) then
    the user can be updated or inserted
else
    DBMS_OUTPUT.PUT_LINE('Expired') and the user cannot be inserted or updated.
How do I arrange the IF conditions appropriately so that they match this process?
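One possible arrangement, as a rough sketch only of the logic described above (d_user and the users table/columns are placeholder names, since the actual schema is not shown; TRUNC(SYSDATE) is used for the comparison, as TO_DATE(SYSDATE) is redundant on a DATE value):
-- Sketch only: d_user and the users table are assumed names
IF d_user.masa_kerja IS NOT NULL AND d_user.masa_kerja > TRUNC(SYSDATE) THEN
   -- not expired: the user may be updated or inserted
   IF d_user.nik IS NOT NULL
      AND d_user.full_name IS NOT NULL
      AND d_user.email IS NOT NULL
   THEN
      UPDATE users
         SET full_name = d_user.full_name,
             email     = d_user.email
       WHERE nik = d_user.nik;
   ELSE
      INSERT INTO users (nik, full_name, email, masa_kerja)
      VALUES (d_user.nik, d_user.full_name, d_user.email, d_user.masa_kerja);
   END IF;
ELSE
   -- expired: neither insert nor update
   DBMS_OUTPUT.PUT_LINE('Expired');
END IF;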
I want the following trigger to run correctly, but it raises an error: bad bind variable 'P23_ID'.
The trigger query is:
Create or replace trigger "newTRG"
Before
Insert on "my_table"
For each row
Begin
If :new."ID" is null then
Insert into my_table (ID) values (:P23_ID);
end if;
End;
Use the v() syntax:
create or replace trigger "newTRG" before
insert on "my_table"
for each row
begin
if :new."ID" is null then
insert into my_table ( id ) values (v('P23_ID'));
end if;
end;
On a side note, if this is a primary key value, it is a lot easier to use an identity column (the new way) or a sequence (the old way) to populate the column. Doing this from a page item is error prone.
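For instance, a sketch only, reusing the "my_table" and "newTRG" names from the question (the rest of the column list is omitted):
-- New way (12c+): define the column as an identity when creating the table
create table "my_table" (
  "ID" number generated by default on null as identity primary key
  -- ... other columns ...
);

-- Old way: a sequence populated from the trigger instead of a page item
create sequence my_table_seq;

create or replace trigger "newTRG" before
insert on "my_table"
for each row
begin
  if :new."ID" is null then
    :new."ID" := my_table_seq.nextval;
  end if;
end;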
I have a form in my Oracle APEX based application. I want a validation on the submit button so that, if the combination of two specific entries is already present in the SQL table/view, an alert is shown, like "The entry for this combination of values of A and B already exists, please enter correct values."
If those two specific entries are represented by two form items (e.g. :P1_ONE and :P1_TWO), then the validation might be a PL/SQL function body returning error text, such as
declare
  l_cnt  number;
  retval varchar2(200);
begin
  select count(*)
    into l_cnt
    from your_table t
   where t.column_one = :P1_ONE
     and t.column_two = :P1_TWO;

  if l_cnt > 0 then
    retval := 'The entry for this combination already exists';
  end if;

  -- NULL means "no error"; otherwise the message is displayed
  return retval;
end;
The query itself might need to be modified, depending on what exactly you meant; that's the way I understood the problem.
You should also have a unique constraint on the table, and let that validate incoming data.
Any violation of this constraint will raise an exception, which can be transformed within the APEX error handling procedure.
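For example, a sketch using the placeholder table and column names from the validation above:
alter table your_table
  add constraint your_table_c1_c2_uq unique (column_one, column_two);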
CREATE OR REPLACE TRIGGER TR_SITECONTACT_UPDATE
AFTER UPDATE OR INSERT ON s_ct
FOR EACH ROW
DECLARE
  v_SID s_ct.sid%type;
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  IF :NEW.CTID != :OLD.CTID THEN
    UPDATE CT
       SET lastupdatedon = SYSDATE,
           LASTUPDATESITE = :NEW.SID
     WHERE CTID = :NEW.CTID;
    COMMIT;
  END IF;
END;
Here, before updating the row, I need to check whether lastupdatedCOF is null or not in the CT table. If it is null, I need to use the following update statement:
UPDATE CT
   SET lastupdatedon = SYSDATE,
       LASTUPDATESITE = :NEW.SID
 WHERE CTID = :NEW.CTID;
COMMIT;
If lastupdatedCOF IS NOT NULL, then:
UPDATE CT
   SET lastupdatedon = SYSDATE,
       LASTUPDATESITE = :NEW.SID,
       lastupdatedCOF = NULL
 WHERE CTID = :NEW.CTID;
COMMIT;
If I read your question correctly, either lastupdatedCOF is null (in which case it remains null), or lastupdatedCOF is not null and you want to set it to null.
So, why not just always set it to null?
I.e.:
CREATE OR REPLACE TRIGGER TR_SITECONTACT_UPDATE
AFTER UPDATE OR INSERT ON s_ct
FOR EACH ROW
DECLARE
  v_SID s_ct.sid%type;
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  IF :NEW.CTID != :OLD.CTID THEN
    UPDATE CT
       SET lastupdatedon = SYSDATE,
           LASTUPDATESITE = :NEW.SID,
           lastupdatedCOF = NULL
     WHERE CTID = :NEW.CTID;
    COMMIT;
  END IF;
END;
One other point - do you really, really need an autonomous transaction? What if the transaction inserting into/updating the s_ct table is rolled back - as things stand, you'd be left with a row in the CT table that's been changed, despite the underlying change not having taken place.
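For comparison, a sketch of the same trigger without the autonomous transaction (and therefore without the COMMIT), so that the update of CT commits or rolls back together with the change to s_ct:
CREATE OR REPLACE TRIGGER TR_SITECONTACT_UPDATE
AFTER UPDATE OR INSERT ON s_ct
FOR EACH ROW
BEGIN
  IF :NEW.CTID != :OLD.CTID THEN
    UPDATE CT
       SET lastupdatedon = SYSDATE,
           LASTUPDATESITE = :NEW.SID,
           lastupdatedCOF = NULL
     WHERE CTID = :NEW.CTID;
    -- no COMMIT: the change is part of the triggering transaction
  END IF;
END;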
I need to implement a filtering feature on events using WSO2 CEP 4.1.0.
My filters are stored in a PostgreSQL database.
To do this, I created an event table configuration, and I join my stream against this event table.
My filters can have default values, so I need a complex joining condition:
from my_stream#window.length(1) left outer join my_event_table as filter
on (filter.field1 == '' OR stream.field1 == filter.field1)
AND (filter.field2 == '' OR stream.field2 == filter.field2)
(I do a LEFT OUTER JOIN because I must process the event differently depending on whether a filter is found or not: if I find the filter, I complete my_stream with information from it and save the event in a database table; if not, I save the event in another database table.)
The problem is that when the system extracts the join condition to interpret it, it removes the parentheses, so the boolean interpretation is wrong:
on filter.field1 == '' OR stream.field1 == filter.field1
AND filter.field2 == '' OR stream.field2 == filter.field2
Is there a way to implement this kind of feature, without plugin creation?
Regards.
EDIT: This is the current solution I found, but I am worried about its performance and complexity, so I am looking for another one:
#first, I left join on my event_table
from my_stream#window.length(1) left outer join my_event_table as filter
on (filter.field1 == '' OR stream.field1 == filter.field1)
select stream.field1, stream.field2, stream.field3, filter.field1 as filter_field1, filter.field2 as filter_field2, filter.field3 as filter_field3, filter.info1
insert into tempStreamJoinProblemCount

#if the join returns nothing, then there is no filter for my line
from tempStreamJoinProblemCount[filter_field1 IS NULL]
select field1, field2, field3
insert into filter_not_found

#if the join returns some lines, maybe one of them matches, so I continue checking:
#I evaluate my complex joining condition and store it in a flag for later: 1 means my filter matches, 0 means no match
from tempStreamJoinProblemCount[NOT filter_field1 IS NULL]
select field1, field2, field3, info1,
convert(
    (filter_field2 == '' OR field2 == filter_field2)
    AND (filter_field3 == '' OR field3 == filter_field3), 'int') as filterMatch
insert into computeFilterMatchInformation

#if filterMatch is 1, I extract the filter information (info1), else I put a default value (the minimal value); custom:ternaryInt is just the ternary function: boolean_condition ? value_if_true : value_if_false
from computeFilterMatchInformation
select field1, field2, field3, custom:ternaryInt(filterMatch == 1, info1, 0) as info1, filterMatch
insert into filterMatchGroupBy

#as we did not join on all fields, one input line has been expanded into several lines, so we group them to drop the generated lines and keep only one line per initial event;
#max(info1) returns only the filter value (because the 0 from the previous stream is the minimal value);
#sum(filterMatch) returns 0 if there is no match, and 1+ if there is a match
from filterMatchGroupBy#window.time(10 sec)
select field1, field2, field3, max(info1) as info1, sum(filterMatch) as filter_match
group by field1, field2, field3
insert into filterCheck

#we found no match
from filterCheck[filter_match == 0]
select field1, field2, field3
insert into filter_not_found

#we found a match, so we extract the filter information (info1)
from filterCheck[filter_match > 0]
select field1, field2, field3, info1
insert into filter_found
Fundamentally, a left outer join might not work with an event table, because an event table is not an active construct (like a stream), so we cannot assign a window to it. However, in order to do an outer join, each side should be associated with a window. Since we cannot do that with event tables, outer joins wouldn't work anyway.
However, to address your scenario, you can join my_stream with my_event_table without any conditions and emit the resulting events into an intermediate stream, and then check the conditions on that intermediate stream. Try something similar to this:
from my_stream join my_event_table
select
    my_stream.field1 as streamField1,
    my_event_table.field1 as tableField1,
    my_stream.field2 as streamField2,
    my_event_table.field2 as tableField2
insert into intermediateStream;

from intermediateStream[((tableField1 == '' OR streamField1 == tableField1) AND (tableField2 == '' OR streamField2 == tableField2))]
select *
insert into filterMatchedStream;

from intermediateStream[not ((tableField1 == '' OR streamField1 == tableField1) AND (tableField2 == '' OR streamField2 == tableField2))]
select *
insert into filterUnmatchedStream;
Is it possible to add a postgresql hstorefield (django >= 1.8) to a model where values in the hstore are unique?
Keys are obviously unique, but can values be unique as well? I suppose custom validators could be added to the model, but I am curious to know whether this can be done at the database level.
A single hstore value can contain multiple key => value pairs, making a solution based on a unique index impossible. Additionally, your new hstore value can also have multiple key => value pairs. The only viable alternative is then a BEFORE INSERT OR UPDATE trigger on the table:
CREATE FUNCTION trf_uniq_hstore_values() RETURNS trigger AS $$
DECLARE
    dups text;
BEGIN
    -- Collect any values of the incoming row's hstore that already exist
    -- among the hstore values stored in the table
    SELECT string_agg(x, ',') INTO dups
    FROM (SELECT svals(hstorefield) AS x FROM my_table) sub
    JOIN (SELECT svals(NEW.hstorefield) AS x) vals USING (x);

    IF dups IS NOT NULL THEN
        RAISE NOTICE 'Value(s) % violate(s) uniqueness constraint. Operation aborted.', dups;
        RETURN NULL;  -- skip the INSERT/UPDATE
    ELSE
        RETURN NEW;
    END IF;
END; $$ LANGUAGE plpgsql;

CREATE TRIGGER tr_uniq_hstore_values
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE trf_uniq_hstore_values();
Note that this will not trap existing duplicates in the table.
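If you want to find them up front, a query along these lines (a sketch, reusing the my_table/hstorefield names from above) lists the values that already occur more than once:
SELECT x, count(*) AS occurrences
FROM (SELECT svals(hstorefield) AS x FROM my_table) sub
GROUP BY x
HAVING count(*) > 1;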