I need to create a SQL Server trigger to block updates and deletes to a table Service.
This should apply only to rows of Service in which the column State is "completed".
Updates and deletes should be allowed for rows of Service in which the column State is "active".
This is what I tried; I am having problems with the else branch (that is, allowing updates to rows of Service in which the column State is "active").
CREATE TRIGGER [Triggername]
ON dbo.Service
FOR INSERT, UPDATE, DELETE
AS
DECLARE @para varchar(10),
@results varchar(50)
SELECT @para = Status
FROM Service
IF (@para = 'completed')
BEGIN
SET @results = 'An invoiced service cannot be updated or deleted!';
SELECT @results;
END
END
BEGIN
RAISERROR ('An invoiced service cannot be updated or deleted', 16, 1)
ROLLBACK TRANSACTION
RETURN
END
So if I understand you correctly: any UPDATE or DELETE should be allowed if the State column has a value of Active, but stopped in any other case?
Then I'd do this:
CREATE TRIGGER [Triggername]
ON dbo.Service
FOR UPDATE, DELETE
AS
BEGIN
-- if any row exists in the "Deleted" pseudo table of rows that WERE
-- in fact updated or deleted, that has a state that is *not* "Active"
-- then abort the operation
IF EXISTS (SELECT * FROM Deleted WHERE State <> 'Active')
ROLLBACK TRANSACTION
-- otherwise let the operation finish
END
As a note: you cannot easily return messages from a trigger (with SELECT @Results) - the trigger just silently fails by rolling back the currently active transaction. If you need to signal the caller, raise an error (RAISERROR), as in your original attempt.
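For instance, a quick smoke test of the trigger might look like this (a sketch; the id column and the sample rows are hypothetical):

-- hypothetical sample data; only the State column matters to the trigger
INSERT INTO dbo.Service (id, State) VALUES (1, 'Active'), (2, 'completed');

UPDATE dbo.Service SET State = 'Active' WHERE id = 1;  -- allowed: the old row was 'Active'
DELETE FROM dbo.Service WHERE id = 2;                  -- rolled back: the old row was not 'Active'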
I have an API which reads from two main tables, Table A and Table B.
Table A has a column which acts as a foreign key to Table B entries.
Now, inside the API flow, I have a method which runs the logic below.
Raw SQL -> joining Table A with some other tables and fetching the entries which have an active status in Table A.
From the result of the previous query we take the values from the Table A column and fetch the related rows from Table B using Django models.
It is like:
from django.db import connection

query = "SELECT * FROM A WHERE status = 1"  # very simplified query, just for example
cursor = connection.cursor()
cursor.execute(query)
results = cursor.fetchall()
list_of_values = get_values_for_table_B(results)  # helper that extracts the FK values
b_records = list(B.objects.filter(values__in=list_of_values))
Now there is a background process which will insert or update data in Table A and Table B. That process does everything using models, utilizing
with transaction.atomic():
do_update_entries()
However, the update is not just updating the old row. It deletes the old row and the related rows in Table B, and then adds new rows to both tables.
Now the problem is: if I run the API and the background job separately, everything is fine, but when both are run simultaneously, for many API calls the second query on Table B fails to get any data, because the statements execute in the following manner:
Table A raw SQL query executes and reads the old data
Background job runs in a single txn, deletes the old data and inserts the new data, having different foreign key values that relate it to Table B
Table B models read query executes, referring to values already deleted by the previous txn, hence no records
So, to read everything in a single txn I have tried the options below
with transaction.atomic():
# Raw SQL for Table A
# Models query for Table B
This didn't work and I am still getting the same issue.
I tried another way around:
transaction.set_autocommit(False)
# Raw SQL for Table A
# Models query for Table B
transaction.commit()
transaction.set_autocommit(True)
But this didn't work either. How can I run both read queries in a single transaction so that the background job's updates do not affect this read process?
Please help me with the logic.
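For reference, here is the combined attempt end to end, as a sketch (it assumes the imports, helper, and model from the snippets above):

from django.db import connection, transaction

with transaction.atomic():
    # raw SQL read from Table A
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM A WHERE status = 1")
    results = cursor.fetchall()
    # ORM read from Table B inside the same transaction block
    list_of_values = get_values_for_table_B(results)
    b_records = list(B.objects.filter(values__in=list_of_values))

Even inside one atomic() block, whether the two reads see one consistent snapshot depends on the database's isolation level; under the common READ COMMITTED default, each statement sees the latest committed data, which is why wrapping the reads alone did not change the outcome.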
I have two tables, customers and transactions, and there is a column action with values I, U, D. If the action is I or U, upsert the data; if it is D, delete the data in the transactions table. If all records with the same transaction id are deleted, then delete the customers record; otherwise delete only the transactions record.
We can do the insert/upsert/delete using an Update Strategy on the transactions table, but how can we delete the customer record once all of its transaction ids are deleted?
You need to create logic (like you said) to delete from the customer table. It's safer to either create a new pipeline in the same mapping or a brand new mapping.
So, you will read customer_key from customer, do a lookup into the transaction table (condition on customer_key), and if no row is found, delete that customer:
Read all customer_key values from the customer table.
Look up the transaction table on customer_key; return customer_key.
Use an Update Strategy; link customer_key from SQ #1 and customer_key from the lookup, and create a condition like this:
IIF(ISNULL(lkp_customer_key), DD_DELETE)
Link customer_key from SQ #1 to the customer target.
You can do this using a left join in the source qualifier as well, as sketched below.
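For example, a SQL override along these lines returns exactly the customers with no remaining transactions (table and column names assumed from the steps above):

-- customers that no longer have any row in transactions
SELECT c.customer_key
FROM customers c
LEFT JOIN transactions t ON t.customer_key = c.customer_key
WHERE t.customer_key IS NULL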
Most database servers can also cascade a delete to the respective child tables for you (ON DELETE CASCADE on the foreign key).
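A minimal sketch of that, assuming the same customers/transactions column names as above:

-- deleting a customers row now automatically deletes its transactions rows
ALTER TABLE transactions
  ADD CONSTRAINT fk_transactions_customer
  FOREIGN KEY (customer_key) REFERENCES customers (customer_key)
  ON DELETE CASCADE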
I use a Lambda to detect if there is any isActive record in my table, and put_item to update the id if there is.
For example, I have a placeholder record with ID 999999999; if my table query detects there's an active record (isActive = True), it will put_item with the real session_id and other data.
Table record: (screenshot omitted)
My Lambda has the following section (from my CloudWatch logs, the if...else statement is working as intended to verify the logic). Please ignore indentation hiccups from copy and paste; the code runs with no issue.
# keep isActive = True when there's already an active status started from another
# source; just update the session_id from 999999999 to the real session_id
else:
    # from an earlier part of the code: retrieve the current count_1 value from the table
    count_1 = query["Items"][0]["count_1"]
    print(count_1)  # prints the right '13' value from the current table id '999999999'
    table.put_item(
        Item={
            'session_id': session_id,
            'isActive': True,
            'count_1': count_1,
            'count_2': count_2
        },
        ConditionExpression='session_id = :session_id AND isActive = :isActive',
        ExpressionAttributeValues={
            ':session_id': 999999999,
            ':isActive': True
        }
    )
However, my table is not getting a new item, nor is the primary key session_id updated. The table still stays exactly as the record shown above.
I understand from the documentation that
You cannot use UpdateItem to update any primary key attributes.
Instead, you will need to delete the item, and then use PutItem to
create a new item with new attributes.
but even if put_item is not able to update the primary key, shouldn't I at least expect a new item to be created by my code, given that no error code is thrown?
Does anybody know what is happening? Thanks.
I resolved it with a different specification for ConditionExpression. I tried multiple troubleshooting approaches and pinpointed the issue to the ConditionExpression.
What I did instead -
added the import: from boto3.dynamodb.conditions import Key, Attr
used ConditionExpression=Attr("session_id").ne(999999999)
and deleted the old id item:
table.delete_item(
Key={
'session_id': 999999999
}
)
Other conditions are available here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/dynamodb.html#ref-dynamodb-conditions
If anyone has a better and easier way, I would like to learn it.
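Putting it together, a condensed sketch of the fixed flow (variable names taken from the snippets above):

from boto3.dynamodb.conditions import Attr

# write the item under the real session_id, guarded by the condition that worked above
table.put_item(
    Item={
        'session_id': session_id,
        'isActive': True,
        'count_1': count_1,
        'count_2': count_2
    },
    ConditionExpression=Attr("session_id").ne(999999999)
)

# put_item cannot change a primary key, so the placeholder item is removed explicitly
table.delete_item(
    Key={
        'session_id': 999999999
    }
)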
I have 2 sources, Oracle and SQL Server. I need to extract CustomerID from both and match them. I need 2 outputs:
Number of CustomerIDs from Oracle
Number of CustomerIDs matching between Oracle and SQL Server
Then, generate a report and send it through mail to the user.
Source - Oracle
Source - MS SQL
Joiner (detail outer join with Oracle)
Router
Group 1: CustomerID (Oracle) is not null and CustomerID (SQL Server) is null
Group 2: CustomerID from both is not null
AGG transformation after both groups to get the counts
Union to merge them
Load into the target file
Now I will have to use a shell script to prepare the mail and send it to the user.
Is there a way to do this more simply, like assigning the count to a workflow variable and then using it in an Email task?
Go to the workflow:
open the session task and navigate to the Components tab
edit On Success Email and set the type to non-reusable
click the edit button in Value
click the edit button next to Email Text
enter "%l" - this will get the count of loaded records and send it to you in the email body.
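For example, an email text along these lines (%s, %e and %r are other standard session variables alongside %l):

Session %s completed with status %e.
Records loaded: %l
Records rejected: %r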
I have an SP that works very well when called from SSMS, but when I call it from my application,
which is written in native C++ and uses ODBC to connect to the database, the operation returns no error but actually does nothing in the database.
My SP reads some values from some temporary tables and either inserts them into the database or updates them.
I had a transaction in the SP that guarded all of its code. After a lot of debugging I found that the procedure returns on the first insert or update and thus does nothing. So I removed that transaction and the procedure partly worked; I mean it added some of the items but left some of them out without adding them to the database.
here is a skeleton of my SP:
--BEGIN TRANSACTION
DECLARE @id bigint, @name nvarchar(50)
DELETE FROM MyTable WHERE NOT( id IN (SELECT id FROM #MyTable) )
DECLARE cur1 CURSOR FOR SELECT id, name FROM #MyTable
OPEN cur1
WHILE 1 != 0
BEGIN
FETCH cur1 INTO @id, @name
IF @@FETCH_STATUS != 0 BREAK;
UPDATE MyTable SET [Name]=@name WHERE [id]=@id
IF @@ROWCOUNT = 0
INSERT INTO MyTable ( ID, Name ) VALUES ( @id, @name )
END
CLOSE cur1
DEALLOCATE cur1
--COMMIT TRANSACTION
Is it possible you have an implicit transaction started in ODBC that needs an explicit COMMIT to end (after the call to the SP)? SSMS generally uses autocommit mode.
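If so, the explicit commit on the ODBC side would look roughly like this (a sketch; hdbc is assumed to be your connection handle):

// after executing the stored procedure, commit the open transaction
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
// or put the connection back into autocommit mode
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_ON, 0);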
I solved my problem by adding SET NOCOUNT ON at the start of my SP. I think that when SQL Server returns multiple results (the row-count messages) as a result of executing my SQL, ODBC closes or cancels the command upon receiving the first one.
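In skeleton form, the fix is just this (the procedure name is hypothetical):

CREATE PROCEDURE dbo.MySync
AS
SET NOCOUNT ON;  -- suppress the 'N rows affected' count for each UPDATE/INSERT
                 -- in the cursor loop, so ODBC sees only real result sets
-- ... rest of the SP skeleton as above ...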