Using the database-first approach, I want my application to throw a concurrency exception whenever I try to update an (out-of-date) entity whose corresponding row in the database has already been updated by another application/user/session.
I am using Entity Framework 5 on .NET 4.5. The corresponding table has a Timestamp column to maintain the row version.
I have done this in the past by adding a timestamp column to the table on which you wish to perform a concurrency check (in my example I added a column called ConcurrencyCheck).
There are two concurrency modes to choose from here, depending on your needs:
1. Concurrency Mode: Fixed
Then re-add/refresh your table in your model. For fixed concurrency, make sure you set the concurrency mode of the ConcurrencyCheck column to Fixed when you import the table into your model.
Then, to trap this:
try
{
    context.SaveChanges();
}
catch (OptimisticConcurrencyException ex)
{
    // handle your exception here...
}
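For reference, a minimal sketch of one way to resolve the conflict once trapped, assuming a DbContext-based EF5 model: DbContext surfaces the failure as DbUpdateConcurrencyException (System.Data.Entity.Infrastructure), which wraps OptimisticConcurrencyException, and Single() needs a using for System.Linq.

try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    // "Database wins": discard the stale client values, reload the entity's
    // current state from the store, and let the user retry.
    ex.Entries.Single().Reload();
}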
2. Concurrency Mode: None
If you wish to handle your own concurrency checking, i.e. raise a validation message informing the user and not even allow a save to occur, then you can set the concurrency mode to None.
1. Ensure you change the ConcurrencyMode property of the new column you just added to None.
2. To use this in your code, I would create a property to store the current timestamp on the screen whose save you wish to check:
private byte[] CurrentRecordTimestamp
{
    get
    {
        return (byte[])Session["currentRecordTimestamp"];
    }
    set
    {
        Session["currentRecordTimestamp"] = value;
    }
}
3. On page load (assuming you're using ASP.NET Web Forms and not MVC/Razor; you don't mention which above), or wherever you populate the screen with the data you wish to edit, pull the record under edit's ConcurrencyCheck value into the property you created:
this.CurrentRecordTimestamp = currentAccount.ConcurrencyCheck;
Then, if the user leaves the record open, someone else changes it in the meantime, and the user also attempts to save, you can compare the timestamp value you stored earlier with the concurrency value as it is now:
if (Convert.ToBase64String(accountDetails.ConcurrencyCheck) != Convert.ToBase64String(this.CurrentRecordTimestamp))
{
    // The row has changed since it was loaded: raise a validation message
    // and do not allow the save to proceed.
}
After reviewing many posts here and on the web explaining concurrency and timestamps in Entity Framework 5, I came to the conclusion that, out of the box, it is basically impossible to get a concurrency exception when the model is generated from an existing database.
One workaround is modifying the generated entities in the .edmx file and setting the "Concurrency Mode" of the entity's timestamp property to "Fixed". Unfortunately, if the model is repeatedly re-generated from the database this modification may be lost.
However, there is one tricky workaround:
1. Initialize a transaction scope with an isolation level of Repeatable Read or higher.
2. Get the current timestamp of the row.
3. Compare the new timestamp with the old one.
4. Not equal --> throw a concurrency exception.
5. Equal --> save and commit the transaction.
The isolation level is important to prevent concurrent modifications from interfering between the check and the commit.
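A minimal sketch of this workaround; MyContext, MyEntities, RowVersion, and originalRowVersion are placeholder names, and TransactionScope needs a reference to System.Transactions:

var options = new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var context = new MyContext())
{
    // Re-read the row's current timestamp inside the transaction.
    var current = context.MyEntities.Single(e => e.Id == id);

    if (!current.RowVersion.SequenceEqual(originalRowVersion))
        throw new OptimisticConcurrencyException("Row was changed by another session.");

    // Timestamps are equal: apply the changes and commit.
    current.Name = newName;
    context.SaveChanges();
    scope.Complete();
}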
PS: Erikset's solution seems fine for surviving regeneration of the model file.
EF detects a concurrency conflict when an UPDATE/DELETE statement affects zero rows. So if you use stored procedures to delete and update, you can manually add the timestamp value to the WHERE clause:
UPDATE | DELETE ... WHERE PKField = pkValue AND RowVersionField = rowVersionValue
Then, if the row has been deleted or modified by anyone else, the SQL statement affects 0 rows and EF interprets that as a concurrency conflict.
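For illustration, a hedged sketch of such an update procedure in T-SQL; the procedure, table, and column names are placeholders:

CREATE PROCEDURE dbo.Account_Update
    @Id int,
    @Name nvarchar(100),
    @RowVersion binary(8)
AS
BEGIN
    UPDATE dbo.Account
    SET Name = @Name
    WHERE Id = @Id AND RowVersion = @RowVersion;
    -- If another session modified or deleted the row, this affects 0 rows
    -- and EF reports it as a concurrency conflict.
END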
I'm starting from a "When a row is added, modified or deleted" connector, and I'm passing into a Switch connector that checks whether the row was added, modified or deleted.
I'm then using the mail node to notify myself when a row is added, modified or deleted; in the case where a row is modified, I have to include in the mail which fields of that row were changed.
I can't find out whether this check is possible (comparing the row with its pre-modification version) or how to do it.
This is the embryonic flow.
As requested, I'll try to be more detailed.
Please note that this is a POWER AUTOMATE FLOW, so there is almost no code.
The CRUD connector takes 3 arguments:
- Change type (When an item is Added, Modified or Deleted)
- The table name (it's the Dataverse table name)
- The scope (Business Unit)
So I need to know whether (for example, in the output of this connector) there is a variable or another connector that contains which column changed and caused the trigger to fire.
It's a question about the output of, or possible connectors related to, the Dataverse CRUD node, so there is NO CODE involved and no further "after-issue" flow specification is needed to understand my request.
One solution is to create a new field that keeps the previous value of the original field, and use trigger conditions to make your flow run only when those two fields don't match, meaning that the original field was updated and that its value has actually changed.
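As a sketch, the trigger condition on the Dataverse trigger could look like this, assuming the original column is new_status and the shadow column is new_status_previous (both column names hypothetical):

@not(equals(triggerOutputs()?['body/new_status'], triggerOutputs()?['body/new_status_previous']))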
I'm trying to create a transformation that can change a field value in my DB (PostgreSQL is what I use).
Case:
In my PostgreSQL DB I have a table called Monitoring, which has several fields such as id, date, starttime, endtime, duration, transformation name, status, and desc. All those values come from Transformation Logging.
So when I run the transformation, it inserts into the Monitoring table and sets the status field to Running; when it is done, it updates the status to Finish. What I'm trying to do is define the values of the table fields myself, rather than take them from Transformation Logging, so I can customize them the way I want.
The goal is to update the transformation status value from 'running' to 'finish/error/abort etc.' in my DB using Pentaho, and to display that status in a web app.
I have been thinking of using the Modified Java Script step to do it, but is there perhaps another, better way? (I just need an opinion on this.)
Apart from my remark, did you try the Value Mapper?
Modified Java Script is not a good idea to use; ideally, it should be avoided due to its performance issues. You can use the "Add constants" step or "User Defined Java Class" as an alternative.
You cannot change the values of the built-in logging tables, for the simple reason that they are reserved for PDI's usage. This causes a known issue in case of a hard error: for example, the status is not set to Finish when the database server crashes, or when a NullPointerException is not caught by the PDI code.
You have some workarounds:
The simplest, the one used in the ETL-Pilot, is to test (Status = Finish OR LogDate < 15 minutes ago) in the web app.
You can update the table when the transformation is not running. For example, set up an hourly (or more frequent) crontab that changes to Finish the status of any transformation whose LogDate is older than 15 minutes. This crontab may run a simple SQL statement (see the sketch after this list) or a transformation that also checks the table sizes and/or sends an email in case of a potential error.
You can copy the table (if that is a non-locking operation in your DB system), modify the Status column, and use this copy for your web app.
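A minimal sketch of the cleanup statement from the second workaround, for PostgreSQL, assuming the Monitoring table and column names from the question (adjust them to your actual logging schema):

UPDATE monitoring
SET status = 'Finish'
WHERE status = 'Running'
  AND logdate < now() - interval '15 minutes';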
Scenario: We have a DynamoDB table supporting optimistic locking with a version number. Two concurrent threads are trying to save two different entries with the same primary key value to that table.
Question: Will ConditionalCheckFailedException be thrown for the latter save action?
Yes, the second thread, which tries to insert the same data, will get a ConditionalCheckFailedException.
com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException
As soon as the item is saved in the database, subsequent updates must have a version matching the value in the DynamoDB table (i.e. the server-side value).
save — For a new item, the DynamoDBMapper assigns an initial version number 1. If you retrieve an item, update one or more of its properties and attempt to save the changes, the save operation succeeds only if the version number on the client-side and the server-side match. The DynamoDBMapper increments the version number automatically.
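The quote above refers to the Java DynamoDBMapper; for illustration, here is a hedged sketch of the same optimistic-locking behaviour with the AWS SDK for .NET's DynamoDBContext (the table and property names are placeholders):

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.Model;

[DynamoDBTable("Accounts")]
public class Account
{
    [DynamoDBHashKey]
    public string Id { get; set; }

    public string Name { get; set; }

    [DynamoDBVersion]  // enables optimistic locking on this item
    public int? Version { get; set; }
}

public static class AccountStore
{
    public static void Save(IAmazonDynamoDB client, Account account)
    {
        var context = new DynamoDBContext(client);
        try
        {
            // The mapper adds a conditional write on Version; if a concurrent
            // writer bumped it first, the condition fails.
            context.Save(account);
        }
        catch (ConditionalCheckFailedException)
        {
            // Another thread saved first: reload and retry, or report a conflict.
        }
    }
}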
We had a similar use case in the past, but in our case multiple threads were reading from DynamoDB first and then trying to update the values.
So the version can change between a thread's read and its update, and if you don't read the latest value from DynamoDB, the intermediate update will be lost (known as the lost-update problem; see the AWS docs for more info).
I am not sure whether you have this use case, but if you simply have two threads trying to update the value, and one of them holds a stale version by the time its request reaches DynamoDB, then you will get a ConditionalCheckFailedException.
More info about this error can be found here http://grepcode.com/file/repo1.maven.org/maven2/com.michelboudreau/alternator/0.10.0/com/amazonaws/services/dynamodb/model/ConditionalCheckFailedException.java
I am trying to increment a number field whenever a new row is added to my table. First I created a variable lastItem, specified as a Record with its Subtype set to my table. Then I put the following code on the OnInsert() trigger:
lastItem.FINDLAST;
ItemNo := lastItem.ItemNo + 10;
The above code does not seem to work on the OnInsert() trigger, but it works for one row when I enter it on the ItemNo - OnValidate() trigger.
Any ideas how to get an increasing number on every new row in my table?
Are you sure that's Dynamics CRM? The code is Dynamics NAV C/AL code, and you seem to be talking about the Item table. In that case, let NAV give you the next number from the No. Series properly.
You can use the same approach in any other table: related pattern.
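For reference, a sketch of the standard No. Series pattern from the Item table's OnInsert() trigger; InvtSetup is a Record variable for the Inventory Setup table and NoSeriesMgt a Codeunit variable for NoSeriesManagement:

IF "No." = '' THEN BEGIN
  InvtSetup.GET;
  InvtSetup.TESTFIELD("Item Nos.");
  NoSeriesMgt.InitSeries(InvtSetup."Item Nos.",xRec."No. Series",0D,"No.","No. Series");
END;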
You should stay away from doing direct SQL updates and adding triggers to the DB when using Dynamics CRM, as that's not supported.
The appropriate way would be to use a plug-in which reads the last value and then does the increment. You'd register this to run when a new record is created in the system.
You can find some example source code on this CodePlex project: CRM 2011 Autonumbering Solution
You should use the AutoIncrement property of the field. That way the field is incremented by one on every new row.
Using Django with a PostgreSQL (8.x) backend, I have a model where I need to skip a block of ids, e.g. after giving out 49999 I want the next id to be 70000, not 50000 (because that block is reserved for another source where instances are added with an explicit id - I know that's not a great design, but it's what I have to work with).
What is the correct/safest place for doing this?
I know I can set the sequence with
SELECT SETVAL(
(SELECT pg_get_serial_sequence('myapp_mymodel', 'id')),
70000,
false
);
but when does Django actually pull a number from the sequence?
Do I override MyModel.save(), call its super, and then grab a cursor and check with
SELECT currval(
(SELECT pg_get_serial_sequence('myapp_mymodel', 'id'))
);
?
I believe that the sequence may be advanced by Django even if saving the model fails, so I want to make sure it advances whenever it hits that number. Is there a better place than save()?
P.S.: Even if that were the way to go, can I actually figure out the currval for save()'s session like this? If I grab a connection and cursor and execute that second SQL statement, wouldn't I be in another session and therefore not get a currval?
Thank you for any pointers.
EDIT: I have a feeling that this must be done at the database level (concurrency issues), so I posted a corresponding PostgreSQL question - How can I forward a primary key sequence in PostgreSQL safely?
As I haven't found an "automated" way of doing this yet, I'm thinking of the following workaround - it would be feasible for my particular situation:
1. Set the sequence with MAXVALUE 49999 NO CYCLE.
2. When 49999 is reached, the next save() will run into a Postgres error.
3. Catch that exception and re-raise it as a form error: "you've run out of numbers, please reset to the next block, then try again".
4. Provide a view where the user can activate the next block, i.e. execute "ALTER SEQUENCE my_seq RESTART WITH 70000 MAXVALUE 89999".
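Step 1 might look like this, assuming Django's default sequence name for the model (myapp_mymodel_id_seq, matching the pg_get_serial_sequence call above):

ALTER SEQUENCE myapp_mymodel_id_seq MAXVALUE 49999 NO CYCLE;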
I'm uneasy about doing the restart automatically when catching the exception:
try:
    instance.save()
except RunOutOfIdsException:
    restart_id_sequence()
    instance.save()
as I fear that two concurrent save()s running out of ids would lead to two separate restarts and a subsequent violation of the unique constraint (basically the same concept as the original problem).
My next thought was to not use a sequence for the primary key, but rather to always specify the id explicitly from a separate counter table, which I would check/update before using its latest number - that should be safe from concurrency issues. The only problem is that although I have a single place where I add model instances, other parts of Django or third-party apps may still rely on an implicit id, which I don't want to break.
But that same mechanism happens to be easy to implement at the Postgres level - I believe this is the solution:
1. Don't use SERIAL for the primary key; use DEFAULT my_next_id() instead.
2. Follow the same logic as for a "single level gapless sequence" (http://www.varlena.com/GeneralBits/130.php): my_next_id() does an UPDATE followed by a SELECT on a counter table.
3. Instead of just increasing by 1, check whether a boundary was crossed and, if so, increase even further (see the sketch below).
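A minimal sketch of that function as a plain SQL function, following the varlena pattern; the counter table name and the 50000-69999 reserved block are taken from the description above, but treat this as an illustration rather than tested production code:

CREATE TABLE my_id_counter (
    id bigint NOT NULL
);
INSERT INTO my_id_counter VALUES (1);

CREATE FUNCTION my_next_id() RETURNS bigint AS $$
    -- The UPDATE takes a row lock, so concurrent callers serialize here:
    -- no gaps, no duplicates.
    UPDATE my_id_counter
    SET id = CASE
        WHEN id + 1 BETWEEN 50000 AND 69999 THEN 70000  -- skip the reserved block
        ELSE id + 1
    END;
    SELECT id FROM my_id_counter;
$$ LANGUAGE SQL;

-- Then use it as the column default instead of SERIAL:
-- ALTER TABLE myapp_mymodel ALTER COLUMN id SET DEFAULT my_next_id();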