Entity Framework 6 and pessimistic concurrency

I'm working on a project to gradually phase out a legacy application.
In the process, as a temporary solution, we integrate with the legacy application through the database.
The legacy application uses transactions with serializable isolation level.
Because of the database integration with the legacy application, I am for the moment best off using the same pessimistic concurrency model and serializable isolation level.
These serializable transactions should not only be wrapped around the SaveChanges call but also include some reads of data.
I do this by:
Creating a TransactionScope around my DbContext with serializable isolation level.
Creating a DbContext.
Doing some reads.
Making some changes to objects.
Calling SaveChanges on the DbContext.
Committing the transaction scope (thus saving the changes).
I am under the impression that this wraps all my reads and writes in one serializable transaction and then commits.
I consider this a form of pessimistic concurrency.
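In code, the flow looks roughly like this (a minimal sketch; the context and entity names are illustrative):
using (var scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.Serializable }))
using (var context = new MyDbContext())
{
    // the reads below participate in the serializable transaction
    var order = context.Orders.First(o => o.Status == "Pending");

    // changes to tracked objects
    order.Status = "Processed";

    context.SaveChanges();
    scope.Complete(); // commits the transaction scope, saving the changes
}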
However, this article, https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application, states that EF does not support pessimistic concurrency.
My question is:
A: Does EF support my way of using a serializable transaction around reads and writes?
B: Does wrapping the reads and writes in one transaction guarantee that my read data is not changed when committing the transaction?
C: This is a form of pessimistic concurrency, right?

One way to achieve pessimistic concurrency is to use something like this:
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Serializable,
    Timeout = new TimeSpan(0, 0, 0, 10)
};
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
{
    // ... stuff here ...
}
In VS2017 it seems you have to right-click TransactionScope and have it add a reference to: Reference Assemblies\Microsoft\Framework.NETFramework\v4.6.1\System.Transactions.dll
However, if two threads attempt to increment the same counter, you will find that one succeeds whereas the other throws a timeout after 10 seconds. The reason is that when they proceed to saving changes, both need to upgrade their lock to exclusive, but neither can, because the other transaction is already holding a shared lock on the same row. SQL Server then detects the deadlock after a while and fails one of the transactions to resolve it. Failing one transaction releases its shared lock, so the second transaction can upgrade its shared lock to an exclusive lock and proceed with execution.
The way out of this deadlocking is to provide an UPDLOCK hint to the database using something such as:
private static TestEntity GetFirstEntity(Context context)
{
    return context.TestEntities
        .SqlQuery("SELECT TOP 1 Id, Value FROM TestEntities WITH (UPDLOCK)")
        .Single();
}
This code came from Ladislav Mrnka's blog which now looks to be unavailable. The other alternative is to resort to optimistic locking.
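For completeness, the optimistic route in EF6 typically uses a rowversion column and DbUpdateConcurrencyException. A minimal sketch, with an illustrative entity and helper:
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class TestEntity
{
    public int Id { get; set; }
    public int Value { get; set; }

    [Timestamp] // maps to a SQL Server rowversion column that is checked on every UPDATE
    public byte[] RowVersion { get; set; }
}

public static class OptimisticIncrement
{
    public static void Increment(DbContext context, TestEntity entity)
    {
        try
        {
            entity.Value++;
            context.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            // another transaction changed the row since we read it:
            // reload the current database values so the caller can retry
            context.Entry(entity).Reload();
        }
    }
}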

The document states that EF does not have built-in pessimistic concurrency support. But this does not mean you can't have pessimistic locking with EF. So YOU CAN HAVE PESSIMISTIC LOCKING WITH EF!
The recipe is simple:
use transactions (not necessarily serializable, since that will lead to poor performance) - read committed is OK to use... but it depends...
lock your table - execute the T-SQL manually, or feel free to use the code attached below,
then do your changes and call dbContext.SaveChanges().
The given T-SQL command with those hints will keep the table locked for the duration of the given transaction.
There's one thing you need to take care of: your loaded entities might be stale by the time you take the lock, so all entities from the locked table should be re-fetched (reloaded).
I did a lot of pessimistic locking, but optimistic locking is better. You can't go wrong with it.
A typical example where pessimistic locking can't help is a parent-child relation, where you might lock the parent and treat it like an aggregate (so you assume you are the only one with access to the child too). If another thread tries to access the parent object, it will be blocked until the first thread releases the lock on the parent table. But with an ORM, any other coder can load the child independently - and from that point on, two threads will make changes to the child object... With pessimistic locking you might mess up the data; with optimistic locking you get an exception, and you can reload valid data and try to save again...
So the code:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Linq;

public static class DbContextSqlExtensions
{
    public static void LockTable<Entity>(this DbContext context) where Entity : class
    {
        var tableWithSchema = context.GetTableNameWithSchema<Entity>();
        // tablockx = exclusive table lock, holdlock = held until the transaction ends
        context.Database.ExecuteSqlCommand(string.Format("SELECT null as dummy FROM {0} WITH (tablockx, holdlock)", tableWithSchema));
    }
}

public static class DbContextExtensions
{
    public static string GetTableNameWithSchema<T>(this DbContext context)
        where T : class
    {
        var entitySet = GetEntitySet<T>(context);
        if (entitySet == null)
            throw new Exception(string.Format("Unable to find entity set '{0}' in edm metadata", typeof(T).Name));
        var tableName = GetStringProperty(entitySet, "Schema") + "." + GetStringProperty(entitySet, "Table");
        return tableName;
    }

    private static EntitySet GetEntitySet<T>(DbContext context)
    {
        var type = typeof(T);
        var entityName = type.Name;
        var metadata = ((IObjectContextAdapter)context).ObjectContext.MetadataWorkspace;

        // look the entity up in the store (SSpace) model, tables only
        IEnumerable<EntitySet> entitySets;
        entitySets = metadata.GetItemCollection(DataSpace.SSpace)
            .GetItems<EntityContainer>()
            .Single()
            .BaseEntitySets
            .OfType<EntitySet>()
            .Where(s => !s.MetadataProperties.Contains("Type")
                        || s.MetadataProperties["Type"].ToString() == "Tables");
        var entitySet = entitySets.FirstOrDefault(t => t.Name == entityName);
        return entitySet;
    }

    private static string GetStringProperty(MetadataItem entitySet, string propertyName)
    {
        if (entitySet == null)
            throw new ArgumentNullException("entitySet");

        MetadataProperty property;
        if (entitySet.MetadataProperties.TryGetValue(propertyName, false, out property))
        {
            var str = property != null ? property.Value as string : null;
            if (!string.IsNullOrEmpty(str))
                return str;
        }
        return string.Empty;
    }
}
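A possible usage pattern for the extension above (a sketch assuming a Context class with a TestEntities set; note the reload after taking the lock, per the recipe):
using (var context = new Context())
using (var transaction = context.Database.BeginTransaction())
{
    // exclusive table lock, held until the transaction ends (tablockx + holdlock)
    context.LockTable<TestEntity>();

    // re-fetch after locking, since entities loaded earlier might be stale
    var entity = context.TestEntities.First();
    entity.Value++;

    context.SaveChanges();
    transaction.Commit(); // commits and releases the table lock
}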

Related

DynamoDB concurrent write

I have an existing DynamoDB table which has attributes, say:
---------------------------------------------------------
hk(hash-key)| rk(range-key)| a1 | a2 | a3 |
---------------------------------------------------------
I have an existing DynamoDB client which will update existing records for a1 only. I want to create a second writer (DDB client) which will also update existing records, but for a2 and a3 only. If both DDB clients try to update the same record (one for a1, the other for a2 and a3) at the exact same time, will DynamoDB guarantee that a1, a2, and a3 are all updated with the correct values (all three new values)? Is using the save behavior UPDATE_SKIP_NULL_ATTRIBUTES sufficient for this purpose, or do I need to implement some kind of optimistic locking?
If not, is there something that DDB provides on the fly for this purpose?
If you happen to be using the DynamoDB Java SDK you are in luck, because the SDK supports just that with optimistic locking. I'm not sure if the other SDKs support anything similar - I suspect they do not.
Optimistic locking is a strategy to ensure that the client-side item
that you are updating (or deleting) is the same as the item in
DynamoDB. If you use this strategy, then your database writes are
protected from being overwritten by the writes of others — and
vice-versa.
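The same idea can be hand-rolled in other SDKs with a conditional write on a version attribute. A sketch using the AWS SDK for .NET (the table, key, and attribute names are illustrative):
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class VersionedWriter
{
    public static async Task UpdateA1Async(IAmazonDynamoDB client, string hk, string rk, string newA1, long expectedVersion)
    {
        try
        {
            await client.UpdateItemAsync(new UpdateItemRequest
            {
                TableName = "MyTable",
                Key = new Dictionary<string, AttributeValue>
                {
                    ["hk"] = new AttributeValue { S = hk },
                    ["rk"] = new AttributeValue { S = rk },
                },
                // only touches a1 and the version counter; a2 and a3 are left alone
                UpdateExpression = "SET a1 = :newA1 ADD version :one",
                ConditionExpression = "version = :expectedVersion",
                ExpressionAttributeValues = new Dictionary<string, AttributeValue>
                {
                    [":newA1"] = new AttributeValue { S = newA1 },
                    [":one"] = new AttributeValue { N = "1" },
                    [":expectedVersion"] = new AttributeValue { N = expectedVersion.ToString() },
                },
            });
        }
        catch (ConditionalCheckFailedException)
        {
            // the item changed since it was read: re-read it and retry the update
        }
    }
}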
Consider using this distributed locking library, https://www.npmjs.com/package/dynamodb-lock-client. Here is the sample code we use in our codebase:
const DynamoDBLockClient = require('dynamodb-lock-client');

const PARTITION_KEY = 'id';
const HEARTBEAT_PERIOD_MS = 3e3;
const LEASE_DURATION_MS = 1e4;
const RETRY_COUNT = 1e2;

function dynamoLock(dynamodb, lockKey, callback) {
    const failOpenClient = new DynamoDBLockClient.FailOpen({
        dynamodb,
        lockTable: process.env.LOCK_STORE_TABLE, // replace this with your own lock store table
        partitionKey: PARTITION_KEY,
        heartbeatPeriodMs: HEARTBEAT_PERIOD_MS,
        leaseDurationMs: LEASE_DURATION_MS,
        retryCount: RETRY_COUNT,
    });
    return new Promise((resolve, reject) => {
        let error;
        // Locking required as several lambda instances may attempt to update the table at the same time and
        // we do not want to get lost updates.
        failOpenClient.acquireLock(lockKey, async (lockError, lock) => {
            if (lockError) {
                return reject(lockError);
            }
            let result = null;
            try {
                result = await callback(lock);
            } catch (callbackError) {
                error = callbackError;
            }
            return lock.release((releaseError) => {
                if (releaseError || error) {
                    return reject(releaseError || error);
                }
                return resolve(result);
            });
        });
    });
}

async function doStuff(id) {
    await dynamoLock(dynamodb, `Lock-DataReset-${id}`, async () => {
        // do your ddb stuff here
    });
}
Reads from DynamoDB are eventually consistent by default.
See this: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html
DynamoDB supports eventually consistent and strongly consistent reads.
Eventually Consistent Reads
When you read data from a DynamoDB table, the response might not
reflect the results of a recently completed write operation. The
response might include some stale data. If you repeat your read
request after a short time, the response should return the latest
data.
Strongly Consistent Reads
When you request a strongly consistent read, DynamoDB returns a
response with the most up-to-date data, reflecting the updates from
all prior write operations that were successful. A strongly consistent
read might not be available if there is a network delay or outage.
Note DynamoDB uses eventually consistent reads, unless you specify
otherwise. Read operations (such as GetItem, Query, and Scan) provide
a ConsistentRead parameter. If you set this parameter to true,
DynamoDB uses strongly consistent reads during the operation.
Basically, you have to specify that you need strongly consistent data when you read.
And that should solve your problem. With consistent reads you should see updates to all three fields.
Do note that there are pricing impacts for strongly consistent reads.
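As a sketch of how that flag looks with the AWS SDK for .NET (the table and key names are illustrative):
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class ConsistentReader
{
    public static async Task<Dictionary<string, AttributeValue>> ReadItemAsync(IAmazonDynamoDB client, string hk, string rk)
    {
        var response = await client.GetItemAsync(new GetItemRequest
        {
            TableName = "MyTable",
            Key = new Dictionary<string, AttributeValue>
            {
                ["hk"] = new AttributeValue { S = hk },
                ["rk"] = new AttributeValue { S = rk },
            },
            ConsistentRead = true, // reflects all prior successful writes, at a higher read cost
        });
        return response.Item;
    }
}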

Eclipse RAP Multi-client but single server thread

I understand how RAP creates scopes, has a specific thread for each client, and so on. I also understand how the application scope is unique among several clients; however, I don't know how to access that specific scope in a single-threaded manner.
I would like to have a server side (with access to databases and such) that runs as a single execution, to ensure it has global knowledge of all transactions and that requests from clients are executed in sequence instead of in parallel.
Currently I am accessing the application context as follows from the UI:
synchronized (MyServer.class) {
    ApplicationContext appContext = RWT.getApplicationContext();
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
    myServer.doSomething(RWTUtils.getSessionID());
}
Even if I access the myServer object there and trigger requests, the execution still runs in the UI thread.
For now, the only way to ensure sequential execution is to use synchronized on my server, as follows:
public class MyServer {
    String text = "";

    public void doSomething(String string) {
        try {
            synchronized (this) {
                System.out.println("doSomething - start :" + string);
                text += "[" + string + "]";
                System.out.println("text: " + (text));
                Thread.sleep(10000);
                System.out.println("text: " + (text));
                System.out.println("doSomething - stop :" + string);
            }
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Is there a better way to not have to manage the thread synchronization myself?
Any help is welcome
EDIT:
To better explain myself, here is what I mean. Either I trust the database to handle multiple requests properly, and additionally handle some other shared knowledge between clients in a synchronized manner (example A), or I find a solution where another thread handles both the knowledge and the database (example B). Of course, the problem there is that one client may block the others, but this can be managed with background threads for long actions; most of them will be no problem. My initial question was: is there perhaps already some specific thread in the application scope that does example B, or is example A actually the way to go?
Conclusion (so far)
Basically, option A) is the way to go. Database access will require connection pooling, and shared information will require thoughtful synchronization of key objects. The main attention has to go into the database design and the synchronization of objects, to ensure that two clients cannot write incompatible data at the same time (e.g. write contradicting entries that make the result depend on the write order).
First of all, the way that you create MyServer in the first snippet is not thread safe. You are likely to create more than one instance of MyServer.
You need to synchronize the creation of MyServer, like this for example:
synchronized (MyServer.class) {
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
}
See also this post How to implement thread-safe lazy initialization? for other possible solutions.
Furthermore, your code is calling doSomething() on the client thread (i.e. the UI thread), which will cause each client to wait until the pending requests of other clients are processed. The client UI will become unresponsive.
To solve this problem, your code should call doSomething() (or any other long-running operation, for that matter) from a background thread (see also Threads in RAP).
When the background thread has finished, you should use Server Push to update the UI.

How to avoid closing the EntityManager when an OptimisticLockException occurs?

My problem: a process tries to change an entity that has already been changed and now has a newer version id. When I call flush(), an OptimisticLockException is raised in the UnitOfWork's commit() and caught in the same place by a catch-all block. And in this catch, Doctrine closes the EntityManager.
If I want to skip this entity and continue with another one from the ArrayCollection, should I not use flush()?
I tried recreating the EntityManager:
} catch (OptimisticLockException $e) {
    $this->em = $this->container->get('doctrine')->getManager();
    echo "\n||OptimisticLockException.";
    continue;
}
And I still get:
[Doctrine\ORM\ORMException]
The EntityManager is closed.
Strange.
If I do
$this->em->lock($entity, LockMode::OPTIMISTIC, $entity->getVersion());
and then call flush(), I get an OptimisticLockException and a closed entity manager.
If I do
$this->getContainer()->get('doctrine')->resetManager();
$em = $doctrine->getManager();
the old data is not registered with this entity manager, and I can't even write logs to the database; I get this error:
[Symfony\Component\Debug\Exception\ContextErrorException]
Notice: Undefined index: 00000000514cef3c000000002ff4781e
You should check the entity version before you try to flush, to avoid the exception. In other words, you should not call the flush() method if the lock fails.
You can use the EntityManager#lock() method to check whether you can flush the entity or not.
/** @var EntityManager $em */
$entity = $em->getRepository('Post')->find($_REQUEST['id']);

// Get expected version (easiest way is to have the version number as a hidden form field)
$expectedVersion = $_REQUEST['version'];

// Update your entity
$entity->setText($_REQUEST['text']);

try {
    // assert you are editing the right version
    $em->lock($entity, LockMode::OPTIMISTIC, $expectedVersion);
    // if $em->lock() fails, flush() is not called and the EntityManager is not closed
    $em->flush();
} catch (OptimisticLockException $e) {
    echo "Sorry, but someone else has already changed this entity. Please apply the changes again!";
}
Check the example in the Doctrine docs on optimistic locking.
Unfortunately, nearly 4 years later, Doctrine is still unable to recover from an optimistic lock properly.
Using the lock function as suggested in the doc doesn't work if the db was changed by another server or php worker thread. The lock function only makes sure the version number wasn't changed by the current php script since the entity was loaded into memory. It doesn't read the db to make sure the version number is still the expected one.
And even if it did read the db, there is still the potential for a race condition between the time the lock function checks the current version in the db and the flush is performed.
Consider this scenario:
server A reads the entity,
server B reads the same entity,
server B updates the db,
server A updates the db <== optimistic lock exception
The exception is triggered when flush is called and there is nothing that can be done to prevent it.
Even a pessimistic lock won't help, unless you can afford to lose performance and actually lock your db for a (relatively) long time.
Doctrine's solution (update... where version = :expected_version) is good in theory. But, sadly, Doctrine was designed to become unusable once an exception is triggered. Any exception. Every entity is detached. Even if the optimistic lock can be easily solved by re-reading the entity and applying the change again, Doctrine makes it very hard to do so.
As others have said, sometimes EntityManager#lock() is not useful. In my case, the Entity version may change during the same request.
If EntityManager closes after flush(), I proceed like this:
if (!$entityManager->isOpen()) {
    $entityManager = EntityManager::create(
        $entityManager->getConnection(),
        $entityManager->getConfiguration(),
        $entityManager->getEventManager()
    );
    // The ServiceManager should be aware of this change.
    // This is for the Zend ServiceManager; adapt this part to your use case.
    $serviceManager = $application->getServiceManager();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService(EntityManager::class, $entityManager);
    $serviceManager->setAllowOverride(false);
    // Then you should manually reload every Entity you need (or repeat the whole set of actions)
}

periodic state machine with boost statechart

I want to implement a state machine that will periodically monitor some status data (the status of my system) and react to it.
This seems to be something quite basic for a state machine (I've had this problem many times before), but I could not find a good way to do it. Here is some pseudo code to explain what I'd like to achieve:
// some data that is updated from IOs for example
MyData data;
int state = 0;
while (true) {
    update(&data); // read a packet from the serial port
                   // and update the data structure
    switch (state) {
    case 0:
        if (data.field1 == 0) state = 1;
        else doSomething();
        break;
    case 1:
        if (data.field2 > 0) state = 2;
        else doSomethingElse();
        break;
    // etc.
    }
    usleep(100000); // 100ms
}
Of course, on top of that, I want to be able to execute some actions upon entering and exiting a state, maybe perform some actions at each iteration of a state, have substates, history, etc., which is why this simplistic approach quickly becomes impractical - hence boost statechart.
I've thought about some solutions, and I'd like to get some feedback.
1) I could list all my conditions for transitions and create an event for each one. Then I would have a loop that would monitor when each of those booleans toggles, e.g. for my first condition it could be:
if (old_data.field1 != 0 && new_data.field1 == 0)
    // post an event of type Event 1
but it seems that it would quickly become difficult to manage.
2) Have a single event that all states react to. This event is posted whenever new status data is available. As a result, the current state will examine the data and decide whether to initiate a transition to another state or not.
3) Have all states inherit from an interface that defines a do_work(const MyData & data) method, which would be called externally in a loop to examine the data and decide whether to initiate a transition to another state or not.
Also, I am open to using another framework (e.g. Macho or boost MSM).
Having worked with boost MSM, statecharts and QP, my opinion is that you are on the right track with statecharts. MSM is faster, but if you don't have much experience with state machines or metaprogramming, the error messages from MSM are hard to understand when you do something wrong. boost.statechart is the cleanest and easiest to understand. As for QP, it's written in embedded style (lots of preprocessor stuff, weaker static checking), although it also works in a PC environment, and I also believe it's slower. It does have the advantage of working on a lot of small ARM and similar processors. It's not free for commercial use, as opposed to the boost solutions.
Making an event for every type of state change does not scale. I would make one type of event, EvStateChanged, and give it a data member containing a copy of or reference to the dataset (and maybe one to the old data if you need it). You can then use custom reactions to handle whatever you need from any state context. Although default transitions work quite well in a toaster-oven context (which is often used to demonstrate SM functionality), most real-world SMs I have seen have many custom reactions; don't be shy to use them.
I don't really understand enough about your problem to give a code example but something along the lines of:
while (true) {
    update(&data); // read a packet from the serial port
                   // and update the data structure
    if (data != oldData) {
        sm.process_event(EvDataChanged(data, oldData));
    }
    else {
        timeout++;
        if (timeout > MAX_TIMEOUT)
            sm.process_event(EvTimeout());
    }
    usleep(100000); // 100ms
}
and then handle your data changes in custom reactions, depending on state, along these lines:
SomeState::~SomeState() {
    DoSomethingWhenLeaving();
}

sc::result SomeState::react(const EvDataChanged & e) {
    if (e.oldData.Field1 != e.newData.Field1) {
        DoSomething();
        return transit<OtherState>();
    }
    if (e.oldData.Field2 != e.newData.Field2) {
        return transit<ErrorState>(); // is not allowed to change in this state
    }
    if (e.oldData.Field3 == 4) {
        return forward_event(); // superstate should handle this
    }
    return discard_event(); // don't care about anything else in this context
}

NHibernate Load vs. Get behavior for testing

In simple tests I can assert whether an object has been persisted by checking whether its Id is no longer at its default value. But if I delete an object and want to check that the object, and perhaps its children, are really not in the database, the object Ids will still be at their saved values.
So I need to go to the db, and I would like a helper assertion to make the tests more readable, which is where the question comes in. I like the idea of using Load to save the db call, but I'm wondering if the ensuing exceptions can corrupt the session.
Below are how the two assertions would look, I think. Which would you use?
Cheers,
Berryl
Get
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;
    var found = session.Get<T>(instance.Id);
    if (found != null) Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
}
Load
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;
    try
    {
        var found = session.Load<T>(instance.Id);
        if (found != null) Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
    }
    catch (GenericADOException)
    {
        // nothing
    }
    catch (ObjectNotFoundException)
    {
        // nothing
    }
}
edit
In either case I would be doing the fetch (Get or Load) in a new session, free of state from the session that did the save or delete.
I am trying to test cascade behavior, NOT to test NHib's ability to delete things, but maybe I am over thinking this one or there is a simpler way I haven't thought of.
Your code in the 'Load' section will always hit Assert.Fail and never throw an exception, as Load<T> will return a proxy (with the Id property set - or populated from the 1st level cache) without hitting the DB - i.e. ISession.Load will only fail if you access a property other than your Id property on a deleted entity.
As for your 'Get' section - I might be mistaken, but I think that if you delete an entity in a session - and later try to use .Get in the same session - you will get the one in the 1st level cache - and again it will not return null.
See this post for the full explanation about .Load and .Get.
If you really need to see if it is in your DB - use an IStatelessSession - or launch a child ISession (which will have an empty 1st level cache).
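For example, a stateless-session variant of the assertion might look like this (a sketch, assuming the ISessionFactory is available to the test):
public static void AssertIsTransient<T>(this T instance, ISessionFactory sessionFactory)
    where T : Entity
{
    if (instance.IsTransient()) return;
    using (var statelessSession = sessionFactory.OpenStatelessSession())
    {
        // a stateless session has no 1st level cache, so this always hits the DB
        var found = statelessSession.Get<T>(instance.Id);
        if (found != null)
            Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
    }
}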
EDIT: I thought of a bigger problem - your entity will only actually be deleted when the transaction is committed (by default the session is flushed on commit) - so unless you manually flush your session (not recommended), you will still have it in your DB.
Hope this helps.