TransactionScope with SQLite in-memory database and NHibernate - unit-testing

I'm running into a problem where a transaction does not roll back when using TransactionScope.
We're using NHibernate with an in-memory SQLite database, which limits us to a single db connection for the entire lifetime of the application (in this case, some unit tests).
using (var ts = new TransactionScope(TransactionScopeOption.Required,
                                     TimeSpan.Zero))
{
    using (var transaction = _repository.BeginTransaction())
    {
        _repository.Save(entity);
        transaction.Commit();
    }
    // ts.Complete(); <- commented Complete call still commits transaction
}
Even if I remove NHibernate's inner nested transaction so the code is simply as below, the transaction is still committed.
using (var ts = new TransactionScope(TransactionScopeOption.Required,
                                     TimeSpan.Zero))
{
    _repository.Save(entity);
} // no Complete(), but the transaction still commits
Is it expecting a freshly opened SQLite connection inside the TransactionScope block in order to enlist it in the transaction?
Again, I can't supply it with a new connection because that would clear out the database.
Using NHibernate 3.0 and SQLite 1.0.66.0, both latest versions at the time of writing.
Note: using transaction.Rollback() on the NHibernate ITransaction object correctly rolls back the transaction, it's just the TransactionScope support that doesn't seem to work.

I think I may have found the reason for this. If the connection is not opened from inside the TransactionScope block, it will not be enlisted in the transaction.
There's some information here:
http://msdn.microsoft.com/en-us/library/aa720033(v=vs.71).aspx
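As a minimal sketch of the distinction (not from the original post; the in-memory connection string is the standard SQLite one):

using (var ts = new TransactionScope())
{
    // A connection opened inside the scope enlists in Transaction.Current automatically.
    using (var fresh = new SQLiteConnection("Data Source=:memory:"))
    {
        fresh.Open();
    }

    // An already-open connection (our situation) must be enlisted manually.
    existingConnection.EnlistTransaction(System.Transactions.Transaction.Current);
}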
Solution:
I already had a .BeginTransaction() method in my repository, so I figured I'd manually enlist the connection in the ambient transaction there.
This is the code I ended up with:
/// <summary>
/// Begins an explicit transaction.
/// </summary>
/// <returns></returns>
public ITransaction BeginTransaction()
{
    if (System.Transactions.Transaction.Current != null)
    {
        // Enlist the session's existing connection in the ambient transaction.
        ((DbConnection)Session.Connection).EnlistTransaction(System.Transactions.Transaction.Current);
    }
    return Session.BeginTransaction();
}
And here's how I'm using it:
using (var ts = new TransactionScope(TransactionScopeOption.Required, TimeSpan.Zero))
using (var transaction = repository.BeginTransaction())
{
    repository.Save(entity);
    transaction.Commit(); // NHibernate transaction is committed
    // ts.Complete(); // TransactionScope is not completed
} // transaction is correctly rolled back now

Related

Set autocommit off in PostgreSQL with SOCI C++

This is an answer rather than a question, but I need to state it on SO anyway. I struggled with this question ("how to turn off autocommit when using the SOCI library with PostgreSQL databases") for a long time and came up with several solutions.
In Oracle, auto-commit is off by default and we have to call soci::session::commit explicitly to commit the transactions we have made, but in PostgreSQL it is the other way around: it commits as soon as we execute a SQL statement (correct me if I'm wrong). This introduces problems when we write database-independent applications. The SOCI library provides soci::transaction to address this.
So, when we initialize a soci::transaction by passing it the soci::session, it will hold the changes we make without committing them to the database. When we finally call soci::transaction::commit, it commits the changes to the database.
soci::session sql(CONNECTION_STRING);
soci::transaction tr(sql);
try
{
    sql << "insert into soci_test(id, name) values(7, 'John')";
    tr.commit();
}
catch (std::exception& e)
{
    tr.rollback();
}
But performing a commit or rollback ends the transaction tr, and we need to initialize another soci::transaction to hold the changes we are about to make (i.e. to have an active transaction in progress). Here are some more facts about soci::transaction.
You can have only one soci::transaction instance per soci::session; if you initialize another, it replaces the first.
You cannot perform more than a single commit or rollback with one soci::transaction; the second commit or rollback raises an exception.
You can initialize a transaction and then use session::commit or session::rollback; the result is the same as transaction::commit or transaction::rollback, but as usual the transaction ends as soon as you perform the single commit or rollback.
The soci::transaction object does not need to be visible in the scope where you execute the SQL and call commit or rollback. In other words, as long as an active transaction is in progress for a session, database changes are held until you explicitly commit or roll back.
However, once the transaction instance created for the session reaches the end of its lifetime, you can no longer expect pending changes to be held.
If you ever run into "WARNING: there is no transaction in progress", perform commits and rollbacks only through soci::transaction::commit or soci::transaction::rollback.
Now, here is the solution I came up with to enable explicit commit or rollback with any database backend.
#include <memory>
#include <string>
#include <soci/soci.h>

namespace mysociutils
{
    class session : public soci::session
    {
    public:
        void open(std::string const & connectString)
        {
            soci::session::open(connectString);
            // Start holding changes immediately after opening.
            tr = std::unique_ptr<soci::transaction>(new soci::transaction(*this));
        }
        void commit()
        {
            tr->commit();
            // Begin a fresh transaction so subsequent changes are held too.
            tr = std::unique_ptr<soci::transaction>(new soci::transaction(*this));
        }
        void rollback()
        {
            tr->rollback();
            tr = std::unique_ptr<soci::transaction>(new soci::transaction(*this));
        }
        ~session()
        {
            // Discard any uncommitted changes (no-op if open() was never called).
            if (tr)
                tr->rollback();
        }
    private:
        std::unique_ptr<soci::transaction> tr;
    };
}
Whenever a commit or rollback is performed, a new soci::transaction is initialized. Now you can replace your soci::session sql with mysociutils::session sql and enjoy SET AUTOCOMMIT OFF behavior.
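A quick usage sketch (the table and values are made up for illustration):

mysociutils::session sql;
sql.open(CONNECTION_STRING);

sql << "insert into soci_test(id, name) values(8, 'Jane')";
sql.commit();   // commits, then a fresh transaction is started immediately

sql << "insert into soci_test(id, name) values(9, 'Joe')";
sql.rollback(); // discards the insert; again a new transaction is started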

entity framework 6 and pessimistic concurrency

I'm working on a project to gradually phase out a legacy application.
In the process, as a temporary solution, we integrate with the legacy application through the database.
The legacy application uses transactions with the serializable isolation level.
Because of the database integration with the legacy application, I am for the moment best off using the same pessimistic concurrency model and serializable isolation level.
These serializable transactions should not only wrap the SaveChanges call but include some reads of data as well.
I do this by:
Creating a TransactionScope around my DbContext with the serializable isolation level.
Creating a DbContext.
Doing some reads.
Making some changes to objects.
Calling SaveChanges on the DbContext.
Completing the transaction scope (thus saving the changes).
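In code, the flow looks roughly like this (a sketch rather than the actual code; MyDbContext, Orders and orderId are placeholder names):

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Serializable
};
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var context = new MyDbContext())
{
    var order = context.Orders.Single(o => o.Id == orderId); // read
    order.Quantity += 1;                                     // change
    context.SaveChanges();                                   // write
    scope.Complete();                                        // commit on scope disposal
}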
I am under the impression that this wraps all my reads and writes in one serializable transaction and then commits.
I consider this a form of pessimistic concurrency.
However, this article, https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application, states that EF does not support pessimistic concurrency.
My questions are:
A: Does EF support my way of using a serializable transaction around reads and writes?
B: Does wrapping the reads and writes in one transaction guarantee that the data I read is not changed before the transaction commits?
C: This is a form of pessimistic concurrency, right?
One way to achieve pessimistic concurrency is to use something like this:
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Serializable,
    Timeout = new TimeSpan(0, 0, 0, 10)
};
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
{ ... stuff here ... }
In VS2017 it seems you have to right-click TransactionScope and have it add a reference to Reference Assemblies\Microsoft\Framework.NETFramework\v4.6.1\System.Transactions.dll.
However, if you have two threads attempt to increment the same counter, you will find that one succeeds while the other throws a timeout after 10 seconds. The reason is that when both proceed to saving changes, each needs to upgrade its shared lock to an exclusive one, but neither can because the other transaction is still holding a shared lock on the same row. SQL Server detects the deadlock after a while and fails one of the transactions to resolve it. Failing that transaction releases its shared lock, so the remaining transaction can upgrade its shared lock to an exclusive lock and proceed.
The way out of this deadlocking is to provide an UPDLOCK hint to the database using something like:
private static TestEntity GetFirstEntity(Context context)
{
    // UPDLOCK takes an update lock on read, so no shared-to-exclusive
    // upgrade (and hence no deadlock) is needed at save time.
    return context.TestEntities
        .SqlQuery("SELECT TOP 1 Id, Value FROM TestEntities WITH (UPDLOCK)")
        .Single();
}
This code came from Ladislav Mrnka's blog which now looks to be unavailable. The other alternative is to resort to optimistic locking.
The document states that EF does not have built-in pessimistic concurrency support. But this does not mean you can't have pessimistic locking with EF. So YOU CAN HAVE PESSIMISTIC LOCKING WITH EF!
The recipe is simple:
use transactions (not necessarily serializable, since that leads to poor performance) - read committed is OK to use... but it depends...
lock your table - execute the T-SQL manually, or feel free to use the code attached below
do your changes and call dbContext.SaveChanges()
the given T-SQL command with its hints will keep the table locked for the duration of the transaction
there's one thing to take care of: your loaded entities might be stale at the point you take the lock, so all entities from the locked table should be re-fetched (reloaded)
I have done a lot of pessimistic locking, but optimistic locking is better. You can't go wrong with it.
A typical example where pessimistic locking can't help is a parent-child relation, where you lock the parent and treat it as an aggregate (so you assume you are the only one with access to the child too). If another thread tries to access the parent object, it won't work (it will be blocked) until the first thread releases the lock on the parent table. But with an ORM, any other coder can load the child independently - and from that point on, two threads can make changes to the child object... With pessimistic locking you might mess up the data; with optimistic locking you get an exception, and you can reload valid data and try to save again...
So the code:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Linq;

public static class DbContextSqlExtensions
{
    public static void LockTable<Entity>(this DbContext context) where Entity : class
    {
        var tableWithSchema = context.GetTableNameWithSchema<Entity>();
        // TABLOCKX + HOLDLOCK: exclusive table lock held until the transaction ends.
        context.Database.ExecuteSqlCommand(string.Format("SELECT null as dummy FROM {0} WITH (tablockx, holdlock)", tableWithSchema));
    }
}

public static class DbContextExtensions
{
    public static string GetTableNameWithSchema<T>(this DbContext context)
        where T : class
    {
        var entitySet = GetEntitySet<T>(context);
        if (entitySet == null)
            throw new Exception(string.Format("Unable to find entity set '{0}' in edm metadata", typeof(T).Name));
        var tableName = GetStringProperty(entitySet, "Schema") + "." + GetStringProperty(entitySet, "Table");
        return tableName;
    }

    private static EntitySet GetEntitySet<T>(DbContext context)
    {
        var type = typeof(T);
        var entityName = type.Name;
        var metadata = ((IObjectContextAdapter)context).ObjectContext.MetadataWorkspace;
        IEnumerable<EntitySet> entitySets;
        entitySets = metadata.GetItemCollection(DataSpace.SSpace)
            .GetItems<EntityContainer>()
            .Single()
            .BaseEntitySets
            .OfType<EntitySet>()
            .Where(s => !s.MetadataProperties.Contains("Type")
                || s.MetadataProperties["Type"].ToString() == "Tables");
        var entitySet = entitySets.FirstOrDefault(t => t.Name == entityName);
        return entitySet;
    }

    private static string GetStringProperty(MetadataItem entitySet, string propertyName)
    {
        MetadataProperty property;
        if (entitySet == null)
            throw new ArgumentNullException("entitySet");
        if (entitySet.MetadataProperties.TryGetValue(propertyName, false, out property))
        {
            string str = null;
            if (((property != null) &&
                (property.Value != null)) &&
                (((str = property.Value as string) != null) &&
                !string.IsNullOrEmpty(str)))
            {
                return str;
            }
        }
        return string.Empty;
    }
}
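For completeness, a usage sketch combining the extensions above with an EF6 transaction (MyDbContext and TestEntity are placeholder names):

using (var context = new MyDbContext())
using (var transaction = context.Database.BeginTransaction())
{
    // Take the exclusive table lock for the duration of the transaction.
    context.LockTable<TestEntity>();

    // Reload, since entities loaded before the lock may be stale.
    var entity = context.Set<TestEntity>().First();
    entity.Value++;

    context.SaveChanges();
    transaction.Commit(); // the table lock is released here
}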

How avoid closing EntityManager when OptimisticLockException occurs?

My problem: a process tries to change an entity that has already been changed elsewhere and now has a newer version id. When I call flush(), an OptimisticLockException is raised inside UnitOfWork's commit() and caught there by a catch-all block, and in that catch block Doctrine closes the EntityManager.
If I want to skip this entity and continue with the others in the ArrayCollection, should I avoid calling flush()?
I tried recreating the EntityManager:
} catch (OptimisticLockException $e) {
    $this->em = $this->container->get('doctrine')->getManager();
    echo "\n||OptimisticLockException.";
    continue;
}
And I still get:
[Doctrine\ORM\ORMException]
The EntityManager is closed.
Strange.
If I do
$this->em->lock($entity, LockMode::OPTIMISTIC, $entity->getVersion());
and then call flush(), I get an OptimisticLockException and a closed EntityManager.
If I do
$this->getContainer()->get('doctrine')->resetManager();
$em = $doctrine->getManager();
the old data is unregistered from this entity manager and I can't even write logs to the database; I get this error:
[Symfony\Component\Debug\Exception\ContextErrorException]
Notice: Undefined index: 00000000514cef3c000000002ff4781e
You should check the entity version before you try to flush, to avoid the exception. In other words, you should not call flush() if the lock fails.
You can use the EntityManager#lock() method to check whether you can flush the entity or not.
/** @var EntityManager $em */
$entity = $em->getRepository('Post')->find($_REQUEST['id']);

// Get the expected version (easiest way is to have the version number as a hidden form field)
$expectedVersion = $_REQUEST['version'];

// Update your entity
$entity->setText($_REQUEST['text']);

try {
    // Assert that you are editing the right version
    $em->lock($entity, LockMode::OPTIMISTIC, $expectedVersion);
    // If $em->lock() fails, flush() is not called and the EntityManager is not closed
    $em->flush();
} catch (OptimisticLockException $e) {
    echo "Sorry, but someone else has already changed this entity. Please apply the changes again!";
}
Check the example in the Doctrine docs on optimistic locking.
Unfortunately, nearly 4 years later, Doctrine is still unable to recover from an optimistic lock properly.
Using the lock function as suggested in the doc doesn't work if the db was changed by another server or php worker thread. The lock function only makes sure the version number wasn't changed by the current php script since the entity was loaded into memory. It doesn't read the db to make sure the version number is still the expected one.
And even if it did read the db, there is still the potential for a race condition between the time the lock function checks the current version in the db and the flush is performed.
Consider this scenario:
server A reads the entity,
server B reads the same entity,
server B updates the db,
server A updates the db <== optimistic lock exception
The exception is triggered when flush is called and there is nothing that can be done to prevent it.
Even a pessimistic lock won't help unless you can afford to lose performance and actually lock your db for a (relatively) long time.
Doctrine's solution (update... where version = :expected_version) is good in theory. But, sadly, Doctrine was designed to become unusable once an exception is triggered. Any exception. Every entity is detached. Even if the optimistic lock can be easily solved by re-reading the entity and applying the change again, Doctrine makes it very hard to do so.
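A rough sketch of that recovery (re-read and re-apply; Post, $postId and $newText are made-up names, and rebuilding the EntityManager mirrors the next answer):

try {
    $em->flush();
} catch (OptimisticLockException $e) {
    // The EntityManager is closed now; build a fresh one on the same connection.
    $em = EntityManager::create(
        $em->getConnection(),
        $em->getConfiguration(),
        $em->getEventManager()
    );
    // Re-read the current state, re-apply the change and try again.
    $post = $em->find(Post::class, $postId);
    $post->setText($newText);
    $em->flush();
}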
As others have said, sometimes EntityManager#lock() is not useful. In my case, the Entity version may change during the same request.
If the EntityManager is closed after flush(), I proceed like this:
if (!$entityManager->isOpen()) {
    $entityManager = EntityManager::create(
        $entityManager->getConnection(),
        $entityManager->getConfiguration(),
        $entityManager->getEventManager()
    );
    // The ServiceManager should be aware of this change.
    // This is for Zend ServiceManager; adapt this part to your use case.
    $serviceManager = $application->getServiceManager();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService(EntityManager::class, $entityManager);
    $serviceManager->setAllowOverride(false);
    // Then you should manually reload every entity you need (or repeat the whole set of actions).
}

How to manually set a primary key in Doctrine2

I am importing data into a new Symfony2 project using Doctrine2 ORM.
All new records should have an auto-generated primary key. However, for my import, I would like to preserve the existing primary keys.
I am using this as my Entity configuration:
type: entity
id:
    id:
        type: integer
        generator: { strategy: AUTO }
I have also created a setter for the id field in my entity class.
However, when I persist and flush this entity to the database, the key I manually set is not preserved.
What is the best workaround or solution for this?
The following answer is not mine but OP's, which was posted in the question. I've moved it into this community wiki answer.
I stored a reference to the Connection object and used that to manually insert rows and update relations. This avoids the persister and identity generators altogether. It is also possible to use the Connection to wrap all of this work in a transaction.
Once you have executed the insert statements, you may then update the relations.
This is a good solution because it avoids any potential problems you may experience when swapping out your configuration on a live server.
In your init function:
// Get the Connection
$this->connection = $this->getContainer()->get('doctrine')->getEntityManager()->getConnection();
In your main body:
// Loop over my array of old data adding records
$this->connection->beginTransaction();
foreach (array_slice($records, 1) as $record)
{
    $this->addRecord($records[0], $record);
}
try
{
    $this->connection->commit();
}
catch (Exception $e)
{
    $output->writeln($e->getMessage());
    $this->connection->rollBack();
    exit(1);
}
Create this function:
// Add a record to the database using Connection
protected function addRecord($columns, $oldRecord)
{
    // Insert data into Record table
    $record = array();
    foreach ($columns as $key => $column)
    {
        $record[$column] = $oldRecord[$key];
    }
    $record['id'] = $record['rkey'];

    // Insert the data
    $this->connection->insert('Record', $record);
}
You've likely already considered this, but my approach would be to set the generator strategy to 'none' for the import, so you can manually assign the existing ids in your client code. Once the import is complete, change the generator strategy back to 'auto' to let the RDBMS take over from there. A conditional can determine whether the id setter is invoked. Good luck - let us know what you end up deciding to use.
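A sketch of that temporary mapping change, mirroring the question's YAML configuration:

type: entity
id:
    id:
        type: integer
        generator: { strategy: NONE }

With strategy NONE, an id assigned through the entity's setter before persist() is written as-is; switch the strategy back to AUTO once the import is done.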

Using MbUnit3's [Rollback] for unit testing NHibernate interactions with SQLite

Background:
My team is dedicated to ensuring that straight from checkout, our code compiles and unit tests run successfully. To facilitate this and test some of our NHibernate mappings, we've added a SQLite DB to our repository which is a mirror of our production SQL Server 2005 database. We're using the latest versions of: MbUnit3 (part of Gallio), System.Data.SQLite and NHibernate.
Problem:
I've discovered that the following unit test does not work with SQLite, despite executing without trouble against SQL Server 2005.
[Test]
[Rollback]
public void CompleteCanPersistNode()
{
    // Returns a Configuration for either SQLite or SQL Server 2005,
    // depending on how the project is configured.
    Configuration config = GetDbConfig();

    ISessionFactory sessionFactory = config.BuildSessionFactory();
    ISession session = sessionFactory.OpenSession();

    Node node = new Node();
    node.Name = "Test Node";
    node.PhysicalNodeType = session.Get<NodeType>(1);
    // SQLite fails with the exception below after the next line is called.
    node.NodeLocation = session.Get<NodeLocation>(2);

    session.Save(node);
    session.Flush();

    Assert.AreNotEqual(-1, node.NodeID);
    Assert.IsNotNull(session.Get<Node>(node.NodeID));
}
The exception I'm getting (ONLY when working with SQLite) follows:
NHibernate.ADOException: cannot open connection --->
System.Data.SQLite.SQLiteException:
The database file is locked database is locked
at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery()
at System.Data.SQLite.SQLiteTransaction..ctor(SQLiteConnection connection, Boolean deferredLock)
at System.Data.SQLite.SQLiteConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at System.Data.SQLite.SQLiteConnection.BeginTransaction()
at System.Data.SQLite.SQLiteEnlistment..ctor(SQLiteConnection cnn, Transaction scope)
at System.Data.SQLite.SQLiteConnection.EnlistTransaction(Transaction transaction)
at System.Data.SQLite.SQLiteConnection.Open()
at NHibernate.Connection.DriverConnectionProvider.GetConnection()
at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
--- End of inner exception stack trace ---
at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
at NHibernate.AdoNet.ConnectionManager.GetConnection()
at NHibernate.AdoNet.AbstractBatcher.Prepare(IDbCommand cmd)
at NHibernate.AdoNet.AbstractBatcher.ExecuteReader(IDbCommand cmd)
at NHibernate.Loader.Loader.GetResultSet(IDbCommand st, Boolean autoDiscoverTypes, Boolean callable, RowSelection selection, ISessionImplementor session)
at NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.LoadEntity(ISessionImplementor session, Object id, IType identifierType, Object optionalObject, String optionalEntityName, Object optionalIdentifier, IEntityPersister persister)
at NHibernate.Loader.Entity.AbstractEntityLoader.Load(ISessionImplementor session, Object id, Object optionalObject, Object optionalId)
at NHibernate.Loader.Entity.AbstractEntityLoader.Load(Object id, Object optionalObject, ISessionImplementor session)
at NHibernate.Persister.Entity.AbstractEntityPersister.Load(Object id, Object optionalObject, LockMode lockMode, ISessionImplementor session)
at NHibernate.Event.Default.DefaultLoadEventListener.LoadFromDatasource(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.DoLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.Load(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.ProxyOrLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType)
at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType)
at NHibernate.Impl.SessionImpl.Get(String entityName, Object id)
at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id)
at NHibernate.Impl.SessionImpl.Get[T](Object id)
D:\dev\598\Code\test\unit\DataAccess.Test\NHibernatePersistenceTests.cs
When SQLite is used and the [Rollback] attribute is NOT specified, the test also completes successfully.
Question:
Is this an issue with System.Data.SQLite's implementation of TransactionScope support, which MbUnit3 uses for [Rollback], or a limitation of the SQLite engine?
Is there some way to write this unit test, working against SQLite, that will rollback so as to avoid affecting the database each time the test is run?
This is not a real answer to your question, but probably a solution to the problem.
I use an in-memory SQLite database for my integration tests. I build up the schema and fill the database before each test. The schema creation and initial data filling happen really fast (less than 0.01 seconds per test) because it's an in-memory database.
Why do you use a physical database?
Edit: in response to the answers to the question above:
1.) Because I migrated my schema and data directly from SQL Server 2005 and I want it to persist in source control.
I recommend storing a file with the database schema, plus a file or script that creates the sample data, in source control. You can generate the schema file using SQL Server Management Studio Express, generate it from your NHibernate mappings, or use a tool like SQL Compare; you can probably find other options when you need them. Plain text files are easier to store in version control systems than complete binary database files.
2.) Does something about the in-memory SQLite engine differ such that it would resolve this difficulty?
It might solve your problems, because you can recreate your database before each test: the database under test will be in the state you expect before each test executes. A benefit is that there is no need to roll back your transactions. I have run similar tests with in-memory SQLite and it worked as expected.
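A sketch of that per-test setup using NHibernate's SchemaExport (the configuration helper is assumed to point at an in-memory SQLite connection string such as "Data Source=:memory:"):

[SetUp]
public void CreateSchema()
{
    Configuration config = GetDbConfig(); // assumed to target in-memory SQLite
    _sessionFactory = config.BuildSessionFactory();
    _session = _sessionFactory.OpenSession();

    // Export the mapped schema onto the session's open connection;
    // an in-memory database lives only as long as that connection.
    new SchemaExport(config).Execute(false, true, false, _session.Connection, null);

    // ... insert sample data here ...
}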
Check whether you're missing connection.release_mode=on_close in your SQLite NHibernate configuration (see the reference docs).
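In XML configuration that is a single property on the session factory, roughly:

<property name="connection.release_mode">on_close</property>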
BTW: always dispose your ISession and ISessionFactory.
Ditch [Rollback] and use NDbUnit. I use this myself for this exact scenario and it has been working great.