H2 database restore to particular state for testing

We use an H2 database to execute tests. To isolate each test from the others, the database schema and basic data setup are dropped and re-created before each test.
Is it possible to create a restore point after the initial setup of the database and restore the data to that point before each test?
SCRIPT just creates an SQL file with all tables and data, which is not much different from our own initialization.
The question database restore to particular state for testing asks the same thing, just for Oracle and Postgres.

An old question, but I find it is still relevant. AFAIK there is no restore-point support.
Here is a simple, yet fast approach to backup/restore.
Create a backup prior to running the first test:
Connection conn = DriverManager.getConnection("jdbc:h2:mem:myDatabase;DB_CLOSE_DELAY=-1;LOG=0");
Statement stat = conn.createStatement();
stat.execute("SCRIPT TO 'memFS:myDatabase.sql'");
stat.close();
conn.close();
Restore after each test:
Connection conn = DriverManager.getConnection("jdbc:h2:mem:myDatabase;DB_CLOSE_DELAY=-1;LOG=0");
Statement stat = conn.createStatement();
stat.execute("DROP ALL OBJECTS");
stat.close();
conn.close();
conn = DriverManager.getConnection("jdbc:h2:mem:myDatabase;DB_CLOSE_DELAY=-1;INIT=runscript from 'memFS:myDatabase.sql';LOG=0");
conn.close();
Note that the SHUTDOWN command turned out to be faster than DROP ALL OBJECTS, but it caused some issues (the connection pool was unable to re-establish a connection).
I would not say the above approach is slow, far from it. But with a large database and thousands of tests there is still room for improvement, as the method above takes some time. I managed to make backup/restore a few times faster (~15 ms for a DB with ~350 tables) by manually composing a script that performs TRUNCATE TABLE, ALTER SEQUENCE and the INSERT of all initial data (it needs SET REFERENTIAL_INTEGRITY FALSE for the cleanup/restore procedure to be really fast). The code is cumbersome but was worth the effort.
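A minimal sketch of that idea, with placeholder table, sequence and column names (they are not from the original setup):
// Sketch: fast reset between tests; table/sequence names are placeholders.
Connection conn = DriverManager.getConnection("jdbc:h2:mem:myDatabase;DB_CLOSE_DELAY=-1;LOG=0");
Statement stat = conn.createStatement();
stat.execute("SET REFERENTIAL_INTEGRITY FALSE");              // skip FK checks during cleanup
stat.execute("TRUNCATE TABLE some_table");                    // repeat for every table
stat.execute("ALTER SEQUENCE some_sequence RESTART WITH 1");  // repeat for every sequence
stat.execute("INSERT INTO some_table (id, name) VALUES (1, 'initial')"); // re-insert seed data
stat.execute("SET REFERENTIAL_INTEGRITY TRUE");
stat.close();
conn.close();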

Related

Persist an entity without attaching it to the EntityManager

I want to bulk-import Doctrine entities from an XML file.
The XML file can be very large (up to 1 million entities), so I can't persist all my entities the traditional way:
$em->beginTransaction();
while ($entity = $xmlReader->readNextEntity()) {
    $em->persist($entity);
}
$em->flush();
$em->commit();
I would soon exceed my memory limit, and Doctrine is not really designed to handle that many managed entities.
I don't need to track changes to the persisted entities, just to persist them; therefore I don't want them to be managed by the EntityManager.
Is it possible to persist entities without getting them managed by the EntityManager?
The first option that comes to my mind is to detach each entity immediately after persisting it:
$em->beginTransaction();
while ($entity = $xmlReader->readNextEntity()) {
    $em->persist($entity);
    $em->flush($entity);
    $em->detach($entity);
}
$em->commit();
But this is quite expensive in Doctrine, and would slow down the import.
The other option would be to directly insert the data into the database using the Connection object and a prepared statement, but I like the abstraction of the entity and would ideally like to store the object directly.
Instead of using detach and flush after each insert, you can call clear (which detaches all entities from the manager) and flush in batches, which should be significantly faster:
Bulk inserts in Doctrine are best performed in batches, taking advantage of the transactional write-behind behavior of an EntityManager. The following code shows an example for inserting 10000 objects with a batch size of 20. You may need to experiment with the batch size to find the size that works best for you. Larger batch sizes mean more prepared statement reuse internally but also mean more work during flush.
https://doctrine-orm.readthedocs.org/projects/doctrine-orm/en/latest/reference/batch-processing.html
If possible, I recommend avoiding transactions for bulk operations as they tend to slow things down:
//$em->beginTransaction();
$i = 0;
while ($entity = $xmlReader->readNextEntity()) {
    $em->persist($entity);
    if (++$i % 20 == 0) {
        $em->flush();
        $em->clear(); // detaches all entities
    }
}
$em->flush(); // persist objects that did not make up an entire batch
$em->clear();
//$em->commit();

Transactions with Python sqlite3

I'm trying to port some code to Python that uses sqlite databases, and I'm trying to get transactions to work, but I'm getting really confused. I've used sqlite a lot in other languages, because it's great, but I simply cannot work out what's wrong here.
Here is the schema for my test database (to be fed into the sqlite3 command line tool).
BEGIN TRANSACTION;
CREATE TABLE test (i integer);
INSERT INTO "test" VALUES(99);
COMMIT;
Here is a test program.
import sqlite3
sql = sqlite3.connect("test.db")
with sql:
    c = sql.cursor()
    c.executescript("""
        update test set i = 1;
        fnord;
        update test set i = 0;
        """)
You may notice the deliberate mistake in it. This causes the SQL script to fail on the second line, after the update has been executed.
According to the docs, the with sql statement is supposed to set up an implicit transaction around the contents, which is only committed if the block succeeds. However, when I run it, I get the expected SQL error... but the value of i is set from 99 to 1. I'm expecting it to remain at 99, because that first update should be rolled back.
Here is another test program, which explicitly calls commit() and rollback().
import sqlite3
sql = sqlite3.connect("test.db")
try:
    c = sql.cursor()
    c.executescript("""
        update test set i = 1;
        fnord;
        update test set i = 0;
        """)
    sql.commit()
except sql.Error:
    print("failed!")
    sql.rollback()
This behaves in precisely the same way --- i gets changed from 99 to 1.
Now I'm calling BEGIN and COMMIT explicitly:
import sqlite3
sql = sqlite3.connect("test.db")
try:
    c = sql.cursor()
    c.execute("begin")
    c.executescript("""
        update test set i = 1;
        fnord;
        update test set i = 0;
        """)
    c.execute("commit")
except sql.Error:
    print("failed!")
    c.execute("rollback")
This fails too, but in a different way. I get this:
sqlite3.OperationalError: cannot rollback - no transaction is active
However, if I replace the calls to c.execute() with c.executescript(), then it works (i remains at 99)!
(I should also add that if I put the begin and commit inside the inner call to executescript then it behaves correctly in all cases, but unfortunately I can't use that approach in my application. In addition, changing sql.isolation_level appears to make no difference to the behaviour.)
Can someone explain to me what's happening here? I need to understand this; if I can't trust the transactions in the database, I can't make my application work...
Python 2.7, python-sqlite3 2.6.0, sqlite3 3.7.13, Debian.
For anyone who'd like to work with the sqlite3 lib regardless of its shortcomings, I found that you can keep some control of transactions if you do these two things:
set Connection.isolation_level = None (as per the docs, this means autocommit mode)
avoid using executescript at all, because according to the docs it "issues a COMMIT statement first" - i.e., trouble. Indeed I found it interferes with any manually set transactions.
So then, the following adaptation of your test works for me:
import sqlite3
sql = sqlite3.connect("/tmp/test.db")
sql.isolation_level = None
c = sql.cursor()
c.execute("begin")
try:
    c.execute("update test set i = 1")
    c.execute("fnord")
    c.execute("update test set i = 0")
    c.execute("commit")
except sql.Error:
    print("failed!")
    c.execute("rollback")
Per the docs,
Connection objects can be used as context managers that automatically commit or rollback transactions. In the event of an exception, the transaction is rolled back; otherwise, the transaction is committed:
Therefore, if you let Python exit the with-statement when an exception occurs, the transaction will be rolled back.
import sqlite3
filename = '/tmp/test.db'
with sqlite3.connect(filename) as conn:
    cursor = conn.cursor()
    sqls = [
        'DROP TABLE IF EXISTS test',
        'CREATE TABLE test (i integer)',
        'INSERT INTO "test" VALUES(99)',]
    for sql in sqls:
        cursor.execute(sql)

try:
    with sqlite3.connect(filename) as conn:
        cursor = conn.cursor()
        sqls = [
            'update test set i = 1',
            'fnord', # <-- trigger error
            'update test set i = 0',]
        for sql in sqls:
            cursor.execute(sql)
except sqlite3.OperationalError as err:
    print(err)
    # near "fnord": syntax error

with sqlite3.connect(filename) as conn:
    cursor = conn.cursor()
    cursor.execute('SELECT * FROM test')
    for row in cursor:
        print(row)
        # (99,)
yields
(99,)
as expected.
Python's DB API tries to be smart, and begins and commits transactions automatically.
I would recommend using a DB driver that does not use the Python DB API, like apsw.
Here's what I think is happening based on my reading of Python's sqlite3 bindings as well as official Sqlite3 docs. The short answer is that if you want a proper transaction, you should stick to this idiom:
with connection:
    connection.execute("BEGIN")
    # do other things, but do NOT use 'executescript'
Contrary to my intuition, with connection does not call BEGIN upon entering the scope. In fact it doesn't do anything at all in __enter__. It only has an effect when you __exit__ the scope, choosing either COMMIT or ROLLBACK depending on whether the scope is exiting normally or with an exception.
Therefore, the right thing to do is to always explicitly mark the beginning of your transactional with connection blocks using BEGIN. This renders isolation_level irrelevant within the block, because thankfully it only has an effect while autocommit mode is enabled, and autocommit mode is always suppressed within transaction blocks.
Another quirk is executescript, which always issues a COMMIT before running your script. This can easily mess up the transactional with connection block, so your choice is to either
use exactly one executescript within the with block and nothing else, or
avoid executescript entirely; you can call execute as many times as you want, subject to the one-statement-per-execute limitation.
Normal .execute()'s work as expected with the comfortable default auto-commit mode and the with conn: ... context manager doing auto-commit OR rollback - except for protected read-modify-write transactions, which are explained at the end of this answer.
The sqlite3 module's non-standard conn_or_cursor.executescript() doesn't take part in the (default) auto-commit mode (and so doesn't work normally with the with conn: ... context manager) but forwards the script rather raw. Therefore it just commits any potentially pending auto-commit transaction at the start, before "going raw".
This also means that without a "BEGIN" inside the script, executescript() works without a transaction, and thus there is no rollback option upon error or otherwise.
So with executescript() we had better use an explicit BEGIN (just as your initial schema creation script did for the "raw" mode sqlite command line tool). This interaction shows step by step what's going on:
>>> list(conn.execute('SELECT * FROM test'))
[(99,)]
>>> conn.executescript("BEGIN; UPDATE TEST SET i = 1; FNORD; COMMIT")
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
OperationalError: near "FNORD": syntax error
>>> list(conn.execute('SELECT * FROM test'))
[(1,)]
>>> conn.rollback()
>>> list(conn.execute('SELECT * FROM test'))
[(99,)]
>>>
The script didn't reach the "COMMIT". And thus we could view the current intermediate state and decide for rollback (or commit nevertheless).
Thus a working try-except-rollback via executescript() looks like this:
>>> list(conn.execute('SELECT * FROM test'))
[(99,)]
>>> try: conn.executescript("BEGIN; UPDATE TEST SET i = 1; FNORD; COMMIT")
... except Exception as ev:
...     print("Error in executescript (%s). Rolling back" % ev)
...     conn.executescript('ROLLBACK')
...
Error in executescript (near "FNORD": syntax error). Rolling back
<sqlite3.Cursor object at 0x011F56E0>
>>> list(conn.execute('SELECT * FROM test'))
[(99,)]
>>>
(Note the rollback via script here, because no .execute() took over commit control)
And here is a note on the auto-commit mode in combination with the more difficult issue of a protected read-modify-write transaction - which made @Jeremie say "Out of all the many, many things written about transactions in sqlite/python, this is the only thing that let me do what I want (have an exclusive read lock on the database)." in a comment on an example which included a c.execute("begin"). Though sqlite3 normally does not take a long blocking exclusive read lock except for the duration of the actual write-back, it uses a more clever 5-stage locking scheme to achieve enough protection against overlapping changes.
The with conn: auto-commit context does not already put or trigger a lock strong enough for a protected read-modify-write in the 5-stage locking scheme of sqlite3. Such a lock is made implicitly only when the first data-modifying command is issued - thus too late.
Only an explicit BEGIN (DEFERRED) (TRANSACTION) triggers the wanted behavior:
The first read operation against a database creates a SHARED lock and the first write operation creates a RESERVED lock.
So a protected read-modify-write transaction which uses the programming language in a general way (and not a special atomic SQL UPDATE clause) looks like this:
# assumes an open sqlite3 connection `conn` and `import time`
with conn:
    conn.execute('BEGIN TRANSACTION') # crucial !
    v = conn.execute('SELECT * FROM test').fetchone()[0]
    v = v + 1
    time.sleep(3) # no read lock in effect, but only one concurrent modify succeeds
    conn.execute('UPDATE test SET i=?', (v,))
Upon failure such read-modify-write transaction could be retried a couple of times.
You can use the connection as a context manager. It will then automatically roll back the transaction in the event of an exception, or commit it otherwise.
try:
    with con:
        con.execute("insert into person(firstname) values (?)", ("Joe",))
except sqlite3.IntegrityError:
    print("couldn't add Joe twice")
See https://docs.python.org/3/library/sqlite3.html#using-the-connection-as-a-context-manager
This is a bit of an old thread, but if it helps: I've found that doing a rollback on the connection object does the trick.

How to integration test an object with database queries

How can I write unit integration tests that talk to a database? For example:
public int GetAppLockCount(DbConnection connection)
{
    string query :=
        "SELECT"+CRLF+
        " tl.resource_type AS ResourceType,"+CRLF+
        " tl.resource_description AS ResourceName,"+CRLF+
        " tl.request_session_id AS spid"+CRLF+
        "FROM sys.dm_tran_locks tl"+CRLF+
        "WHERE tl.resource_type = 'APPLICATION'"+CRLF+
        "AND tl.resource_database_id = ("+CRLF+
        " SELECT dbid"+CRLF+
        " FROM master.dbo.sysprocesses"+CRLF+
        " WHERE spid = @@spid)";

    IRecordset rdr = Connection.Execute(query);

    int nCount = 0;
    while not rdr.EOF do
    {
        nCount := nCount+1;
        rdr.Next;
    }
    return nCount;
}
In this case I am trying to exorcise this code of bugs (the IRecordset returns an empty recordset).
[UnitTest]
void TestGetLockCountShouldAlwaysSucceed();
{
    DbConnection conn = GetConnectionForUnit_IMean_IntegrationTest();
    GetAppLockCount(conn);
    CheckTrue(True, "We should reach here, whether there are app locks or not");
}
Now all I need is a way to connect to some database when running a unit integration test.
Do people store connection strings somewhere for the test-runner to find? A .ini or .xml or .config file?
Note: Language/framework agnostic. The code intentionally contains elements from:
C#
Delphi
ADO.net
ADO
NUnit
DUnit
in order to drive that point home.
Now all I need is a way to connect to some database when running a unit integration test.
Either use an existing database or an in-memory database. I've tried both and currently use an existing database that is splatted and rebuilt using Liquibase scripts in an ant file.
Advantages to in-memory - no dependencies on other applications.
Disadvantages - Not quite as real, can take time to start up.
Advantages to real database - Can be identical to the real world
Disadvantages - Requires access to a 3rd party machine. More work setting up a new user (i.e. create new database)
Do people store connection strings somewhere for the test-runner to find? A .ini or .xml or .config file?
Yeap. In C# I used a .config file, in Java a .props file. With in-memory you can throw this into version control as it will be the same for each person; with a real database running somewhere, it will need to be different for each user.
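As a rough Java sketch of the .props idea (the file name and property keys below are made up for illustration):
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TestDatabase {
    // Reads the connection string for the test database from a properties file
    // kept next to the tests. "test-db.properties" and the keys are hypothetical.
    public static Connection open() throws Exception {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream("test-db.properties")) {
            props.load(in);
        }
        return DriverManager.getConnection(
                props.getProperty("db.url"),
                props.getProperty("db.user"),
                props.getProperty("db.password"));
    }
}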
You will also need to consider seed data. In Java I've used dbUnit in the past. Not the most readable, but works. Now I use a Ruby ActiveRecord task.
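And a hedged sketch of the dbUnit seed-data idea (assumes a recent dbUnit with flat XML data sets; the data-set file name is made up):
import java.io.File;
import java.sql.Connection;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class SeedData {
    // Clears the tables named in the data set and re-inserts the seed rows,
    // so each test starts from the same data. "seed-data.xml" is hypothetical.
    public static void load(Connection jdbcConnection) throws Exception {
        IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("seed-data.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }
}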
How do you start this? First, can you rebuild your database? You need to be able to automate this before going too far down this road.
Next you should build up a blank local database for your tests. I go with one per developer; some other teams share, but don't commit. In a .NET/MS SQL world I think in-memory would be quite easy to do.

many queries in a task to generate json

So I've got a task to build which is going to archive a ton of data in our DB into JSON.
To give you a better idea of what is happening: X has 100s of Ys, and Y has 100s of Zs and so on. I'm creating a JSON file for every X, Y, and Z. But every X JSON file has an array of ids for the child Ys of X, and likewise the Ys store an array of child Zs.
It's more complicated than that in many cases, but you should get an idea of the complexity involved from that example, I think.
I was using ColdFusion but it seems to be a bad choice for this task because it is crashing due to memory errors. It seems to me that if it were removing queries from memory that are no longer referenced while running the task (i.e. garbage collecting) then the task should have enough memory, but as far as I can tell ColdFusion isn't doing any garbage collection at all, and must be doing it only after a request is complete.
So I'm looking either for advice on how to better achieve my task in CF, or for recommendations on other languages to use.
Thanks.
1) If you have debugging enabled, ColdFusion will hold on to your queries until the page is done. Turn it off!
2) You may need to structDelete() the query variable to allow it to be garbage collected, otherwise it may persist as long as the scope that has a reference to it exists. E.g.:
<cfset structDelete(variables,'myQuery') />
3) A cfquery pulls the entire ResultSet into memory. Most of the time this is fine. But for reporting on a large result set, you don't want this. Some JDBC drivers support setting the fetchSize, which, in a forward-only, read-only fashion, will let you get a few results at a time. This way you can deal with thousands and thousands of rows without swamping memory. I just generated a 1 GB CSV file in ~80 seconds, using less than 100 MB of heap. This requires dropping out to Java. But it kills two birds with one stone. It reduces the amount of data brought in at a time by the JDBC driver, and since you're working directly with the ResultSet, you don't hit the cfloop problem @orangepips mentioned. Granted, it's not for those without some Java chops.
You can do it something like this (you need cfusion.jar in your build path):
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource; // assumed; adjust to the type returned by getDatasource() in your CF version
import au.com.bytecode.opencsv.CSVWriter;
import coldfusion.server.ServiceFactory;

public class CSVExport {
    public static void export(String dsn, String query, String fileName) {
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;
        FileWriter fw = null;
        BufferedWriter bw = null;
        try {
            DataSource ds = ServiceFactory.getDataSourceService().getDatasource(dsn);
            conn = ds.getConnection();
            // we want a forward-only, read-only result.
            // you may need to use a PreparedStatement instead.
            stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY
            );
            // we only want to go forward!
            stmt.setFetchDirection(ResultSet.FETCH_FORWARD);
            // how many records to pull back at a time.
            // the hard part is balancing memory usage and round trips to the database.
            // basically sacrificing speed for a lower memory hit.
            stmt.setFetchSize(256);
            rs = stmt.executeQuery(query);
            // do something with the ResultSet, for example write to csv using opencsv.
            // the key is to stream it. you don't want it stored in memory.
            // so excel spreadsheets and pdf files are out, but text formats like
            // csv, json, html, and some binary formats like MDB (via jackcess)
            // that support streaming are in.
            fw = new FileWriter(fileName);
            bw = new BufferedWriter(fw);
            CSVWriter writer = new CSVWriter(bw);
            writer.writeAll(rs, true);
        }
        catch (Exception e) {
            // handle your exception.
            // maybe try ServiceFactory.getLoggingService() if you want to do a cflog.
            e.printStackTrace();
        }
        finally {
            try { rs.close(); } catch (Exception e) {}
            try { stmt.close(); } catch (Exception e) {}
            try { conn.close(); } catch (Exception e) {}
            try { bw.close(); } catch (Exception e) {}
            try { fw.close(); } catch (Exception e) {}
        }
    }
}
Figuring out how to pass parameters, logging, turning this into a background process (hint: extend Thread) etc. are separate issues, but if you grok this code, it shouldn't be too difficult.
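To illustrate the background-process hint only (the class name here is made up, not part of the original answer), a bare-bones wrapper could look like this:
// Hypothetical wrapper that runs the CSVExport.export() call above off the request thread.
public class CSVExportThread extends Thread {
    private final String dsn;
    private final String query;
    private final String fileName;

    public CSVExportThread(String dsn, String query, String fileName) {
        this.dsn = dsn;
        this.query = query;
        this.fileName = fileName;
    }

    public void run() {
        CSVExport.export(dsn, query, fileName);
    }
}
// usage: new CSVExportThread("myDsn", "SELECT ...", "/tmp/out.csv").start();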
4) Perhaps look at Jackson for generating your JSON. It supports streaming, and combined with the fetchSize and a BufferedOutputStream, you should be able to keep the memory usage way down.
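For illustration only, here is a minimal streaming sketch with the Jackson 2.x core API (the class name, file handling and fields are made up, not part of the original answer):
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;

public class JsonExport {
    public static void export(String fileName) throws Exception {
        JsonFactory factory = new JsonFactory();
        JsonGenerator gen = factory.createGenerator(
                new BufferedOutputStream(new FileOutputStream(fileName)));
        try {
            gen.writeStartArray();
            // in a real export you would loop over a forward-only ResultSet here,
            // writing one small object per row instead of holding everything in memory
            gen.writeStartObject();
            gen.writeNumberField("id", 1);            // hypothetical field
            gen.writeStringField("name", "example");  // hypothetical field
            gen.writeEndObject();
            gen.writeEndArray();
        }
        finally {
            gen.close(); // also flushes and closes the underlying stream
        }
    }
}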
Eric, you are absolutely correct about ColdFusion garbage collection not removing query information from memory until the request ends, and I've documented it fairly extensively in another SO question. In short, you hit OoM exceptions when you loop over queries. You can prove it using a tool like VisualVM to generate a heap dump while the process is running and then running the resulting dump through the Eclipse Memory Analyzer Tool (MAT). What MAT would show you is a large hierarchy, starting with an object named (I'm not making this up) CFDummyContent that holds, among other things, references to cfquery and cfqueryparam tags. Note, attempting to change it up to stored procs or even doing the database interaction via JDBC does not make a difference.
So. What. To. Do?
This took me a while to figure out, but you've got 3 options in increasing order of complexity:
<cfthread/>
asynchronous CFML gateway
daisy chain http requests
Using cfthread looks like this:
<cfloop ...>
    <cfset threadName = "thread" & createUuid()>
    <cfthread name="#threadName#" input="#value#">
        <!--- do query stuff --->
        <!--- code has access to passed attributes (e.g. #attributes.input#) --->
        <cfset thread.passOutOfThread = somethingGeneratedInTheThread>
    </cfthread>
    <cfthread action="join" name="#threadName#">
    <cfset passedOutOfThread = cfthread["#threadName#"].passOutOfThread>
</cfloop>
Note, this code is not taking advantage of asynchronous processing, thus the immediate join after each thread call, but rather the side effect that cfthread runs in its own request-like scope independent of the page.
I'll not cover ColdFusion gateways here. HTTP daisy chaining means executing an increment of the work, and at the end of the increment launching a request to the same algorithm telling it to execute the next increment.
Basically, all three approaches allow those memory references to be collected mid process.
And yes, for whoever asks, bugs have been raised with Adobe; see the question referenced. Also, I believe this issue is specific to Adobe ColdFusion, but I have not tested Railo or OpenBD.
Finally, I have to rant. I've spent a lot of time tracking this one down, fixing it in my own large code base, and several others listed in the question referenced have as well. AFAIK Adobe has not acknowledged the issue, much less committed to fixing it. And yes, it's a bug, plain and simple.

Using MbUnit3's [Rollback] for unit testing NHibernate interactions with SQLite

Background:
My team is dedicated to ensuring that straight from checkout, our code compiles and unit tests run successfully. To facilitate this and test some of our NHibernate mappings, we've added a SQLite DB to our repository which is a mirror of our production SQL Server 2005 database. We're using the latest versions of: MbUnit3 (part of Gallio), System.Data.SQLite and NHibernate.
Problem:
I've discovered that the following unit test does not work with SQLite, despite executing without trouble against SQL Server 2005.
[Test]
[Rollback]
public void CompleteCanPersistNode()
{
    // returns a Configuration for either SQLite or SQL Server 2005 depending on how the project is configured.
    Configuration config = GetDbConfig();
    ISessionFactory sessionFactory = config.BuildSessionFactory();
    ISession session = sessionFactory.OpenSession();

    Node node = new Node();
    node.Name = "Test Node";
    node.PhysicalNodeType = session.Get<NodeType>(1);
    // SQLite fails with the exception below after the next line is called.
    node.NodeLocation = session.Get<NodeLocation>(2);

    session.Save(node);
    session.Flush();

    Assert.AreNotEqual(-1, node.NodeID);
    Assert.IsNotNull(session.Get<Node>(node.NodeID));
}
The exception I'm getting (ONLY when working with SQLite) follows:
NHibernate.ADOException: cannot open connection --->
System.Data.SQLite.SQLiteException:
The database file is locked database is locked
at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery()
at System.Data.SQLite.SQLiteTransaction..ctor(SQLiteConnection connection, Boolean deferredLock)
at System.Data.SQLite.SQLiteConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at System.Data.SQLite.SQLiteConnection.BeginTransaction()
at System.Data.SQLite.SQLiteEnlistment..ctor(SQLiteConnection cnn, Transaction scope)
at System.Data.SQLite.SQLiteConnection.EnlistTransaction(Transaction transaction)
at System.Data.SQLite.SQLiteConnection.Open()
at NHibernate.Connection.DriverConnectionProvider.GetConnection()
at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
--- End of inner exception stack trace ---
at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
at NHibernate.AdoNet.ConnectionManager.GetConnection()
at NHibernate.AdoNet.AbstractBatcher.Prepare(IDbCommand cmd)
at NHibernate.AdoNet.AbstractBatcher.ExecuteReader(IDbCommand cmd)
at NHibernate.Loader.Loader.GetResultSet(IDbCommand st, Boolean autoDiscoverTypes, Boolean callable, RowSelection selection, ISessionImplementor session)
at NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.LoadEntity(ISessionImplementor session, Object id, IType identifierType, Object optionalObject, String optionalEntityName, Object optionalIdentifier, IEntityPersister persister)
at NHibernate.Loader.Entity.AbstractEntityLoader.Load(ISessionImplementor session, Object id, Object optionalObject, Object optionalId)
at NHibernate.Loader.Entity.AbstractEntityLoader.Load(Object id, Object optionalObject, ISessionImplementor session)
at NHibernate.Persister.Entity.AbstractEntityPersister.Load(Object id, Object optionalObject, LockMode lockMode, ISessionImplementor session)
at NHibernate.Event.Default.DefaultLoadEventListener.LoadFromDatasource(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.DoLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.Load(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.ProxyOrLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType)
at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType)
at NHibernate.Impl.SessionImpl.Get(String entityName, Object id)
at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id)
at NHibernate.Impl.SessionImpl.Get[T](Object id)
D:\dev\598\Code\test\unit\DataAccess.Test\NHibernatePersistenceTests.cs
When SQLite is used and the [Rollback] attribute is NOT specified, the test also completes successfully.
Question:
Is this an issue with System.Data.SQLite's implementation of TransactionScope which MbUnit3 uses for [Rollback] or a limitation of the SQLite engine?
Is there some way to write this unit test, working against SQLite, that will rollback so as to avoid affecting the database each time the test is run?
This is not a real answer to your question, but it will probably solve the problem.
I use an in-memory implementation of SQLite for my integration tests. I build up the schema and fill the database before each test. The schema creation and initial data filling happen really fast (less than 0.01 seconds per test) because it's an in-memory database.
Why do you use a physical database?
Edit: in response to the follow-up questions about the answer above:
1.) Because I migrated my schema and data directly from SQL Server 2005 and I want it to persist in source control.
I recommend storing a file with the database schema and a file or script that creates the sample data in source control. You can generate the file using SQL Server Management Studio Express, you can generate it from your NHibernate mappings, or you can use a tool like SQL Compare, and you can probably find other solutions for this when you need them. Plain text files are stored more easily in version control systems than complete binary database files.
2.) Does something about the in-memory SQLite engine differ such that it would resolve this difficulty?
It might solve your problems because you can recreate your database before each test. Your database under test will be in the state you expect it to be in before each test is executed. A benefit of that is that there is no need to roll back your transactions, but I have run similar tests with in-memory SQLite and it worked as expected.
Check if you're not missing connection.release_mode=on_close in your SQLite NHibernate configuration. (reference docs)
BTW: always dispose your ISession and ISessionFactory.
Ditch [Rollback] and use NDbUnit. I use this myself for this exact scenario and it has been working great.