Configuring Unitils properties dynamically - unit-testing

I am testing EJB 3.1. I have a situation where I need to start a transaction manually in my test, perform some CRUD operations within it (to create test data that is not yet committed), and then call a method in my bean, to which the transaction from my test will be propagated.
By default, when using the Unitils DatabaseModule, transactions are created automatically for each test. I understand it is possible to change this default by modifying unitils.properties as follows:
DatabaseModule.Transactional.value.default=disabled
My question: is it possible to change this configuration dynamically in the test method? I do not want transactions to be disabled always; by default they should be committed ("commit"), and only when required do I want to set the mode to "disabled".
-Thanks.

You could try this: https://stackoverflow.com/a/6561782/411229
Not sure if it will work for transaction configuration, but worth a shot.
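If a per-class override is enough, Unitils also provides a @Transactional annotation (org.unitils.database.annotations.Transactional) that overrides the property for a single test class. A minimal sketch, as I understand the Unitils API (I have not checked whether it can also be applied per method):
import org.junit.Test;
import org.unitils.UnitilsJUnit4;
import org.unitils.database.annotations.Transactional;
import org.unitils.database.util.TransactionMode;

// Transactions are disabled only for this test class; other tests keep the
// default mode from unitils.properties (e.g. commit).
@Transactional(TransactionMode.DISABLED)
public class ManualTransactionTest extends UnitilsJUnit4 {

    @Test
    public void createsTestDataInItsOwnTransaction() {
        // begin/commit the transaction yourself here, e.g. via a UserTransaction
    }
}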

Related

Cleaning up shared_context variables in rspec

I am using RSpec.shared_context to set variables that all the describe blocks will use.
Something like this:
RSpec.shared_context "common" do
  let(:name) { create_db_object } # placeholder: creates a database object
  # more let statements
end
Now I include it from a describe block like so:
describe "common test" do
  include_context "common"
  # run a few tests
end
Now, after running the describe block, I want to clean it up. How do I roll back all the objects created in the shared context?
I tried cleaning it up in the after(:context) hook, but since name is defined with let, the variable is only available inside examples.
Is there some way I can use use_transactional_fixtures to clean this up after running the tests in the describe block?
You don't need to worry about cleaning up your "lets" if you just set up your test suite properly to wipe the database.
Use let to define a memoized helper method. The value will be cached across multiple calls in the same example but not across examples. Note that let is lazy-evaluated: it is not evaluated until the first time the method it defines is invoked.
In almost every case you want teardown to happen automatically and per example. That's what config.use_transactional_fixtures does: it rolls back the database after every example, so you have a fresh slate and don't get test-ordering issues. Relying on each example or context to explicitly clean up after itself is a recipe for failure.
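If you are on rspec-rails, that typically means enabling transactional fixtures in your spec helper; a minimal sketch (the option is named use_transactional_tests in newer rspec-rails versions):
# spec/rails_helper.rb (or spec_helper.rb)
RSpec.configure do |config|
  # Wrap every example in a database transaction that is rolled back
  # afterwards, so records created by the shared context's lets disappear.
  config.use_transactional_fixtures = true
end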

neo4j functionality & unit testing - resetting the database

I've created an application in node.js and have Mocha tests to perform automated unit and functionality testing.
I'm now trying to test database functionality, and want the database to be reset between each test for consistency.
Solution 1
Before each test I was running:
MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r
and then populating the database with Cypher queries obtained using the neo4j-shell dump command. However, the problem with this is that those Cypher queries utilise the internal neo4j ids to create links between nodes, and because the delete query above doesn't reset the internal neo4j id counter to 0, it all goes wrong when you try to run them!
Solution 2
I then looked at physically shutting down the neo4j server, removing the database directory, and then rebooting and repopulating it. This works, but it takes around 15 seconds, which is useless when I've got 200+ unit tests to run!
Solution 3
I've also looked at transactions, in order to be able to roll the database back once each test has completed, but it seems that all queries would then have to go through the transactional endpoint. I don't think this is feasible.
Are there any other ways of doing this? I think solution 1 shows the most promise, but it'd mean going through and changing all my exported Cypher queries to avoid using the internal neo4j ids.
For example I'd have to change:
create (_113:`User` {`firstname`:"John", `lastname`:"Smith", `uuid`:"f843c210-26e3-11e5-af31-297c662c0848"})
create (_114:`Instrument` {`name`:"Drums", `uuid`:"f84521a0-26e3-11e5-af31-297c662c0848"})
create _113-[:`PLAYS`]->_114
To:
create (_113:`User` {`firstname`:"John", `lastname`:"Smith", `uuid`:"f843c210-26e3-11e5-af31-297c662c0848"})
create (_114:`Instrument` {`name`:"Drums", `uuid`:"f84521a0-26e3-11e5-af31-297c662c0848"})
MATCH (a:User),(b:Instrument) WHERE a.uuid = 'f843c210-26e3-11e5-af31-297c662c0848' AND b.uuid = 'f84521a0-26e3-11e5-af31-297c662c0848' CREATE UNIQUE (a)-[r:`PLAYS`]->(b) RETURN r
Which is a real pain with a large dataset.
Any thoughts?
As FrobberOfBits kindly suggested, have a look at GraphAware RestTest, which was built precisely for this purpose.
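For example, a Mocha hook that clears the graph before each test could look roughly like the sketch below. This assumes GraphAware RestTest is installed on the server and exposes a clear endpoint at /graphaware/resttest/clear (double-check the exact path against the GraphAware RestTest documentation), and it uses the request npm package:
var request = require('request');

beforeEach(function (done) {
  // Assumed GraphAware RestTest endpoint that wipes the database quickly,
  // without restarting the server.
  request.post('http://localhost:7474/graphaware/resttest/clear', function (err) {
    done(err);
  });
});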

Why is Entity Manager clear() required? - Spring3 @Transactional, JPA2/Hibernate3

I have a JSF2 application that is using JPA2/Hibernate with Spring @Transactional. There are no @Transactional annotations in the UI (backing beans), only in the service layer. (I am using @Transactional(propagation=Propagation.MANDATORY) in the DAOs to ensure every call occurs in a transaction.) It all works very nicely, except...
When I am opening and updating the entities through the transactional service methods, sometimes the retrieved entities are old. It doesn't matter that it's the same user in the same session; occasionally, the JPA "read" methods return stale entities that have (should have) already been replaced. This stumped me for quite a while, but it turns out it is caused by caching in the EntityManager. The DAOs are annotated with @Repository, so the injected EntityManager is being reused. I had expected that when the transaction completed, the entity manager would automatically be cleared, but that is not the case. Usually the EntityManager returns the correct value, but sometimes it reaches back and returns an old one from an earlier transaction instead.
As a workaround, I have sprinkled strategic entityManager.clear() calls in the DAO read methods, but that is ugly. The entity managers should be cleared after each transaction.
Has anyone experienced this? Is there a proper solution? Can the entity manager be cleared after each transaction?
Thanks very much.
I am using: org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean and org.springframework.orm.jpa.JpaTransactionManager
The @Transactional annotation belongs in the service layer. Service methods marked with @Transactional will adhere to the ACID properties no matter how many DAO calls are made from within them.
This means that you do not need to annotate the DAO methods with @Transactional as well.
I am working on something similar; this is how I have done it, and my data is consistent.
Try this and see if you are still getting inconsistent data.
Do you use the @PersistenceContext annotation (on the EntityManager field in the DAO) combined with a PersistenceAnnotationBeanPostProcessor bean? (You don't have to define the PersistenceAnnotationBeanPostProcessor bean if you are using the <context:annotation-config/> and <context:component-scan/> XML tags.) If not, I guess this is the reason for your problems.
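For reference, this is roughly what such a DAO looks like; InvoiceDao and Invoice are placeholder names. With @PersistenceContext, Spring injects a shared, transaction-scoped EntityManager proxy, so the persistence context (and its first-level cache) goes away when the transaction ends instead of accumulating stale entities:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class InvoiceDao {  // placeholder DAO name

    // Transaction-scoped proxy, not a single long-lived EntityManager instance
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(propagation = Propagation.MANDATORY)
    public Invoice find(Long id) {  // Invoice is a placeholder entity
        return entityManager.find(Invoice.class, id);
    }
}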

Entity Framework error during unit test

I'm using the entity framework.
In one of my unit tests I have a line like:
this.Set<T>().Add(entity);
On executing that line I get:
System.InvalidOperationException : The model backing the 'InvoiceNewDataContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data.
Well I've actually deleted the database and removed the connection string.
I'm surprised this error is happening on Add, as I wouldn't expect it to happen until I saved the data and it discovered there was no database.
In previous projects/solutions I created during unit tests I have been able to add to the context for test purposes without actually calling SaveChanges.
Would anyone know why this would be happening in my latest projects/solutions?
Are you sure it really didn't use a database in your previous projects? If you do not specify any connection string, it will silently use a default one pointing to a SQL Express database with a local .mdf file, so make sure that isn't happening now.
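If the model check itself is the problem, the error message already points at the two usual options; a minimal sketch, using the InvoiceNewDataContext name from the error:
using System.Data.Entity;

public static class TestDatabaseSetup
{
    public static void Initialize()
    {
        // Option 1: let EF drop and recreate the database when the model changes.
        Database.SetInitializer(new DropCreateDatabaseIfModelChanges<InvoiceNewDataContext>());

        // Option 2: for tests that never call SaveChanges, disable database
        // initialization entirely so the model-compatibility check never runs.
        // Database.SetInitializer<InvoiceNewDataContext>(null);
    }
}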

Django testing execution order and tables

In situations where a test executes and changes test database tables, do the tables return to their original state after each test? If not, how can I know in what order the tests are executed, so that I can predict the state of the database tables? For example:
from django.test import TestCase

class SimpleTest(TestCase):
    def test_insert(self):
        # test that data is correctly added to the database
        pass

    def test_other_thing(self):
        # is the data inserted above available here?
        pass
The database is rolled back at the end of every test.
For proper test isolation, when tests touch the database, you need to inherit from django.test.TestCase which handles database state isolation between one test execution and another.
Never, ever, depend on test execution order: if you need to, you are doing it wrong, because you are violating test isolation.
Remember that you don't need to use only unittest.TestCase or only django.test.TestCase: you can mix them as needed (you don't need the latter if your test does not touch the database).
Note that django.test.TestCase uses transactions to speed up database state cleanup after each test, so if you need to actually test a database transaction you need to use django.test.TransactionTestCase (see https://docs.djangoproject.com/en/dev/topics/testing/#testcase).
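A minimal sketch of the difference between the two base classes (class and test names are illustrative):
from django.test import TestCase, TransactionTestCase

class InsertTest(TestCase):
    # Each test runs inside a transaction that Django rolls back,
    # so rows created here are not visible to any other test.
    def test_insert(self):
        pass

class CommitBehaviourTest(TransactionTestCase):
    # Use this only when the code under test manages transactions itself;
    # Django then truncates the tables after each test instead of rolling back.
    def test_commit_behaviour(self):
        pass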