JPA - How to truncate tables between unit tests - unit-testing

I want to cleanup the database after every test case without rolling back the transaction. I have tried DBUnit's DatabaseOperation.DELETE_ALL, but it does not work if a deletion violates a foreign key constraint. I know that I can disable foreign key checks, but that would also disable the checks for the tests (which I want to prevent).
I'm using JUnit 4, JPA 2.0 (Eclipselink), and Derby's in-memory database. Any ideas?
Thanks,
Theo

The simplest way to do this is probably to issue native truncate queries through JPA:

@After
public void cleanup() {
    EntityManager em = entityManagerFactory.createEntityManager();
    em.getTransaction().begin();
    em.createNativeQuery("truncate table person").executeUpdate();
    em.createNativeQuery("truncate table preferences").executeUpdate();
    em.getTransaction().commit();
    em.close();
}

Simple: start a new transaction before each test and roll it back after the test. That gives you back the same database state you started with.
Make sure the tests don't create new transactions of their own; reuse the existing one instead.

I am a bit confused, as DBUnit will reinitialize the database to a known state before every test.
They also recommend, as a best practice, not to clean up or otherwise change the data after the test.
So if the cleanup is meant to prepare the database for the next test, I would not bother.

Yes, an in-transaction test would make your life much easier, but if committing real transactions is your thing, then you need to implement compensating transaction(s) during cleanup (in @After). It sounds laborious, and it might be, but if properly approached you may end up with a set of helper methods (in tests) that compensate for (clean up) the data accumulated during @Before and the tests (using JPA or straight JDBC, whatever makes sense).
For example, if you use JPA and call create methods on entities during tests, you may apply (using AOP if you fancy, or just helper test methods like us) a pattern across all tests to:
track the ids of all entities that have been created during the test
accumulate them in creation order
replay entity deletes for these entities in reverse order in @After
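The bookkeeping for this pattern can be sketched with plain collections. This is a minimal sketch; the EntityTracker name and the stack-based approach are my own, and in a real test the deletion order would drive em.remove() calls inside the @After transaction:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Tracks ids of entities created during a test so that the @After hook
// can issue compensating deletes in reverse creation order.
class EntityTracker {
    // Used as a stack: the entity created last is deleted first,
    // so child rows disappear before the parents they reference.
    private final Deque<Object> createdIds = new ArrayDeque<>();

    void recordCreated(Object id) {
        createdIds.push(id);
    }

    // Ids in deletion order (reverse of creation order).
    // In a real test, iterate this and call em.remove(em.find(..., id)).
    List<Object> deletionOrder() {
        return new ArrayList<>(createdIds);
    }
}
```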

My setup is quite similar: it's Derby (embedded) + OpenJPA 1.2.2 + DBUnit. Here's how I handle integration tests for my current task: in every @Before method I run 3 scripts:
Drop DB — an SQL script that drops all tables.
Create DB — an SQL script that recreates them.
A test-specific DB unit XML script to populate the data.
My database has only 12 tables and the test data set is not very big, either — about 50 records. Each script takes about 500 ms to run and I maintain them manually when tables are added or modified.
This approach is probably not recommended for testing big databases, and perhaps it cannot even be considered good practice for small ones; however, it has one important advantage over rolling back the transaction in the @After method: you can actually detect what happens at commit (like persisting detached entities or optimistic lock exceptions).
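The script-running part of this setup can be sketched with plain JDBC. This is a minimal sketch under the assumption that the scripts contain simple semicolon-terminated statements (no procedure bodies or semicolons inside string literals); the ScriptRunner class and its method names are invented for illustration:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

class ScriptRunner {
    // Naive split on ';' -- good enough for plain DDL/DML scripts
    // without string literals or procedure bodies containing semicolons.
    static List<String> splitStatements(String script) {
        List<String> result = new ArrayList<>();
        for (String part : script.split(";")) {
            String stmt = part.trim();
            if (!stmt.isEmpty()) {
                result.add(stmt);
            }
        }
        return result;
    }

    // Executes every statement of the script on the given connection,
    // e.g. from an @Before method with the drop/create scripts.
    static void runScript(Connection conn, String script) throws SQLException {
        try (Statement st = conn.createStatement()) {
            for (String sql : splitStatements(script)) {
                st.executeUpdate(sql);
            }
        }
    }
}
```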

Better late than never...
I just had the same problem and came across a pretty simple solution:
set the property javax.persistence.schema-generation.database.action to the value "drop-and-create" in your persistence-unit config
close the entity manager and the entity manager factory after each test
persistence.xml

<persistence-unit name="Mapping4" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>...</class>
    <class>...</class>
    <properties>
        ...
        <property name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
        ...
    </properties>
</persistence-unit>
unit-test:

...
@Before
public void setup() {
    factory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME);
    entityManager = factory.createEntityManager();
}

@After
public void tearDown() {
    entityManager.clear();
    entityManager.close();
    factory.close();
}
...

I delete the DB file after each run:

    boolean deleted = Files.deleteIfExists(Paths.get("pathToDbFile"));

A little dirty, but it works for me.
Regards

Option 1: You can disable foreign key checks before truncating the tables and re-enable them after the truncation. The checks will still be active during the tests themselves this way.
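Since the checks are only switched off inside the cleanup, the tests themselves still run with constraints enforced. A sketch that builds such a cleanup batch, using H2's SET REFERENTIAL_INTEGRITY syntax as an example (Derby has no equivalent one-liner; there you would have to drop and re-add the constraints instead); the TruncateHelper class is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

class TruncateHelper {
    // Builds the cleanup batch: checks are switched off only for the
    // duration of the truncation and switched back on immediately after.
    static List<String> buildCleanup(List<String> tables) {
        List<String> sql = new ArrayList<>();
        sql.add("SET REFERENTIAL_INTEGRITY FALSE");   // H2 syntax
        for (String table : tables) {
            sql.add("TRUNCATE TABLE " + table);
        }
        sql.add("SET REFERENTIAL_INTEGRITY TRUE");
        return sql;
    }
}
```

Each statement would then be executed with executeUpdate in an @After method, as in the native-query answer above.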
Option 2: The H2 database destroys the in-memory database when the last connection is closed. I guess Derby supports something similar, or you could switch to H2.
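If you do go the H2 route, this behaviour is controlled from the JDBC URL: by default the in-memory database dies with the last connection, and DB_CLOSE_DELAY=-1 keeps it alive until the JVM exits (the database name testdb below is arbitrary):

```
# gone as soon as the last connection closes (the default)
jdbc:h2:mem:testdb

# kept alive until the JVM exits
jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
```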
See also: this answer to a related question, where I wrote code to truncate tables before each test using Hibernate: https://stackoverflow.com/a/63747005/471214

Related

neo4j functionality & unit testing - resetting the database

I've created an application in node.js and have Mocha tests to perform automated unit and functionality testing.
I'm now trying to test database functionality, and want the database to be reset between each test for consistency.
Solution 1
Before each test I was running:
MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r
and then populating the database with Cypher queries obtained using the neo4j-shell dump command. However, the problem with this is that those Cypher queries use the internal neo4j ids to create links between nodes and relationships, and because the delete query above does not reset the internal neo4j id counter to 0, it all goes wrong when you try to run it!
Solution 2
I then looked at physically shutting down the neo4j server, removing the database directory and then rebooting it and populating it. This works, but it takes around 15 seconds, which is useless when I've got 200+ unit tests to run!
Solution 3
I've also looked at transactions in order to be able to roll the database back once the test had completed, but it seems that all queries have to go through the transaction endpoint. I don't think this is feasible.
Are there any other ways of doing this? I think solution 1 shows the most promise, but it'd mean going through and changing all my exported cypher queries to avoid using the internal neo4j ids.
For example I'd have to change:
create (_113:`User` {`firstname`:"John", `lastname`:"Smith", `uuid`:"f843c210-26e3-11e5-af31-297c662c0848"})
create (_114:`Instrument` {`name`:"Drums", `uuid`:"f84521a0-26e3-11e5-af31-297c662c0848"})
create _113-[:`PLAYS`]->_114
To:
create (_113:`User` {`firstname`:"John", `lastname`:"Smith", `uuid`:"f843c210-26e3-11e5-af31-297c662c0848"})
create (_114:`Instrument` {`name`:"Drums", `uuid`:"f84521a0-26e3-11e5-af31-297c662c0848"})
MATCH (a:User),(b:Instrument) WHERE a.uuid = 'f843c210-26e3-11e5-af31-297c662c0848' AND b.uuid = 'f84521a0-26e3-11e5-af31-297c662c0848' CREATE UNIQUE (a)-[r:`PLAYS`]->(b) RETURN r
Which is a real pain with a large dataset.
Any thoughts?
As FrobberOfBits kindly suggested, have a look at GraphAware RestTest built precisely for your purpose.

mocking database objects with Rhino mock

I am sorry if this question has already been asked. I am very new to unit testing, and I am supposed to use Rhino Mocks for mocking.
The problem is: I have a method to test, and that method is supposed to retrieve some data based on an input parameter and return it as a DataTable.
It also does some calculation to find out which stored procedure should be called, and with which set of parameters.
The issue is that when I call the method with mock objects, it throws an error ("object reference not set to an instance of an object") at the line of code that retrieves data from the database. That is expected, as no data comes back from the database since we are mocking it.
So what can be done in that case?
Seems like it is a good time to introduce the Repository pattern.
If you introduce it, both the logic that generates the DB query and the logic that reads data from the DB will be encapsulated in the repository.
You can then mock/stub the repository in your tests and unit test all the classes that use it, without creating a test DB at all.
The repository mock will verify whether the incoming parameters are correct.
And the repository stub will return whatever test-specific data you need for each particular test.
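To illustrate the idea in a language-neutral way (the sketch below is Java rather than C#/Rhino Mocks, and all class and method names are invented): the class under test depends only on the repository interface, so a hand-rolled stub can feed it canned data without any database.

```java
import java.util.List;

// The repository hides both query generation and data access.
interface PersonRepository {
    List<String> findNamesByCity(String city);
}

// Class under test: contains only the logic we actually want to unit test.
class GreetingService {
    private final PersonRepository repo;

    GreetingService(PersonRepository repo) {
        this.repo = repo;
    }

    String greetEveryoneIn(String city) {
        List<String> names = repo.findNamesByCity(city);
        return names.isEmpty() ? "nobody home" : "hello " + String.join(", ", names);
    }
}

// Test stub: returns canned data, no database involved.
class StubPersonRepository implements PersonRepository {
    @Override
    public List<String> findNamesByCity(String city) {
        return "Oslo".equals(city) ? List.of("Ann", "Bob") : List.of();
    }
}
```

With Rhino Mocks the stub would be generated rather than hand-written, but the structure of the test is the same.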

Why is Entity Manager clear() required? - Spring3 @Transactional, JPA2/Hibernate3

I have a JSF2 application that is using JPA2/Hibernate with Spring @Transactional. There are no @Transactional annotations in the UI (backing beans), only in the service layer. (I am using @Transactional(propagation=Propagation.MANDATORY) in the DAOs to ensure every call occurs in a transaction.) It all works very nicely, except...
When I am opening and updating the entities through the transactional service methods, the retrieved entities are sometimes old. It doesn't matter that it's the same user in the same session; occasionally the JPA "read" methods return older, stale entities that have (or should have) already been replaced. This stumped me for quite a while, but it turns out it is caused by caching in the EntityManager. The DAOs are annotated with @Repository, so the injected EntityManager is being reused. I had expected that when the transaction completed, the entity manager would automatically be cleared, but that is not the case. Usually the EntityManager returns the correct value, but often it reaches back and returns an old one from an earlier transaction instead.
As a workaround, I have sprinkled strategic entityManager.clear() calls in the DAO read methods, but that is ugly. The entity managers should be cleared after each transaction.
Has anyone experienced this? Is there a proper solution? Can the entity manager be cleared after each transaction?
Thanks very much.
I am using: org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean and org.springframework.orm.jpa.JpaTransactionManager
The @Transactional annotation belongs in the service layer. The service methods marked with @Transactional will adhere to the ACID properties no matter how many DAO calls are made from within them.
This means that you need not annotate the DAO methods as @Transactional.
I am working on something similar; this is how I have done it, and my data is consistent.
Try this and see if you are still getting inconsistent data.
Do you use the @PersistenceContext annotation (above the EntityManager in the DAO) combined with a PersistenceAnnotationBeanPostProcessor bean (you don't have to define the PersistenceAnnotationBeanPostProcessor bean if you are using the <context:annotation-config/> and <context:component-scan/> XML tags)? If not, I guess this is the reason for your problems.
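For reference, the corresponding XML wiring might look roughly like this (a sketch; the bean ids and the persistence-unit name are placeholders). With <context:annotation-config/> in place, Spring registers a PersistenceAnnotationBeanPostProcessor automatically, which is what makes @PersistenceContext inject a properly transaction-scoped EntityManager:

```
<context:annotation-config/>

<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="myUnit"/>
</bean>

<bean id="transactionManager"
      class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
```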

Testing Mongoose Node.JS app

I'm trying to write unit tests for parts of my Node app. I'm using Mongoose for my ORM.
I've searched a bunch for how to do testing with Mongoose and Node but haven't come up with anything. The solutions/frameworks all seem to be full-stack or make no mention of mocking.
Is there a way I can mock my Mongoose DB so I can return static data in my tests? I'd rather not have to set up a test DB and fill it with data for every unit test.
Has anyone else encountered this?
I too went looking for answers, and ended up here. This is what I did:
I started off using mockery to mock out the module that my models were in, and then creating my own mock module with each model hanging off it as a property. These properties wrapped the real models (so that child properties exist for the code under test). Then I overrode the methods I wanted to manipulate for the test, like save. This had the advantage that mockery is able to undo the mocking.
but...
I don't really care enough about undoing the mocking to write wrapper properties for every model. So now I just require my module and override the functions I want to manipulate. I will probably run tests in separate processes if it becomes an issue.
In the arrange part of my tests:
// mock out database saves
var db = require("../../schema");
db.Model1.prototype.save = function(callback) {
console.log("in the mock");
callback();
};
db.Model2.prototype.save = function(callback) {
console.log("in the mock");
callback("mock staged an error for testing purposes");
};
I solved this by structuring my code a little. I'm keeping all my mongoose-related stuff in separate classes with APIs like "save", "find", "delete", and no other class accesses the database directly. Then I simply mock those in tests that rely on data.
I did something similar with the actual objects that are returned. For every model I have in mongoose, I have a corresponding class that wraps it and provides access-methods to fields. Those are also easily mocked.
Also worth mentioning:
mockgoose - In-memory DB that mocks Mongoose, for testing purposes.
monckoose - Similar, but takes a different approach (implements a fake driver). Monckoose seems to be unpublished as of March 2015.

Django testing execution order and tables

In situations where a test executes and changes test database tables, would the database tables return to their original state after each test? If not, how do I know in what order the tests are executed, so that I can predict the state of the database tables? For example,
class SimpleTest(TestCase):
    def test_insert(self):
        # test that data is correctly added to the database
        ...

    def test_other_thing(self):
        # is the data inserted above available here?
        ...
The database is rolled back at the end of every test.
For proper test isolation, when tests touch the database, you need to inherit from django.test.TestCase which handles database state isolation between one test execution and another.
Never, ever, depend on test execution order: if you need to, you are doing it wrong, because you are violating test isolation.
Remember that you don't need to use only unittest.TestCase or only django.test.TestCase: you can mix them as needed (you don't need the latter if your test does not touch the database).
Note that django.test.TestCase uses transactions to speed up database state cleanup after each test, so if you need to actually test a database transaction you need to use django.test.TransactionTestCase (see https://docs.djangoproject.com/en/dev/topics/testing/#testcase).