How do I manage unit test resources in Kotlin, such as starting/stopping a database connection or an embedded elasticsearch server? - unit-testing

In my Kotlin JUnit tests, I want to start/stop embedded servers and use them within my tests.
I tried using the JUnit @Before annotation on a method in my test class and it works fine, but it isn't the right behaviour since it runs before every test case instead of just once.
Therefore I want to use the @BeforeClass annotation on a method, but adding it to a method results in an error saying it must be on a static method, and Kotlin doesn't appear to have static methods. The same applies for static variables, because I need to keep a reference to the embedded server around for use in the test cases.
So how do I create this embedded database just once for all of my test cases?
class MyTest {
    @Before fun setup() {
        // works in that it opens the database connection, but is wrong
        // since this is per test case instead of being shared for all
    }

    @BeforeClass fun setupClass() {
        // what I want to do instead, but results in error because
        // this isn't a static method, and static keyword doesn't exist
    }

    var referenceToServer: ServerType // wrong because is not static either

    // ...
}
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the answers to commonly asked Kotlin topics are present in SO.

Your unit test class usually needs a few things to manage a shared resource for a group of test methods. In Kotlin, you can use @BeforeClass and @AfterClass not in the test class, but rather within its companion object, along with the @JvmStatic annotation.
The structure of a test class would look like:
class MyTestClass {
    companion object {
        init {
            // things that may need to be setup before companion class member variables are instantiated
        }

        // variables you initialize for the class just once:
        val someClassVar = initializer()

        // variables you initialize for the class later in the @BeforeClass method:
        lateinit var someClassLateVar: SomeResource

        @BeforeClass @JvmStatic fun setup() {
            // things to execute once and keep around for the class
        }

        @AfterClass @JvmStatic fun teardown() {
            // clean up after this class, leave nothing dirty behind
        }
    }

    // variables you initialize per instance of the test class:
    val someInstanceVar = initializer()

    // variables you initialize per test case later in your @Before methods:
    lateinit var someInstanceLateVar: MyType

    @Before fun prepareTest() {
        // things to do before each test
    }

    @After fun cleanupTest() {
        // things to do after each test
    }

    @Test fun testSomething() {
        // an actual test case
    }

    @Test fun testSomethingElse() {
        // another test case
    }

    // ...more test cases
}
Given the above, you should read about:
companion objects - similar to the Class object in Java, but a singleton per class that is not static
@JvmStatic - an annotation that turns a companion object method into a static method on the outer class for Java interop
lateinit - allows a var property to be initialized later when you have a well-defined lifecycle
Delegates.notNull() - can be used instead of lateinit for a property that should be set at least once before being read.
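For example, Delegates.notNull() covers a case that lateinit cannot: properties of primitive types. A minimal sketch (the class and property names are illustrative, not from the examples below):
import kotlin.properties.Delegates

class ResourceHolder {
    // lateinit works only for non-primitive var properties:
    lateinit var serverUrl: String

    // Delegates.notNull() also works for primitives such as Int:
    var serverPort: Int by Delegates.notNull()

    fun initialize() {
        serverUrl = "http://localhost"
        serverPort = 8983
    }

    // reading either property before it is assigned fails fast:
    // UninitializedPropertyAccessException for lateinit,
    // IllegalStateException for Delegates.notNull()
}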
Here are fuller examples of test classes for Kotlin that manage embedded resources.
The first is copied and modified from the Solr-Undertow tests; before the test cases run, it configures and starts a Solr-Undertow server. After the tests run, it cleans up any temporary files created by the tests. It also ensures environment variables and system properties are correct before the tests are run. Between test cases it unloads any temporarily loaded Solr cores. The test:
class TestServerWithPlugin {
    companion object {
        val workingDir = Paths.get("test-data/solr-standalone").toAbsolutePath()
        val coreWithPluginDir = workingDir.resolve("plugin-test/collection1")

        lateinit var server: Server

        @BeforeClass @JvmStatic fun setup() {
            assertTrue(coreWithPluginDir.exists(), "test core w/plugin does not exist $coreWithPluginDir")

            // make sure no system properties are set that could interfere with test
            resetEnvProxy()
            cleanSysProps()
            routeJbossLoggingToSlf4j()
            cleanFiles()

            val config = mapOf(...)
            val configLoader = ServerConfigFromOverridesAndReference(workingDir, config) verifiedBy { loader ->
                ...
            }

            assertNotNull(System.getProperty("solr.solr.home"))

            server = Server(configLoader)
            val (serverStarted, message) = server.run()
            if (!serverStarted) {
                fail("Server not started: '$message'")
            }
        }

        @AfterClass @JvmStatic fun teardown() {
            server.shutdown()
            cleanFiles()
            resetEnvProxy()
            cleanSysProps()
        }

        private fun cleanSysProps() { ... }

        private fun cleanFiles() {
            // don't leave any test files behind
            coreWithPluginDir.resolve("data").deleteRecursively()
            Files.deleteIfExists(coreWithPluginDir.resolve("core.properties"))
            Files.deleteIfExists(coreWithPluginDir.resolve("core.properties.unloaded"))
        }
    }

    val adminClient: SolrClient = HttpSolrClient("http://localhost:8983/solr/")

    @Before fun prepareTest() {
        // anything before each test?
    }

    @After fun cleanupTest() {
        // make sure test cores do not bleed over between test cases
        unloadCoreIfExists("tempCollection1")
        unloadCoreIfExists("tempCollection2")
        unloadCoreIfExists("tempCollection3")
    }

    private fun unloadCoreIfExists(name: String) { ... }

    @Test
    fun testServerLoadsPlugin() {
        println("Loading core 'withplugin' from dir ${coreWithPluginDir.toString()}")
        val response = CoreAdminRequest.createCore("tempCollection1", coreWithPluginDir.toString(), adminClient)
        assertEquals(0, response.status)
    }

    // ... other test cases
}
And another starting AWS DynamoDB Local as an embedded database (copied and modified slightly from Running AWS DynamoDB-local embedded). This test must hack java.library.path before anything else happens, or local DynamoDB (which uses sqlite with binary libraries) won't run. Then it starts a server shared by all tests in the class, and cleans up temporary data between tests. The test:
class TestAccountManager {
    companion object {
        init {
            // we need to control the "java.library.path" or sqlite cannot find its libraries
            val dynLibPath = File("./src/test/dynlib/").absoluteFile
            System.setProperty("java.library.path", dynLibPath.toString())

            // TEST HACK: if we kill this value in the System classloader, it will be
            // recreated on next access allowing java.library.path to be reset
            val fieldSysPath = ClassLoader::class.java.getDeclaredField("sys_paths")
            fieldSysPath.setAccessible(true)
            fieldSysPath.set(null, null)

            // ensure logging always goes through Slf4j
            System.setProperty("org.eclipse.jetty.util.log.class", "org.eclipse.jetty.util.log.Slf4jLog")
        }

        private val localDbPort = 19444

        private lateinit var localDb: DynamoDBProxyServer
        private lateinit var dbClient: AmazonDynamoDBClient
        private lateinit var dynamo: DynamoDB

        @BeforeClass @JvmStatic fun setup() {
            // do not use ServerRunner, it is evil and doesn't set the port correctly, also
            // it resets logging to be off.
            localDb = DynamoDBProxyServer(localDbPort, LocalDynamoDBServerHandler(
                    LocalDynamoDBRequestHandler(0, true, null, true, true), null)
            )
            localDb.start()

            // fake credentials are required even though ignored
            val auth = BasicAWSCredentials("fakeKey", "fakeSecret")
            dbClient = AmazonDynamoDBClient(auth) initializedWith {
                signerRegionOverride = "us-east-1"
                setEndpoint("http://localhost:$localDbPort")
            }

            dynamo = DynamoDB(dbClient)

            // create the tables once
            AccountManagerSchema.createTables(dbClient)

            // for debugging reference
            dynamo.listTables().forEach { table ->
                println(table.tableName)
            }
        }

        @AfterClass @JvmStatic fun teardown() {
            dbClient.shutdown()
            localDb.stop()
        }
    }

    val jsonMapper = jacksonObjectMapper()
    val dynamoMapper: DynamoDBMapper = DynamoDBMapper(dbClient)

    @Before fun prepareTest() {
        // insert commonly used test data
        setupStaticBillingData(dbClient)
    }

    @After fun cleanupTest() {
        // delete anything that shouldn't survive any test case
        deleteAllInTable<Account>()
        deleteAllInTable<Organization>()
        deleteAllInTable<Billing>()
    }

    private inline fun <reified T : Any> deleteAllInTable() { ... }

    @Test fun testAccountJsonRoundTrip() {
        val acct = Account("123", ...)
        dynamoMapper.save(acct)

        val item = dynamo.getTable("Accounts").getItem("id", "123")
        val acctReadJson = jsonMapper.readValue<Account>(item.toJSON())
        assertEquals(acct, acctReadJson)
    }

    // ...more test cases
}
NOTE: some parts of the examples are abbreviated with ...

Managing resources with before/after callbacks in tests obviously has its pros:
Tests are "atomic". A test executes as a whole, with all its callbacks; nobody will forget to fire up a dependency service before the tests and shut it down afterwards. If done properly, such callbacks will work in any environment.
Tests are self-contained. There is no external data or setup phase; everything is contained within a few test classes.
It has some cons too. An important one is that it pollutes the code and makes the code violate the single responsibility principle. Tests now not only test something, but also perform heavyweight initialization and resource management. It can be OK in some cases (like configuring an ObjectMapper), but modifying java.library.path or spawning other processes (or in-process embedded databases) is not so innocent.
Why not treat those services as dependencies of your test, eligible for "injection", as described by 12factor.net?
This way you start and initialize dependency services somewhere outside of the test code.
Nowadays virtualization and containers are almost everywhere and most developers' machines can run Docker. And most applications have a dockerized version: Elasticsearch, DynamoDB, PostgreSQL and so on. Docker is a perfect solution for external services that your tests need.
It can be a script that is run manually by a developer every time she wants to execute tests.
It can be a task run by the build tool (e.g. Gradle has an awesome dependsOn and finalizedBy DSL for defining dependencies; see the sketch after this list). A task can, of course, execute the same script that the developer runs manually, using shell-outs / process execs.
It can be a task run by the IDE before test execution. Again, it can use the same script.
Most CI / CD providers have a notion of "service": an external dependency (process) that runs in parallel to your build and can be accessed via its usual SDK / connector / API: Gitlab, Travis, Bitbucket, AppVeyor, Semaphore, …
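To make the Gradle option concrete, here is a hedged build.gradle.kts sketch. The script paths (scripts/start-deps.sh and scripts/stop-deps.sh) are assumptions standing in for whatever starts and stops your dockerized services:
// build.gradle.kts
val startTestDependencies by tasks.registering(Exec::class) {
    commandLine("sh", "scripts/start-deps.sh") // e.g. docker-compose up -d
}

val stopTestDependencies by tasks.registering(Exec::class) {
    commandLine("sh", "scripts/stop-deps.sh")  // e.g. docker-compose down
}

tasks.named<Test>("test") {
    dependsOn(startTestDependencies)  // services come up before any test runs
    finalizedBy(stopTestDependencies) // and go down even when tests fail
}
The same two scripts can then be reused verbatim by the developer, the IDE and CI.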
This approach:
Frees your test code from initialization logic. Your tests will only test and do nothing more.
Decouples code and data. Adding a new test case can now be done by adding new data into the dependency services with their native toolsets, i.e. for SQL databases you'll use SQL, for Amazon DynamoDB you'll use the CLI to create tables and put items.
Is closer to production code, where you obviously do not start those services when your "main" application starts.
Of course, it has its flaws (basically, the statements I started from):
Tests are no longer "atomic". The dependency service must be started somehow prior to test execution, and the way it is started may differ between environments: developer's machine vs. CI, IDE vs. build tool CLI.
Tests are not self-contained. Now your seed data may even be packed inside an image, so changing it may require rebuilding a different project.
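With the services started externally, the test code itself shrinks to just the tests plus a pointer to the dependency. A hedged sketch (the ELASTICSEARCH_URL variable name is an assumed convention; whatever starts the service exports it):
import java.net.HttpURLConnection
import java.net.URL
import org.junit.Test
import kotlin.test.assertEquals

class SearchIntegrationTest {
    // the developer script, Gradle task or CI "service" is responsible for
    // having started the server; the test only needs to know where it is
    private val baseUrl = System.getenv("ELASTICSEARCH_URL") ?: "http://localhost:9200"

    @Test fun clusterIsReachable() {
        // no process management, classloader hacks or library-path tricks here
        val conn = URL("$baseUrl/_cluster/health").openConnection() as HttpURLConnection
        assertEquals(200, conn.responseCode)
    }
}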

Related

ASP.NET Web API Unit Test Autofac Module with BuildManager.GetReferencedAssemblies()

Working on a project in ASP.NET Web API 2 which has Autofac as my IoC container. This project is hosted on IIS and in my Autofac module I use the following method to scan for assemblies:
var asm = BuildManager.GetReferencedAssemblies().Cast<Assembly>().ToArray();
Why?
https://docs.autofac.org/en/latest/register/scanning.html#iis-hosted-web-applications
But now we are writing unit tests using NUnit, and during my setup I register my module, which uses this method. I now receive the following exception when running my tests:
System.InvalidOperationException: 'This method cannot be called during the application's pre-start initialization phase.'
I understand why I have this exception but I don't have a clue how to make my code work in tests and for deployment environments.
Setup method of NUnit:
[TestFixture]
public abstract class ApplicationTestBase
{
    [SetUp]
    public override void Init()
    {
        var builder = new ContainerBuilder();

        // If the class requires AutoMapper mappings, initialize them.
        // We do this in order not to init them for every test => optimisation!
        // Note: GetCustomAttributes never returns null, so check Any() instead.
        if (GetType().GetCustomAttributes<RequiresAutoMapperMappingsAttribute>(false).Any())
        {
            builder.RegisterModule<AutoMapperModule>();
        }

        this.Container = builder.Build();
    }
}
Do I need to create a new module specific to my unit tests, or is there another way to do this?
AutoMapperTest
[RequiresAutoMapperMappings]
[TestFixture]
public class AutoMapperTests : ApplicationTestBase
{
    [Test]
    public void Assert_Valid_Mappings()
    {
        Mapper.AssertConfigurationIsValid();
    }
}
UPDATE
Like Cyril mentioned: why do you need IoC in your unit tests? I went searching and indeed you don't have to use the IoC container in your tests. So I ditched the IoC and initialized my mapper configuration by doing:
Mapper.Initialize(configuration =>
{
    var asm = AppDomain.CurrentDomain.GetAssemblies()
        .Where(a => a.FullName.StartsWith("ProjectWebService."));
    configuration.AddProfiles(asm);
});
I would recommend separating the "how to load assemblies" logic from the "do assembly scanning and register modules" logic.
Right now I'm guessing you have something like this all in one method.
public IContainer BuildContainer()
{
    var asm = BuildManager.GetReferencedAssemblies().Cast<Assembly>().ToArray();
    var builder = new ContainerBuilder();
    builder.RegisterAssemblyTypes(asm);
    return builder.Build();
}
Not exactly that, but something similar - the loading of assemblies is inlined and directly used.
Separate that so you can swap that logic in for testing. For example, consider allowing a parameter to be optionally passed so you can override the logic in test.
public IContainer BuildContainer(Func<IEnumerable<Assembly>> assemblyLoader = null)
{
    IEnumerable<Assembly> asm = null;
    if (assemblyLoader != null)
    {
        asm = assemblyLoader();
    }
    else
    {
        asm = BuildManager.GetReferencedAssemblies().Cast<Assembly>();
    }

    var builder = new ContainerBuilder();
    builder.RegisterAssemblyTypes(asm.ToArray());
    return builder.Build();
}
Your default logic will work the way you want, but then in testing you can swap in something else.
var container = BuildContainer(() => AppDomain.CurrentDomain.GetAssemblies());
There are lots of ways you can do that swap-in. It could be anything from a static property you can set somewhere to a virtual method you can override somewhere. The point is, by separating the assembly loading logic you can get the test-time behavior to work but still use the registration behavior you're after.

Test runners inconsistent with HttpClient and Mocking HttpMessageRequest XUnit

So let me start by saying I've seen all the threads over the wars between creating a wrapper vs mocking the HttpMethodRequest. In the past, I've done the wrapper method with great success, but I thought I'd go down the path of Mocking the HttpMessageRequest.
For starters here is an example of the debate: Mocking HttpClient in unit tests. I want to add that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _httpClient;

    public Functions(HttpClient client)
    {
        _httpClient = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _httpClient.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public void ShouldDoCall()
{
    var requestUri = new Uri("http://example.com");
    var expectedResponse = "expected body"; // example payload
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };

    var mockHandler = new Mock<HttpClientHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(mockHandler.Object));
    var result = f.GetData(new ApiGatewayRequest(), null).Result;

    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == requestUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>()
    );

    Assert.Equal(HttpStatusCode.OK, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with Resharper 2018, they fail.
Equally, when they get run within the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they too fail.
These tests should not be run in parallel (the tests default this way). I have about 30 tests around these methods combined, and each one randomly fails on the Moq verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have even gone through trying to isolate the variables per test method to make sure there is no overlap.
So, I'm really lost with trying to handle this through here and make sure this is testable.
Are there different ways to approach the HttpClient where it can consistently pass?
After lots of back and forth, I found two things going on here.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even made the tests thread-sleep between runs to slow things down, which felt really icky to me).
I found that the local test runners were telling me all the tests passed when about half of them failed on the Docker test runner. What ended up being the issue was a magic string problem when setting and getting environment variables.
A small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.

aem-mocks property test a servlet

Trying to write some proper AEM integration tests using the aem-mocks framework. The goal is to test a servlet by calling its path, e.g. this AEM servlet:
@SlingServlet(
        paths = {"/bin/utils/emailSignUp"},
        methods = {"POST"},
        selectors = {"form"}
)
public class EmailSignUpFormServlet extends SlingAllMethodsServlet {

    @Reference
    SubmissionAgent submissionAgent;

    @Reference
    XSSFilter xssFilter;

    public EmailSignUpFormServlet() {
    }

    public EmailSignUpFormServlet(SubmissionAgent submissionAgent, XSSFilter xssFilter) {
        this.submissionAgent = submissionAgent;
        this.xssFilter = xssFilter;
    }

    @Override
    public void doPost(SlingHttpServletRequest request, SlingHttpServletResponse response) throws IOException {
        String email = request.getParameter("email");
        submissionAgent.saveForm(xssFilter.filter(email));
    }
}
Here is the corresponding test to try and do the integration testing. Notice how I've called the servlet's 'doPost' method, instead of 'POST'ing via some API.
public class EmailSignUpFormServletTest {

    @Rule
    public final AemContext context = new AemContext();

    @Mock
    SubmissionAgent submissionAgent;

    @Mock
    XSSFilter xssFilter;

    private EmailSignUpFormServlet emailSignUpFormServlet;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        Map<String, String> report = new HashMap<>();
        report.put("statusCode", "302");
        when(submissionAgent.saveForm(any(String.class))).thenReturn(report);
    }

    @Test
    public void emailSignUpFormDoesNotRequireRecaptchaChallenge() throws IOException {
        // Setup test email value
        context.request().setQueryString("email=test.only@mail.com");

        //===================================================================
        /*
         * WHAT I END UP DOING:
         */
        // instantiate a new class of the servlet
        emailSignUpFormServlet = new EmailSignUpFormServlet(submissionAgent, xssFilter);
        // call the post method (Simulate the POST call)
        emailSignUpFormServlet.doPost(context.request(), context.response());

        /*
         * WHAT I WOULD LIKE TO DO:
         */
        // send request using some API that allows me to do post to the framework
        // Example:
        // context.request().POST("/bin/utils/emailSignUp") <--- doesn't exist!
        //===================================================================

        // assert response is internally redirected, hence expected status is a 302
        assertEquals(302, context.response().getStatus());
    }
}
I've done a lot of research on how this could be done (here) and (here), and these links show a lot about how you can set various parameters on the context.request() object. However, they just don't show how to finally execute the 'POST' call.
What you are trying to do is mix a UT with an IT, so this won't be easy, at least with the aem-mocks framework. Let me explain why.
Assuming that you are able to call your required code
/*
* WHAT I WOULD LIKE TO DO:
*/
// send request using some API that allows me to do post to the framework
// Example:
// context.request().POST("/bin/utils/emailSignUp") <--- doesn't exist!
//===================================================================
Your test will end up executing all the logic in the SlingAllMethodsServlet class and its parent classes. I am assuming that this is not what you want to test, as these classes are not part of your logic and they already have their own UTs/ITs (under the respective Apache projects) to cater for testing requirements.
Also, looking at your code, the bulk of your core logic resides in the following snippet:
String email = request.getParameter("email");
submissionAgent.saveForm(xssFilter.filter(email));
Your UT criteria are already met by the following line of your code:
emailSignUpFormServlet.doPost(context.request(),context.response());
as it covers most of that logic.
Now, if you are looking for proper IT for posting the parameters and parsing them all the way down to doPost method then aem-mocks is not the framework for that because it does not provide it in a simple way.
You can, in theory, mock all the layers from resource resolver, resource provider and sling servlet executors to pass the parameters all the way to your core logic. This can work but it won't benefit your cause because:
Most of the code is already tested via other UTs.
Too many internal mocking dependencies might make the tests flaky or version-dependent.
If you really want to do a pure IT, then it will be easier to host the servlet in an instance and access it via HttpClient. This ensures that all the layers are hit. A lot of tests are done this way, but it feels a bit heavy-handed for the functionality you want to test, and there are better ways of doing it.
Also, the reason context.request().POST doesn't exist is that context.request() is a mocked state for the sake of testing. You would actually have to bind and mock HTTP POST operations, which needs some way to resolve to your servlet, and that is not supported by the framework.
Hope this helps.

Scala - write unit tests for objects/singletons that extends a trait/class with DB connection

Unit test related question
I've encountered a problem with testing Scala objects that extend another trait/class that has a DB connection (or any other "external" call).
Using a singleton with a DB connection anywhere in my project makes unit testing impossible, because I cannot override / mock the DB connection.
This results in changing my design only for test purposes in situations where it clearly should be an object.
Any suggestions?
Code snippet for a non testable code :
object How2TestThis extends SomeDBconnection {
  val somethingUsingDB = {
    getStuff.map(//some logic)
  }

  val moreThings = {
    //more things
  }
}

trait SomeDBconnection {
  import DBstuff._
  val db = connection(someDB)
  val getStuff = db.getThings
}
One of the options is to use cake pattern to require some DB connection and mixin specific implementation as desired. For example:
import java.sql.Connection

// Defines general DB connection interface for your application
trait DbConnection {
  def getConnection: Connection
}

// Concrete implementation for production/dev environment for example
trait ProductionDbConnectionImpl extends DbConnection {
  def getConnection: Connection = ???
}

// Common code that uses that DB connection and needs to be tested.
trait DbConsumer {
  this: DbConnection =>
  def runDb(sql: String): Unit = {
    getConnection.prepareStatement(sql).execute()
  }
}

...

// Somewhere in production code when you set everything up in init or main you
// pick concrete db provider
val prodDbConsumer = new DbConsumer with ProductionDbConnectionImpl
prodDbConsumer.runDb("select * from sometable")

...

// Somewhere in test code you mock or stub DB connection ...
val testDbConsumer = new DbConsumer with DbConnection { def getConnection = ??? }
testDbConsumer.runDb("select * from sometable")
If you have to use a singleton/Scala object, you can have a lazy val or some init(): Unit method that sets the connection up.
Another approach would be to use some sort of injector. For example look at Lift code:
package net.liftweb.http
/**
* A base trait for a Factory. A Factory is both an Injector and
* a collection of FactorMaker instances. The FactoryMaker instances auto-register
* with the Injector. This provides both concrete Maker/Vender functionality as
* well as Injector functionality.
*/
trait Factory extends SimpleInjector
Then somewhere in your code you use this vendor like this:
val identifier = new FactoryMaker[MongoIdentifier](DefaultMongoIdentifier) {}
And then in places where you actually have to get access to DB:
identifier.vend
You can supply an alternative provider in tests by surrounding your code with:
identifier.doWith(mongoId) { <your test code> }
which can be conveniently used with specs2 Around context for example:
implicit val dbContext = new Around {
  def around[T: AsResult](t: => T): Result = {
    val mongoId = new MongoIdentifier {
      def jndiName: String = dbName
    }
    identifier.doWith(mongoId) {
      AsResult(t)
    }
  }
}
It's pretty cool because it's implemented in Scala without any special bytecode or JVM hacks.
If you think the first 2 options are too complicated and you have a small app, you can use a properties file / cmd args to tell you whether you are running in test or production mode. Again, the idea comes from Lift :). You can easily implement it yourself, but here is how you can do it with Lift Props:
// your generic DB code:
val jdbcUrl: String = Props.get("jdbc.url", "jdbc:postgresql:database")
You can have 2 props files:

production.default.props:
jdbc.url=jdbc:postgresql:database

test.default.props:
jdbc.url=jdbc:h2
Lift will automatically detect the run mode (Props.mode) and pick the right props file to read. You can set the run mode with JVM cmd args.
So in this case you can either connect to an in-memory DB, or just read the run mode and set your connection up in code accordingly (mock, stub, uninitialized, etc).
Use the regular IoC pattern: pass dependencies via constructor arguments to the class, and don't use an object. This gets inconvenient quickly unless you use a special dependency injection framework; a minimal sketch follows below.
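Here is that constructor-injection style in a nutshell, sketched in Kotlin (the main language of this page; the Scala translation is mechanical, and all names are illustrative):
import java.sql.Connection

// The repository receives its connection instead of inheriting it from a
// trait that hard-wires connection(someDB), so tests can substitute a double.
class ThingRepository(private val connection: Connection) {
    fun countThings(): Int {
        val rs = connection.prepareStatement("SELECT COUNT(*) FROM things").executeQuery()
        rs.next()
        return rs.getInt(1)
    }
}

// Production wiring, done once at the edge of the application:
//   val repository = ThingRepository(realConnection)
// Test wiring, with a stub or mock (e.g. Mockito):
//   val repository = ThingRepository(mock(Connection::class.java))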
Some suggestions:
Use object for something that can't have an alternative implementation, and only if that single implementation will work in all environments. Use object for constants and pure FP non-side-effecting code. Use singletons for wiring things up at the last moment, like a class with main, not somewhere deep in the code where many components depend on it, unless it has no side effects or it uses something like stackable/injectable vendor providers (see Lift).
Conclusion:
You can't mock an object or override its implementation. You need to design your code to be testable, and some of the options for that are listed above. It's good practice to make your code flexible, with easily composable parts, not only for the purposes of testing but also for reusability and maintainability.

EJB repository testing with OpenEJB - how to rollback changes

I'm trying to test my EJB-based repositories using OpenEJB. Every time a new unit test is run, I'd like to have my DB in an "initial" state. After the test, all changes should be rolled back (no matter whether the test succeeded or not). How do I accomplish this in a simple way? I tried using UserTransaction, beginning it when the test starts and rolling back the changes when it finishes (as you can see below). I don't know why, but with this code all changes in the DB (which were done during the unit test) remain even after the rollback line has been executed.
As I wrote, I'd like to accomplish this in the simplest way, without any external DB schema and so on.
Thanks in advance for any hints!
Piotr
public class MyRepositoryTest {

    private Context initialContext;
    private UserTransaction tx;
    private MyRepository repository; // class under test

    @Before
    public void setUp() throws Exception {
        this.initialContext = OpenEjbContextFactory.getInitialContext();
        this.repository = (MyRepository) initialContext.lookup(
                "MyRepositoryLocal");

        TransactionManager tm = (TransactionManager) initialContext.lookup(
                "java:comp/TransactionManager");
        tx = new CoreUserTransaction(tm);
        tx.begin();
    }

    @After
    public void tearDown() throws Exception {
        tx.rollback();
        this.initialContext = null;
    }

    @Test
    public void test() throws Exception {
        // do some test stuff
    }
}
There's an example called 'transaction-rollback' in the examples zip for 3.1.4.
Check that out, as it has several ways to roll back in a unit test. One of the techniques includes a trick to get a new in-memory database for each test.
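To illustrate that last trick outside of OpenEJB: if each test opens a uniquely named in-memory database, there is nothing to roll back at all, because no state can survive into the next test. A hedged sketch, in Kotlin to match the main answer on this page (assumes the H2 driver on the test classpath; the table is illustrative):
import java.sql.Connection
import java.sql.DriverManager
import java.util.UUID
import org.junit.After
import org.junit.Before
import org.junit.Test
import kotlin.test.assertEquals

class FreshDatabasePerTest {
    private lateinit var connection: Connection

    @Before fun createDatabase() {
        // the random UUID in the URL gives every test a brand-new database
        connection = DriverManager.getConnection("jdbc:h2:mem:test-${UUID.randomUUID()}")
        connection.createStatement().execute("CREATE TABLE accounts (id INT PRIMARY KEY)")
    }

    @After fun dropDatabase() {
        // closing the last connection lets H2 discard the in-memory database
        connection.close()
    }

    @Test fun startsEmpty() {
        val rs = connection.createStatement().executeQuery("SELECT COUNT(*) FROM accounts")
        rs.next()
        assertEquals(0, rs.getInt(1))
    }
}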