EJB repository testing with OpenEJB - how to roll back changes

I'm trying to test my EJB-based repositories using OpenEJB. Every time a new unit test is run, I'd like my DB to be in an "initial" state; after the test, all changes should be rolled back (no matter whether the test succeeded or not). How can I accomplish this in a simple way? I tried using UserTransaction - beginning it when the test starts and rolling back the changes when it finishes (as you can see below). I don't know why, but with this code all changes made to the DB during the unit test are still there after the rollback line has executed.
As I wrote, I'd like to accomplish this in the simplest way possible, without any external DB schema and so on.
Thanks in advance for any hints!
Piotr
public class MyRepositoryTest {

    private Context initialContext;
    private UserTransaction tx;
    private MyRepository repository; // class under test

    @Before
    public void setUp() throws Exception {
        this.initialContext = OpenEjbContextFactory.getInitialContext();
        this.repository = (MyRepository) initialContext.lookup(
                "MyRepositoryLocal");
        TransactionManager tm = (TransactionManager) initialContext.lookup(
                "java:comp/TransactionManager");
        tx = new CoreUserTransaction(tm);
        tx.begin();
    }

    @After
    public void tearDown() throws Exception {
        tx.rollback();
        this.initialContext = null;
    }

    @Test
    public void test() throws Exception {
        // do some test stuff
    }
}

There's an example called 'transaction-rollback' in the examples zip for 3.1.4.
Check it out, as it shows several ways to roll back in a unit test. One of the techniques includes a trick to get a new in-memory database for each test.
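The gist of the in-memory trick is to point the datasource at a uniquely named HSQLDB in-memory database for each test, so every test starts from a blank, freshly created schema instead of relying on rollback. A rough sketch of the idea, assuming the standard OpenEJB 3.x local context factory and an illustrative resource id testDs:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

@Before
public void setUp() throws Exception {
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY,
            "org.apache.openejb.client.LocalInitialContextFactory");
    // declare a DataSource resource; a unique JdbcUrl per test means
    // every test sees a brand-new, empty in-memory database
    p.put("testDs", "new://Resource?type=DataSource");
    p.put("testDs.JdbcDriver", "org.hsqldb.jdbcDriver");
    p.put("testDs.JdbcUrl", "jdbc:hsqldb:mem:db" + System.nanoTime());
    this.initialContext = new InitialContext(p);
    this.repository = (MyRepository) initialContext.lookup("MyRepositoryLocal");
}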


Parallel test with shared mock

I'm writing a unit test class (using TestNG) that has mocked member variables (using Mockito) and runs its tests in parallel. I initially set up the expected mock in a @BeforeClass method, and in each test case I break something by adding a Mockito.when for each exceptional case.
What I'm seeing (unsurprisingly) is that these tests aren't independent; the Mockito.when in one test case affects the others. I realized that I should set up the mocks before each test instead, so I changed the @BeforeClass to @BeforeMethod. I still didn't expect these to pass consistently, as the tests are all still operating on the same shared mock object at the same time. However, all the tests started passing consistently. My question is "why"? Will this eventually fail? No matter what I do (Thread.sleep, etc.) I can't reproduce a failure.
Is using @BeforeMethod enough to make these tests independent? If so, can anyone explain why?
Example code below:
public class ExampleTest {

    @Mock
    private List<String> list;

    @BeforeClass // Changing to @BeforeMethod works for some reason
    public void setup() throws NoSuchComponentException, ADPRuntimeException {
        MockitoAnnotations.initMocks(this);
        Mockito.when(list.get(0)).thenReturn("normal");
    }

    @Test
    public void testNormalCase() throws InterruptedException {
        assertEquals(list.get(0), "normal"); // Fails with expected [normal] but found [exceptional]
    }

    @Test
    public void testExceptionalCase() throws InterruptedException {
        Mockito.when(list.get(0)).thenReturn("exceptional");
        assertEquals(list.get(0), "exceptional");
    }
}
The problem here is that TestNG creates one instance of your test class ExampleTest, and this instance is used by both of your @Test methods.
So when you used @BeforeClass, you would get random failures in testNormalCase() if testExceptionalCase() ran first and altered the state of your test class.
When you changed the annotation to @BeforeMethod, the setup was executed right before every @Test method.
So the setup would fix the state for testNormalCase(), which is why it would pass; and since testExceptionalCase() was internally altering the state using Mockito.when() and then running assertions, it would pass all the time as well.
But there's one scenario wherein your setup will still fail, viz., when you use the parallel="methods" attribute in the <suite> tag of your TestNG suite XML file, i.e., when you configure TestNG to run every @Test method in parallel.
In that case, the Mockito.when() within testExceptionalCase() will affect the shared state [since you are using this in a shared manner amongst all your @Test methods], causing testNormalCase() to fail randomly.
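For reference, this is roughly what such a suite file looks like (the suite and test names here are arbitrary):

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="ParallelSuite" parallel="methods" thread-count="2">
  <test name="MockTests">
    <classes>
      <class name="ExampleTest"/>
    </classes>
  </test>
</suite>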
To fix this, I would suggest that you do one of the following:
Don't share this between your @Test methods; instead, house all the data members of your test class in a separate POJO, and mock that POJO rather than this.
Use a ThreadLocal to store the state that is being mocked by Mockito.when(), and then run assertions on the ThreadLocal from within your @Test methods (see the sketch below).
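A minimal sketch of the ThreadLocal idea, assuming TestNG and Mockito on the classpath (the class and field names are mine): each thread lazily creates its own mock, and a @BeforeMethod restores the default stubbing, so re-stubbing in one @Test can never leak into another test running in parallel.

import static org.testng.Assert.assertEquals;

import java.util.List;
import org.mockito.Mockito;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ThreadLocalMockTest {

    // one mock per test thread, so parallel methods never share state
    private final ThreadLocal<List<String>> list = ThreadLocal.withInitial(() -> {
        @SuppressWarnings("unchecked")
        List<String> mock = (List<String>) Mockito.mock(List.class);
        return mock;
    });

    @BeforeMethod
    public void stubDefaults() {
        // restore the default stubbing for whichever thread runs the next test
        Mockito.when(list.get().get(0)).thenReturn("normal");
    }

    @Test
    public void testNormalCase() {
        assertEquals(list.get().get(0), "normal");
    }

    @Test
    public void testExceptionalCase() {
        // this re-stubbing only touches the current thread's mock
        Mockito.when(list.get().get(0)).thenReturn("exceptional");
        assertEquals(list.get().get(0), "exceptional");
    }
}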

Mvx.Resolve fails in unit tests

I'm currently trying to write unit tests for an Android/iOS application written in Xamarin using MvvmCross. I've followed the instructions in the wiki and they work well up to the point where a service tries to change the ViewModel this way:
var viewDispatcher = Mvx.Resolve<IMvxViewDispatcher>();
viewDispatcher?.ShowViewModel(
    new MvxViewModelRequest(typeof(HomeViewModel), null, null, MvxRequestedBy.Unknown));
The tests fail at the first line, the Mvx.Resolve<IMvxViewDispatcher>() call. I assume this comes down to how the interfaces are registered in the mock IoC container:
this.mockDispatcher = new MockDispatcher();
this.Ioc.RegisterSingleton<IMvxViewDispatcher>(this.mockDispatcher);
this.Ioc.RegisterSingleton<IMvxMainThreadDispatcher>(this.mockDispatcher);
so Mvx cannot resolve them when called this way. Can this code be tested, or is there any other possibility to change the ViewModel from the service?
I think your AdditionalSetup never gets called. If you use NUnit, you have to add the SetUp attribute to a setup method and call the Setup() of MvxIoCSupportingTest from it; with other test frameworks, use the respective attribute.
public abstract class MvxTestBase : MvxIoCSupportingTest
{
    protected MockDispatcher mockDispatcher;

    protected override void AdditionalSetup()
    {
        this.mockDispatcher = new MockDispatcher();
        this.Ioc.RegisterSingleton<IMvxViewDispatcher>(this.mockDispatcher);
        this.Ioc.RegisterSingleton<IMvxMainThreadDispatcher>(this.mockDispatcher);
    }

    [SetUp]
    public virtual void SetupTest()
    {
        Setup();
    }
}
Or you can call Setup() in each test, as shown here: https://mvvmcross.com/docs/testing#section-test-class-declaration-and-setup

How do I manage unit test resources in Kotlin, such as starting/stopping a database connection or an embedded elasticsearch server?

In my Kotlin JUnit tests, I want to start/stop embedded servers and use them within my tests.
I tried using the JUnit @Before annotation on a method in my test class and it works fine, but it isn't the right behaviour since it runs before every test case instead of just once.
Therefore I want to use the @BeforeClass annotation on a method, but adding it to a method results in an error saying it must be on a static method, and Kotlin doesn't appear to have static methods. The same applies to static variables, and I need to keep a reference to the embedded server around for use in the test cases.
So how do I create this embedded database just once for all of my test cases?
class MyTest {
    @Before fun setup() {
        // works in that it opens the database connection, but is wrong
        // since this is per test case instead of being shared for all
    }

    @BeforeClass fun setupClass() {
        // what I want to do instead, but results in error because
        // this isn't a static method, and static keyword doesn't exist
    }

    var referenceToServer: ServerType // wrong because it is not static either

    ...
}
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the answers to commonly asked Kotlin topics are present in SO.
Your unit test class usually needs a few things to manage a shared resource for a group of test methods, and in Kotlin you can use @BeforeClass and @AfterClass not in the test class, but rather within its companion object, along with the @JvmStatic annotation.
The structure of a test class would look like:
class MyTestClass {
    companion object {
        init {
            // things that may need to be set up before companion class member variables are instantiated
        }

        // variables you initialize for the class just once:
        val someClassVar = initializer()

        // variables you initialize for the class later in the @BeforeClass method:
        lateinit var someClassLateVar: SomeResource

        @BeforeClass @JvmStatic fun setup() {
            // things to execute once and keep around for the class
        }

        @AfterClass @JvmStatic fun teardown() {
            // clean up after this class, leave nothing dirty behind
        }
    }

    // variables you initialize per instance of the test class:
    val someInstanceVar = initializer()

    // variables you initialize per test case later in your @Before methods:
    lateinit var someInstanceLateVar: MyType

    @Before fun prepareTest() {
        // things to do before each test
    }

    @After fun cleanupTest() {
        // things to do after each test
    }

    @Test fun testSomething() {
        // an actual test case
    }

    @Test fun testSomethingElse() {
        // another test case
    }

    // ...more test cases
}
Given the above, you should read about:
companion objects - similar to the Class object in Java, but a singleton per class that is not static
@JvmStatic - an annotation that turns a companion object method into a static method on the outer class for Java interop
lateinit - allows a var property to be initialized later when you have a well-defined lifecycle
Delegates.notNull() - can be used instead of lateinit for a property that should be set at least once before being read.
Here are fuller examples of test classes for Kotlin that manage embedded resources.
The first is copied and modified from the Solr-Undertow tests; before the test cases run, it configures and starts a Solr-Undertow server. After the tests run, it cleans up any temporary files created by the tests. It also ensures environment variables and system properties are correct before the tests run. Between test cases it unloads any temporarily loaded Solr cores. The test:
class TestServerWithPlugin {
    companion object {
        val workingDir = Paths.get("test-data/solr-standalone").toAbsolutePath()
        val coreWithPluginDir = workingDir.resolve("plugin-test/collection1")

        lateinit var server: Server

        @BeforeClass @JvmStatic fun setup() {
            assertTrue(coreWithPluginDir.exists(), "test core w/plugin does not exist $coreWithPluginDir")

            // make sure no system properties are set that could interfere with test
            resetEnvProxy()
            cleanSysProps()
            routeJbossLoggingToSlf4j()
            cleanFiles()

            val config = mapOf(...)
            val configLoader = ServerConfigFromOverridesAndReference(workingDir, config) verifiedBy { loader ->
                ...
            }

            assertNotNull(System.getProperty("solr.solr.home"))

            server = Server(configLoader)
            val (serverStarted, message) = server.run()
            if (!serverStarted) {
                fail("Server not started: '$message'")
            }
        }

        @AfterClass @JvmStatic fun teardown() {
            server.shutdown()
            cleanFiles()
            resetEnvProxy()
            cleanSysProps()
        }

        private fun cleanSysProps() { ... }

        private fun cleanFiles() {
            // don't leave any test files behind
            coreWithPluginDir.resolve("data").deleteRecursively()
            Files.deleteIfExists(coreWithPluginDir.resolve("core.properties"))
            Files.deleteIfExists(coreWithPluginDir.resolve("core.properties.unloaded"))
        }
    }

    val adminClient: SolrClient = HttpSolrClient("http://localhost:8983/solr/")

    @Before fun prepareTest() {
        // anything before each test?
    }

    @After fun cleanupTest() {
        // make sure test cores do not bleed over between test cases
        unloadCoreIfExists("tempCollection1")
        unloadCoreIfExists("tempCollection2")
        unloadCoreIfExists("tempCollection3")
    }

    private fun unloadCoreIfExists(name: String) { ... }

    @Test
    fun testServerLoadsPlugin() {
        println("Loading core 'withplugin' from dir ${coreWithPluginDir.toString()}")
        val response = CoreAdminRequest.createCore("tempCollection1", coreWithPluginDir.toString(), adminClient)
        assertEquals(0, response.status)
    }

    // ... other test cases
}
And another, starting AWS DynamoDB Local as an embedded database (copied and modified slightly from Running AWS DynamoDB-local embedded). This test must hack the java.library.path before anything else happens, or local DynamoDB (which uses sqlite with binary libraries) won't run. Then it starts a server shared by all tests in the class, and cleans up temporary data between tests. The test:
class TestAccountManager {
    companion object {
        init {
            // we need to control the "java.library.path" or sqlite cannot find its libraries
            val dynLibPath = File("./src/test/dynlib/").absoluteFile
            System.setProperty("java.library.path", dynLibPath.toString())

            // TEST HACK: if we kill this value in the System classloader, it will be
            // recreated on next access allowing java.library.path to be reset
            val fieldSysPath = ClassLoader::class.java.getDeclaredField("sys_paths")
            fieldSysPath.setAccessible(true)
            fieldSysPath.set(null, null)

            // ensure logging always goes through Slf4j
            System.setProperty("org.eclipse.jetty.util.log.class", "org.eclipse.jetty.util.log.Slf4jLog")
        }

        private val localDbPort = 19444

        private lateinit var localDb: DynamoDBProxyServer
        private lateinit var dbClient: AmazonDynamoDBClient
        private lateinit var dynamo: DynamoDB

        @BeforeClass @JvmStatic fun setup() {
            // do not use ServerRunner, it is evil and doesn't set the port correctly, also
            // it resets logging to be off.
            localDb = DynamoDBProxyServer(localDbPort, LocalDynamoDBServerHandler(
                    LocalDynamoDBRequestHandler(0, true, null, true, true), null)
            )
            localDb.start()

            // fake credentials are required even though ignored
            val auth = BasicAWSCredentials("fakeKey", "fakeSecret")
            dbClient = AmazonDynamoDBClient(auth) initializedWith {
                signerRegionOverride = "us-east-1"
                setEndpoint("http://localhost:$localDbPort")
            }
            dynamo = DynamoDB(dbClient)

            // create the tables once
            AccountManagerSchema.createTables(dbClient)

            // for debugging reference
            dynamo.listTables().forEach { table ->
                println(table.tableName)
            }
        }

        @AfterClass @JvmStatic fun teardown() {
            dbClient.shutdown()
            localDb.stop()
        }
    }

    val jsonMapper = jacksonObjectMapper()
    val dynamoMapper: DynamoDBMapper = DynamoDBMapper(dbClient)

    @Before fun prepareTest() {
        // insert commonly used test data
        setupStaticBillingData(dbClient)
    }

    @After fun cleanupTest() {
        // delete anything that shouldn't survive any test case
        deleteAllInTable<Account>()
        deleteAllInTable<Organization>()
        deleteAllInTable<Billing>()
    }

    private inline fun <reified T: Any> deleteAllInTable() { ... }

    @Test fun testAccountJsonRoundTrip() {
        val acct = Account("123", ...)
        dynamoMapper.save(acct)

        val item = dynamo.getTable("Accounts").getItem("id", "123")
        val acctReadJson = jsonMapper.readValue<Account>(item.toJSON())
        assertEquals(acct, acctReadJson)
    }

    // ...more test cases
}
NOTE: some parts of the examples are abbreviated with ...
Managing resources with before/after callbacks in tests obviously has its pros:
Tests are "atomic". A test executes as a whole, with all its callbacks: you won't forget to fire up a dependency service before the tests and shut it down after they're done. If done properly, the callbacks will work in any environment.
Tests are self-contained. There is no external data or setup phase; everything is contained within a few test classes.
It has some cons too. An important one is that it pollutes the code and makes the code violate the single responsibility principle. Tests now not only test something, but also perform heavyweight initialization and resource management. That can be OK in some cases (like configuring an ObjectMapper), but modifying java.library.path or spawning other processes (or in-process embedded databases) is not so innocent.
Why not treat those services as dependencies of your test, eligible for "injection", as described by 12factor.net?
This way you start and initialize the dependency services somewhere outside of the test code.
Nowadays virtualization and containers are almost everywhere, and most developers' machines are able to run Docker. And most applications have a dockerized version: Elasticsearch, DynamoDB, PostgreSQL and so on. Docker is a perfect solution for external services that your tests need (see the compose sketch after this list):
It can be a script that is run manually by a developer every time she wants to execute the tests.
It can be a task run by the build tool (e.g. Gradle has an awesome dependsOn and finalizedBy DSL for defining dependencies). A task, of course, can execute the same script that the developer runs manually, using shell-outs / process execs.
It can be a task run by the IDE before test execution. Again, it can use the same script.
Most CI / CD providers have a notion of "service" - an external dependency (process) that runs in parallel to your build and can be accessed via its usual SDK / connector / API: GitLab, Travis, Bitbucket, AppVeyor, Semaphore, …
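As an illustration, a docker-compose file along these lines (image tags are only examples) declares the test dependencies once, and the developer, the build tool, the IDE, and the CI service can all bring them up the same way with "docker compose up -d":

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  postgres:
    image: postgres:14
    environment:
      - POSTGRES_PASSWORD=test
    ports:
      - "5432:5432"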
This approach:
Frees your test code from initialization logic. Your tests will only test, and do nothing more.
Decouples code and data. Adding a new test case can now be done by adding new data into the dependency services with their native toolsets, i.e. for SQL databases you'll use SQL, and for Amazon DynamoDB you'll use the CLI to create tables and put items.
Is closer to production code, where you obviously do not start those services when your "main" application starts.
Of course, it has its flaws (basically, the statements I started from):
Tests are no longer "atomic". The dependency service must be started somehow prior to test execution, and the way it is started may differ between environments: developer's machine vs. CI, IDE vs. build tool CLI.
Tests are not self-contained. Your seed data may now even be packed inside an image, so changing it may require rebuilding a different project.

What should I annotate when I unit test the GPS function with Robolectric?

I am referring to the following site: http://googletesting.blogspot.ca/2010/12/test-sizes.html . There it is mentioned that we should annotate a test with @LargeTest if the test method accesses the network. I am using Robolectric for unit testing.
My method uses the shadowLocationManager to simulate the GPS location, and I am not sure what I should annotate my GPSTest with. Should I go for @Test or for @LargeTest? I just started learning unit testing with Robolectric; I have been using the @Test annotation for all my tests so far and haven't faced a problem. Can someone please suggest the proper annotation to use with Robolectric?
Edit: The @SmallTest, @MediumTest and @LargeTest annotations failed in my GPS test, and @Test passed. Do these annotations not work with Robolectric? If so, how can I mark my tests with different annotations?
If you want to have a look at my test method, please look at the following code:
@Before
public void setUp() {
    mainActivity = new Settings();
    mainActivity = Robolectric.buildActivity(Settings.class).create().get();
}

@Test
public void shouldReturnTheLatestLocation() {
    LocationManager locationManager = (LocationManager)
            Robolectric.application.getSystemService(Context.LOCATION_SERVICE);
    ShadowLocationManager shadowLocationManager = Robolectric.shadowOf(locationManager);

    Location expectedLocation = location(LocationManager.NETWORK_PROVIDER, 12.0, 20.0);
    shadowLocationManager.simulateLocation(expectedLocation);
    System.out.print("expected location is " + expectedLocation.getLatitude() + expectedLocation.getLongitude());

    Location actualLocation = mainActivity.latestLocation();
    System.out.print("actual location is " + actualLocation.getLatitude() + actualLocation.getLongitude());
    assertEquals(expectedLocation, actualLocation);
}
Bottom line: (I think I can expect an upvote now... feeling proud :D :P)
You're right, those annotations are not for Robolectric.
They are useful when using the Android InstrumentationTestRunner, which runs tests on a device.
Robolectric runs tests on the JVM - a different beast altogether.
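In other words, a Robolectric test is just an ordinary JUnit test. A minimal sketch (the class name is mine, assuming the Robolectric 2.x-era API used in the question):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;

// Robolectric executes tests on the local JVM, so plain JUnit's @Test is all
// you need; @SmallTest/@MediumTest/@LargeTest are metadata for Android's
// on-device InstrumentationTestRunner and have no effect here.
@RunWith(RobolectricTestRunner.class)
public class GpsLocationTest {

    @Test
    public void shouldReturnTheLatestLocation() {
        // ... the shadowLocationManager-based body from the question ...
    }
}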

MbUnit and Selenium

Can anyone tell me how to get MbUnit to run more than one test at a time without it setting up and tearing down after every test?
Currently I'm using Selenium for UI testing and need to run the tests consecutively due to a login page.
Thanks in advance,
cb
Are you looking for the FixtureSetUp/FixtureTearDown attributes (they used to be called TestFixtureSetUp)? They are applied at class level, meaning the setup runs once for all the tests in one test class.
The SetUp/TearDown attributes are applied at method level.
MbUnit also supports test assembly setup and teardown. Here is a link for this.
[assembly: AssemblyCleanUp(typeof(AssemblyCleaner))]

...

public class AssemblyCleaner
{
    [SetUp]
    public static void SetUp()
    {
        Console.WriteLine("Setting up {0}", typeof(AssemblyCleaner).Assembly.FullName);
    }

    [TearDown]
    public static void TearDown()
    {
        Console.WriteLine("Cleaning up {0}", typeof(AssemblyCleaner).Assembly.FullName);
    }
}