To execute automated UI tests, I trigger tests on an external cloud service which requires the upload of our test suite (for the purpose of this question please consider their approach a given).
I still want this process to be encapsulated into a JUnit runner to be consistent with runs utilising different cloud services or local execution. I execute my tests with Maven
mvn clean install -Dtest=TestRunner -Dproperties=/path/to/settings.file
and I want this flow to be consistent no matter which test provider is used.
The workaround I came up with is to trigger the tests like this on my local machine:
@Override
public void run(RunNotifier notifier) {
    if (someCondition) {
        new DelegateRunner().run(notifier);
    } else {
        super.run(notifier);
    }
}
The DelegateRunner then calls the third-party service which triggers the tests on the cloud. How can I map the results I receive from this service (I can query their API) back to my local JUnit execution?
The class RunNotifier offers methods like fireTestFinished or fireTestFailure but I'm not sure how to build the objects (Result, Description, Failure) these methods take as parameters. I suspect I need to make use of test listeners but I can't figure out the details.
In a broader sense, what are my options to create JUnit test results when the actual tests are running on a remote machine or not even being executed as JUnit tests? Is this a use-case someone has encountered before? It might be slightly exotic, but I don't think I'm the first either.
For a start, I just want to provide a binary result - tests passed or at least one test failed - in a way that doesn't break any JUnit integrations (like the Maven surefire plugin).
Right now, I get:
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 501.287 sec
and
No tests were executed! (Set -DfailIfNoTests=false to ignore this error.)
How can I fail the build in case there is a test failure and pass it otherwise (with the number of tests reported as 1)? I can think of a few hacky ways but I'm sure there is a proper one.
At its most basic, with a single test result, the DelegateRunner could be something like this:
public class DelegateRunner extends Runner {

    private Description testDescription = Description
            .createTestDescription("groupName", "testName");

    public DelegateRunner(Class<?> testClass) {
    }

    @Override
    public Description getDescription() {
        return testDescription;
    }

    @Override
    public void run(RunNotifier notifier) {
        notifier.fireTestStarted(testDescription);
        // ... trigger remote test ...
        if (passed)
            notifier.fireTestFinished(testDescription);
        else
            notifier.fireTestFailure(new Failure(testDescription,
                    new AssertionError("Details of the failure")));
    }
}
Then both getDescription() and run() would need to be wrapped:
public class FrontRunner extends Runner {

    private Runner runner;

    public FrontRunner(Class<?> testClass) throws InitializationError {
        if (someCondition)
            runner = new DelegateRunner(testClass);
        else
            runner = new JUnit4(testClass);
    }

    @Override
    public Description getDescription() {
        return runner.getDescription();
    }

    @Override
    public void run(RunNotifier notifier) {
        runner.run(notifier);
    }
}
(Assuming someCondition can be known up front, and that it's just the default JUnit4 runner that's needed normally).
This comes through to the Maven build as expected:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running ...FrontRunnerTest
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec <<< FAILURE!
testName(groupName) Time elapsed: 0.015 sec <<< FAILURE!
java.lang.AssertionError: Details of the failure
at so.ownrunner.DelegateRunner.run(DelegateRunner.java:28)
at so.ownrunner.FrontRunner.run(FrontRunner.java:27)
at ...
Results :
Failed tests: testName(groupName): Details of the failure
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
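For completeness, the test class that produces the output above only needs to declare the runner. A minimal sketch (the test.provider property and its cloud value are illustrative assumptions for deriving someCondition, not something prescribed by JUnit or the provider):

import org.junit.runner.RunWith;

// Attach the delegating runner; when the cloud provider is selected there are
// no local @Test methods to run, for local execution the usual @Test methods would live here.
@RunWith(FrontRunner.class)
public class FrontRunnerTest {
}

Inside FrontRunner, someCondition could then be as simple as "cloud".equals(System.getProperty("test.provider")), or anything read from the settings file passed via -Dproperties.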
Then if a more structured response is needed, Description.addChild() can be used to nest the suites and/or tests, e.g.:
public class NestedDelegateRunner extends Runner {

    private Description suiteDescription = Description
            .createSuiteDescription("groupName");
    private Description test1Description = Description
            .createTestDescription("groupName", "test1");
    private Description test2Description = Description
            .createTestDescription("groupName", "test2");

    public NestedDelegateRunner(Class<?> testClass) {
        suiteDescription.addChild(test1Description);
        suiteDescription.addChild(test2Description);
    }

    @Override
    public Description getDescription() {
        return suiteDescription;
    }

    @Override
    public void run(RunNotifier notifier) {
        notifier.fireTestStarted(test1Description);
        notifier.fireTestStarted(test2Description);
        notifier.fireTestFinished(test1Description);
        notifier.fireTestFailure(new Failure(test2Description,
                new AssertionError("Details of the failure")));
    }
}
In fact, addChild() is not crucial, but without it the structure can be less obvious - e.g. Eclipse will just show the tests under "Unrooted tests".
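To map the remote results back in the general case, the same pattern can be driven by whatever the provider's API returns, building the Description objects from the queried results instead of hard-coding them. Below is a minimal sketch; CloudTestService, RemoteResult and their methods are hypothetical placeholders for the real provider SDK (only the org.junit.runner types are real JUnit API):

import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;

public class RemoteDelegateRunner extends Runner {

    private final Description suiteDescription;
    private final CloudTestService service = new CloudTestService(); // hypothetical provider client

    public RemoteDelegateRunner(Class<?> testClass) {
        suiteDescription = Description.createSuiteDescription(testClass.getName());
    }

    @Override
    public Description getDescription() {
        return suiteDescription;
    }

    @Override
    public void run(RunNotifier notifier) {
        // upload the suite, trigger the remote run and poll the provider's API until it completes
        List<RemoteResult> results = service.runAndAwaitResults();

        for (RemoteResult remote : results) {
            // one Description per remote test, so reports show the real test names
            Description test = Description.createTestDescription(
                    remote.getSuiteName(), remote.getTestName());
            suiteDescription.addChild(test);

            notifier.fireTestStarted(test);
            if (!remote.isPassed()) {
                notifier.fireTestFailure(new Failure(test,
                        new AssertionError(remote.getFailureMessage())));
            }
            notifier.fireTestFinished(test);
        }
    }
}

Because the children are only known after the remote run has finished, getDescription() initially returns an empty suite; as noted above, some IDEs will then show the tests as unrooted, but the fired notifications still reach Surefire and the totals come through.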
Related
In my Kotlin JUnit tests, I want to start/stop embedded servers and use them within my tests.
I tried using the JUnit @Before annotation on a method in my test class and it works fine, but it isn't the right behaviour since it runs before every test case instead of just once.
Therefore I want to use the @BeforeClass annotation on a method, but adding it to a method results in an error saying it must be on a static method. Kotlin doesn't appear to have static methods. And then the same applies for static variables, because I need to keep a reference to the embedded server around for use in the test cases.
So how do I create this embedded database just once for all of my test cases?
class MyTest {
    @Before fun setup() {
        // works in that it opens the database connection, but is wrong
        // since this is per test case instead of being shared for all
    }

    @BeforeClass fun setupClass() {
        // what I want to do instead, but results in error because
        // this isn't a static method, and static keyword doesn't exist
    }

    var referenceToServer: ServerType // wrong because is not static either

    ...
}
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the answers to commonly asked Kotlin topics are present in SO.
Your unit test class usually needs a few things to manage a shared resource for a group of test methods. And in Kotlin you can use @BeforeClass and @AfterClass not in the test class, but rather within its companion object along with the @JvmStatic annotation.
The structure of a test class would look like:
class MyTestClass {
    companion object {
        init {
            // things that may need to be setup before companion class member variables are instantiated
        }

        // variables you initialize for the class just once:
        val someClassVar = initializer()

        // variables you initialize for the class later in the @BeforeClass method:
        lateinit var someClassLateVar: SomeResource

        @BeforeClass @JvmStatic fun setup() {
            // things to execute once and keep around for the class
        }

        @AfterClass @JvmStatic fun teardown() {
            // clean up after this class, leave nothing dirty behind
        }
    }

    // variables you initialize per instance of the test class:
    val someInstanceVar = initializer()

    // variables you initialize per test case later in your @Before methods:
    lateinit var someInstanceLateVar: MyType

    @Before fun prepareTest() {
        // things to do before each test
    }

    @After fun cleanupTest() {
        // things to do after each test
    }

    @Test fun testSomething() {
        // an actual test case
    }

    @Test fun testSomethingElse() {
        // another test case
    }

    // ...more test cases
}
Given the above, you should read about:
companion objects - similar to the Class object in Java, but a singleton per class that is not static
@JvmStatic - an annotation that turns a companion object method into a static method on the outer class for Java interop
lateinit - allows a var property to be initialized later when you have a well defined lifecycle
Delegates.notNull() - can be used instead of lateinit for a property that should be set at least once before being read.
Here are fuller examples of test classes for Kotlin that manage embedded resources.
The first is copied and modified from Solr-Undertow tests, and before the test cases are run, it configures and starts a Solr-Undertow server. After the tests run, it cleans up any temporary files created by the tests. It also ensures environment variables and system properties are correct before the tests are run. Between test cases it unloads any temporarily loaded Solr cores. The test:
class TestServerWithPlugin {
companion object {
val workingDir = Paths.get("test-data/solr-standalone").toAbsolutePath()
val coreWithPluginDir = workingDir.resolve("plugin-test/collection1")
lateinit var server: Server
@BeforeClass @JvmStatic fun setup() {
assertTrue(coreWithPluginDir.exists(), "test core w/plugin does not exist $coreWithPluginDir")
// make sure no system properties are set that could interfere with test
resetEnvProxy()
cleanSysProps()
routeJbossLoggingToSlf4j()
cleanFiles()
val config = mapOf(...)
val configLoader = ServerConfigFromOverridesAndReference(workingDir, config) verifiedBy { loader ->
...
}
assertNotNull(System.getProperty("solr.solr.home"))
server = Server(configLoader)
val (serverStarted, message) = server.run()
if (!serverStarted) {
fail("Server not started: '$message'")
}
}
@AfterClass @JvmStatic fun teardown() {
server.shutdown()
cleanFiles()
resetEnvProxy()
cleanSysProps()
}
private fun cleanSysProps() { ... }
private fun cleanFiles() {
// don't leave any test files behind
coreWithPluginDir.resolve("data").deleteRecursively()
Files.deleteIfExists(coreWithPluginDir.resolve("core.properties"))
Files.deleteIfExists(coreWithPluginDir.resolve("core.properties.unloaded"))
}
}
val adminClient: SolrClient = HttpSolrClient("http://localhost:8983/solr/")
@Before fun prepareTest() {
// anything before each test?
}
@After fun cleanupTest() {
// make sure test cores do not bleed over between test cases
unloadCoreIfExists("tempCollection1")
unloadCoreIfExists("tempCollection2")
unloadCoreIfExists("tempCollection3")
}
private fun unloadCoreIfExists(name: String) { ... }
@Test
fun testServerLoadsPlugin() {
println("Loading core 'withplugin' from dir ${coreWithPluginDir.toString()}")
val response = CoreAdminRequest.createCore("tempCollection1", coreWithPluginDir.toString(), adminClient)
assertEquals(0, response.status)
}
// ... other test cases
}
And another, starting AWS DynamoDB Local as an embedded database (copied and modified slightly from Running AWS DynamoDB-local embedded). This test must hack the java.library.path before anything else happens or local DynamoDB (using sqlite with binary libraries) won't run. Then it starts a server to be shared by all of the test cases, and cleans up temporary data between tests. The test:
class TestAccountManager {
companion object {
init {
// we need to control the "java.library.path" or sqlite cannot find its libraries
val dynLibPath = File("./src/test/dynlib/").absoluteFile
System.setProperty("java.library.path", dynLibPath.toString());
// TEST HACK: if we kill this value in the System classloader, it will be
// recreated on next access allowing java.library.path to be reset
val fieldSysPath = ClassLoader::class.java.getDeclaredField("sys_paths")
fieldSysPath.setAccessible(true)
fieldSysPath.set(null, null)
// ensure logging always goes through Slf4j
System.setProperty("org.eclipse.jetty.util.log.class", "org.eclipse.jetty.util.log.Slf4jLog")
}
private val localDbPort = 19444
private lateinit var localDb: DynamoDBProxyServer
private lateinit var dbClient: AmazonDynamoDBClient
private lateinit var dynamo: DynamoDB
@BeforeClass @JvmStatic fun setup() {
// do not use ServerRunner, it is evil and doesn't set the port correctly, also
// it resets logging to be off.
localDb = DynamoDBProxyServer(localDbPort, LocalDynamoDBServerHandler(
LocalDynamoDBRequestHandler(0, true, null, true, true), null)
)
localDb.start()
// fake credentials are required even though ignored
val auth = BasicAWSCredentials("fakeKey", "fakeSecret")
dbClient = AmazonDynamoDBClient(auth) initializedWith {
signerRegionOverride = "us-east-1"
setEndpoint("http://localhost:$localDbPort")
}
dynamo = DynamoDB(dbClient)
// create the tables once
AccountManagerSchema.createTables(dbClient)
// for debugging reference
dynamo.listTables().forEach { table ->
println(table.tableName)
}
}
@AfterClass @JvmStatic fun teardown() {
dbClient.shutdown()
localDb.stop()
}
}
val jsonMapper = jacksonObjectMapper()
val dynamoMapper: DynamoDBMapper = DynamoDBMapper(dbClient)
@Before fun prepareTest() {
// insert commonly used test data
setupStaticBillingData(dbClient)
}
@After fun cleanupTest() {
// delete anything that shouldn't survive any test case
deleteAllInTable<Account>()
deleteAllInTable<Organization>()
deleteAllInTable<Billing>()
}
private inline fun <reified T: Any> deleteAllInTable() { ... }
@Test fun testAccountJsonRoundTrip() {
val acct = Account("123", ...)
dynamoMapper.save(acct)
val item = dynamo.getTable("Accounts").getItem("id", "123")
val acctReadJson = jsonMapper.readValue<Account>(item.toJSON())
assertEquals(acct, acctReadJson)
}
// ...more test cases
}
NOTE: some parts of the examples are abbreviated with ...
Managing resources with before/after callbacks in tests obviously has its pros:
Tests are "atomic". A test executes as a whole thing, with all the callbacks. One won't forget to fire up a dependency service before the tests and shut it down after they're done. If done properly, such callbacks will work in any environment.
Tests are self-contained. There is no external data or setup phases, everything is contained within a few test classes.
It has some cons too. One important one is that it pollutes the code and makes it violate the single responsibility principle. Tests now not only test something, but also perform heavyweight initialization and resource management. It can be OK in some cases (like configuring an ObjectMapper), but modifying java.library.path or spawning other processes (or in-process embedded databases) is not so innocent.
Why not treat those services as dependencies of your tests, eligible for "injection", as described by 12factor.net?
This way you start and initialize dependency services somewhere outside of the test code.
Nowadays virtualization and containers are almost everywhere and most developers' machines are able to run Docker. And most applications have a dockerized version: Elasticsearch, DynamoDB, PostgreSQL and so on. Docker is a perfect solution for external services that your tests need.
It can be a script that is run manually by a developer every time she wants to execute tests.
It can be a task run by the build tool (e.g. Gradle has awesome dependsOn and finalizedBy DSL for defining dependencies). A task, of course, can execute the same script that the developer runs manually, using shell-outs / process execs.
It can be a task run by IDE before test execution. Again, it can use the same script.
Most CI / CD providers have a notion of "service" — an external dependency (process) that runs in parallel to your build and can be accessed via its usual SDK / connector / API: Gitlab, Travis, Bitbucket, AppVeyor, Semaphore, …
This approach:
Frees your test code from initialization logic. Your tests will only test and do nothing more.
Decouples code and data. Adding a new test case can now be done by adding new data into the dependency services with their native toolset. E.g. for SQL databases you'll use SQL; for Amazon DynamoDB you'll use the CLI to create tables and put items.
Is closer to production code, where you obviously do not start those services when your "main" application starts.
Of course, it has its flaws (basically, the statements I started from):
Tests are no longer "atomic". The dependency service must be started somehow prior to test execution. The way it is started may differ between environments: developer's machine vs. CI, IDE vs. build tool CLI.
Tests are not self-contained. Now your seed data may even be packed inside an image, so changing it may require rebuilding a different project.
I use TestNG and jMock for my unit tests, but I have a problem with TestNG. It marks the test as passed when I expect a mock object method to be invoked and it is not!
public class SomeTestTest {

    Mockery mocker = new Mockery();
    SomeInterface someInterface = mocker.mock(SomeInterface.class);

    @Test
    public void testName() throws Exception {
        mocker.checking(new Expectations() {{
            oneOf(someInterface).someMethod();
        }});
    }
}
and this is the report I get
Custom suite
Total tests run: 1, Failures: 0, Skips: 0
You're missing a call to Mockery.assertIsSatisfied().
That call tells jMock when you expect all expectations to be satisfied. Otherwise it wouldn't know at which point in your code you want those to be verified.
That's also explained in the Getting Started article.
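For reference, a minimal sketch of the test with the verification added (SomeInterface is the interface from the question; the direct call to someMethod() merely stands in for exercising the real code under test):

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Test;

public class SomeTestTest {

    Mockery mocker = new Mockery();
    SomeInterface someInterface = mocker.mock(SomeInterface.class);

    @Test
    public void testName() throws Exception {
        mocker.checking(new Expectations() {{
            oneOf(someInterface).someMethod();
        }});

        // exercise the code under test so the expectation can actually be met
        someInterface.someMethod();

        // without this, the test passes even if someMethod() was never invoked
        mocker.assertIsSatisfied();
    }

    // alternatively, verify after every test method instead of inside each test
    @AfterMethod
    public void verifyExpectations() {
        mocker.assertIsSatisfied();
    }
}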
I am writing client-side components in a provided framework, and need to be able to unit test my components. The components are written using MVP (Model-View-Presenter) pattern, I want to use PEX to automatically generate unit tests for my presenters.
The following is the code of a presenter.
public partial class CompetitorPresenter : PresenterBase
{
    private readonly ICompetitorView _view;
    public IGlobalDataAccess GlobalDataAccess;
    public IGlobalUI Globals;
    public SystemClient Client;

    public bool DeleteRecord()
    {
        if (_view.CompetitorName != "Daniel")
            return false;
        if (Client.SystemName != "Ruby")
            return false;
        return true;
    }
}
The problem I am having is that the object SystemClient is provided by the framework, and I cannot use a factory class to create an instance of SystemClient. Therefore when I run PEX to automatically generate unit tests, I have to tell PEX to ignore SystemClient; the result is that the method DeleteRecord is not fully covered, as the line Client.SystemName != "Ruby" is not tested.
Since I have the mock object MSystemClient (created using Moles), I am wondering if somewhere in the configuration I could tell PEX to use MSystemClient, and let PEX automatically generate test cases to fully cover this method.
You are on the right track. If you cannot control where the instance of CompetitorPresenter.Client is created, you can define a mole for all instances:
MSystemClient.AllInstances.SystemNameGet = () => "SomeName";
Your unit test has to be run in a "hosted environment":
[TestMethod]
[HostType("Moles")]
public void TestMethod()
{
    MSystemClient.AllInstances.SystemNameGet = () => "SomeName";
    // Test code...
}
Quick background: I've been hunting down a Maven / Surefire test-running problem for days now, and I've narrowed it down to a small number of suspect tests. The behavior I'm seeing is insane. I start with mvn clean test: 250 tests run, 0 skipped. Now, I move the suspect test into src/test/java and try again: 146 tests run, 0 skipped! The output of Maven gives no clue that other tests aren't being run, even with the -X flag.
That brings me to my question: the reason I call the test 'suspect' is that the whole class is decorated with @Ignore, so I would imagine that including it in my test sources should have no effect at all. Then it occurred to me -- those classes have @BeforeClass/@AfterClass methods that manage a dummy Zookeeper server. It's resulted in wonky behavior before, which is why we have the tests @Ignored.
If JUnit is running the before/after code but ignoring the tests, I have no idea what might happen (but it'd probably be super bad). Is this happening? Is this supposed to happen? If so, how am I supposed to say "for reference, here's a test that should work but needs fixing" when it includes @BeforeClass / @AfterClass? Also of substantial interest: what the hell is this doing to Surefire / Maven, that it causes unrelated tests to fall off the face of the Earth?
If you have a test with the @Ignore annotation, then it is normal behaviour for the @BeforeClass & @AfterClass to get run, whether or not all of the tests are @Ignored.
If, however, the class has an @Ignore annotation, then the @BeforeClass & @AfterClass don't get run.
For Maven, if you don't want to run any tests in a particular class, then you have to exclude them in Surefire or Failsafe. Add this to the Maven configuration (see the Maven Surefire Plugin documentation):
<excludes>
    <exclude>**/FoobarTest.java</exclude>
</excludes>
Environment: JDK 1.6, surefire plugin 2.9, jUnit 4.8.1, Maven 3.0, 3.0.3, 2.2.1.
I created this test class:
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
@Ignore
public class IgnoreTest {

    @BeforeClass
    public static void beforeClass() {
        System.out.println("BEFORE CLASS");
    }

    @AfterClass
    public static void afterClass() {
        System.out.println("AFTER CLASS");
    }

    @Test
    public void test1() throws Exception {
        System.out.println("test1");
    }

    @Test
    public void test2() throws Exception {
        System.out.println("test2");
    }

    @Test
    public void test3() throws Exception {
        System.out.println("test3");
    }
}
Then mvn clean test prints this:
Running hu.palacsint.stackoverflow.q7535177.IgnoreTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.015 sec
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1
Works as you expected. If I remove the @Ignore and run mvn clean test again it prints this:
Running hu.palacsint.stackoverflow.q7535177.IgnoreTest
BEFORE CLASS
test2
test1
test3
AFTER CLASS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.045 sec
Results :
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
So, it works for me with three different Maven versions. No @BeforeClass/@AfterClass methods were run in @Ignored classes.
There is one (maybe more) situation when @BeforeClass/@AfterClass methods could run in an @Ignored test class: when your ignored class has a non-ignored subclass:
import org.junit.Test;
public class IgnoreSubTest extends IgnoreTest {

    @Test
    public void test4() throws Exception {
        System.out.println("test4 subclass");
    }
}
Results of mvn clean test:
Running hu.palacsint.stackoverflow.q7535177.IgnoreTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.047 sec
Running hu.palacsint.stackoverflow.q7535177.IgnoreSubTest
BEFORE CLASS
test4 subclass
test1
test2
test3
AFTER CLASS
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
Results :
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1
In this case the @BeforeClass and @AfterClass methods run because they are (inherited) methods of the IgnoreSubTest test class, which is not ignored.
I am trying to test my EJB-based repositories using OpenEJB. Every time a new unit test is run I'd like to have my DB in an "initial" state. After the test, all changes should be rolled back (no matter whether the test succeeded or not). How can I accomplish this in a simple way? I tried using UserTransaction - beginning it when the test starts and rolling back the changes when it finishes (as you can see below). I don't know why, but with this code all changes made to the DB during the unit test are still there after the rollback line has executed.
As I wrote, I'd like to accomplish it in the simplest way, without any external DB schema and so on.
Thanks in advance for any hints!
Piotr
public class MyRepositoryTest {

    private Context initialContext;
    private UserTransaction tx;
    private MyRepository repository; // class under test

    @Before
    public void setUp() throws Exception {
        this.initialContext = OpenEjbContextFactory.getInitialContext();
        this.repository = (MyRepository) initialContext.lookup(
                "MyRepositoryLocal");

        TransactionManager tm = (TransactionManager) initialContext.lookup(
                "java:comp/TransactionManager");
        tx = new CoreUserTransaction(tm);
        tx.begin();
    }

    @After
    public void tearDown() throws Exception {
        tx.rollback();
        this.initialContext = null;
    }

    @Test
    public void test() throws Exception {
        // do some test stuff
    }
}
There's an example called 'transaction-rollback' in the examples zip for 3.1.4.
Check that out as it shows several ways to roll back in a unit test. One of the techniques includes a trick to get a new in-memory database for each test.
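The usual shape of that trick is to declare the DataSource in the InitialContext properties and give it a unique in-memory JdbcUrl per test, so every test starts against an empty database. A rough sketch, not the exact example code - the resource name myDataSource is illustrative and has to match whatever your persistence unit expects, and the factory/driver class names are the standard OpenEJB embedded / HSQLDB setup:

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;

import org.junit.Before;

public class MyRepositoryFreshDbTest {

    private InitialContext initialContext;
    private MyRepository repository;

    @Before
    public void setUp() throws Exception {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.openejb.client.LocalInitialContextFactory");

        // declare the DataSource the EJBs use and point it at a brand new
        // in-memory HSQLDB for every test, so no state survives between tests
        p.put("myDataSource", "new://Resource?type=DataSource");
        p.put("myDataSource.JdbcDriver", "org.hsqldb.jdbcDriver");
        p.put("myDataSource.JdbcUrl", "jdbc:hsqldb:mem:test" + System.nanoTime());

        initialContext = new InitialContext(p);
        repository = (MyRepository) initialContext.lookup("MyRepositoryLocal");
    }
}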