I have a couple of DAO unit test classes that I want to run together using TestNG; however, TestNG tries to run them in parallel, which results in some rollbacks failing. While I would like my unit test classes to run sequentially, I also want to be able to specify a minimum time that TestNG must wait before it runs the next test. Is this achievable?
P.S. I understand that TestNG can be told to run all the tests in a test class in a single thread, and I am able to specify the sequence of method calls using groups anyway, so that part is not really the issue.
What about a hard dependency between the two tests? If you write:
@Test
public void test1() { ... }

@Test(dependsOnMethods = "test1", alwaysRun = true)
public void test2() { ... }
then test2 will always be run after test1.
Do not forget alwaysRun = true, otherwise if test1 fails, test2 will be skipped!
If you do not want to run your classes in parallel, you need to set the parallel attribute of your suite to false. By default it is false, so I would expect the tests to run sequentially unless you have changed something in the way you invoke them.
For adding a bit of delay between your classes, you can add your delay logic in a method annotated with @AfterClass. AFAIK TestNG does not have a way to specify that in the testng.xml or on the command line. There is a timeout attribute, but that is for timing out tests and is probably not what you are looking for.
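For example, something like this (the class name and the one-second pause are just placeholders) would add a gap after all tests of a class have finished:

import org.testng.annotations.AfterClass;

public class DaoTestBase {

    // Runs after all @Test methods in this class; pause before the next class starts.
    @AfterClass
    public void pauseBeforeNextClass() throws InterruptedException {
        Thread.sleep(1000); // assumed minimum wait, tune as needed
    }
}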
For adding a delay between your tests, i.e. between <test> tags in the XML, you can try implementing ITestListener's onFinish method and add your delay code there; it runs after every <test>. If a delay is required after every test case, implement the delay in IInvokedMethodListener's afterInvocation(), which runs after every test method. You would then need to register the listener when you invoke your suite.
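As a rough sketch of the listener approach (the listener name and the 500 ms pause are placeholders), something like this could be registered via the <listeners> element in testng.xml or the -listener command-line option:

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class DelayListener implements IInvokedMethodListener {

    private static final long DELAY_MILLIS = 500; // placeholder delay

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // nothing to do before a method runs
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        // only pause after actual @Test methods, not configuration methods
        if (method.isTestMethod()) {
            try {
                Thread.sleep(DELAY_MILLIS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}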
Hope it helps..
Following is what I used in some tests.
First, define utility methods like this:
// Make the thread sleep a while to reduce interference with subsequent operations on any shared resource.
private void delay(long milliseconds) throws InterruptedException {
    Thread.sleep(milliseconds);
}

private void delay() throws InterruptedException {
    delay(500);
}
Then call the method inside the test methods, at the end or at the beginning, e.g.:
@Test
public void testCopyViaTransfer() throws IOException, InterruptedException {
    copyViaTransfer(new File(sourcePath), new File(targetPath));
    delay();
}
I am trying to unit test a class in various scenarios. We use JUnit 4.
I have a setup method wherein I re-stub the mock to return an expected mock value.
I have 4 tests: test1 through test4. test1 and test2 work fine with the expected mocked value configured in the perTestSetup method.
test3 needs MockClass to throw an exception, so I configure that separately inside test3. test3 works fine, as the mock throws the exception as expected.
But when perTestSetup tries to re-stub the mock to return mockResult before running test4, it fails and throws the same runtime exception that was configured in test3. I also tried calling reset() before stubbing in perTestSetup(), but that fails in the same way.
What am I missing here?
@Before
public void perTestSetup() {
    when(MockClass.functionCall(...)).thenReturn(mockResult);
}

@Test
public void test1() {
}

@Test
public void test2() {
}

@Test
public void test3() {
    when(MockClass.functionCall(...)).thenThrow(new RuntimeException());
    ...
}

@Test
public void test4() {
}
Your perTestSetup() method isn't doing what you think it is doing. The @Before annotation means the test environment will run this method once, before doing any of the tests, rather than once per test. Before I finished reading your question, I was actually itching to advise you to rename this method to simply setup(), as that would be a more accurate description.
Options:
Change the annotation to @BeforeEach, which would then change the behaviour to do what you think it should currently be doing. However, this would be slightly inefficient, as in the later tests you would be defining behaviour and then immediately redefining it.
What do the parameters look like in your functionCall(...)? It may be possible to define two separate behaviours in your single @Before setup() method, i.e.
when(MockClass.functionCall(good values)).thenReturn(mockResult);
when(MockClass.functionCall(bad values)).thenThrow(new RuntimeException());
In each test, call functionCall() with the relevant values for that particular test (see the sketch below).
If the parameters in functionCall() do not readily accommodate the previous approach, consider creating two separate mocks of MockClass, something like
MockClass successfulMockClass = mock(MockClass.class);
when(successfulMockClass.functionCall(...)).thenReturn(mockResult);
MockClass unsuccessfulMockClass = mock(MockClass.class);
when(unsuccessfulMockClass.functionCall(...)).thenThrow(new RuntimeException());
In your tests, call on the relevant mocked object depending on what input you are testing against.
Without being able to see the details of your class, I suspect the second option is what I would go for. It may be worth trying all three to see which feels most intuitive for you, though.
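For what it's worth, here is a minimal sketch of that second option, assuming (purely for illustration) that functionCall takes a single String argument and returns a String, and that "good" and "bad" stand in for your real inputs:

import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Before;

public class MyClassTest {

    // MockClass and mockResult are taken from the question; the values here are placeholders.
    private MockClass mockClass;
    private final String mockResult = "expected";

    @Before
    public void setup() {
        mockClass = mock(MockClass.class);
        // One stubbing per kind of input; each test then calls functionCall with the input it needs.
        when(mockClass.functionCall(eq("good"))).thenReturn(mockResult);
        when(mockClass.functionCall(eq("bad"))).thenThrow(new RuntimeException());
    }
}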
I'm writing a unit test class (using TestNG) that has mocked member variables (using Mockito) and running the tests in parallel. I initially set up the expected mock in a @BeforeClass method, and in each test case I break something by creating a Mockito.when for each exceptional case.
What I'm seeing (unsurprisingly) is that these tests aren't independent; the Mockito.when in one test case affects the others. I realized that I could set up the mocks before each test instead, so I changed the @BeforeClass to @BeforeMethod. I still didn't expect these to pass consistently, as the tests are all still operating on the same shared mock object at the same time. However, all the tests started passing consistently. My question is "why"? Will this eventually fail? No matter what I do (Thread.sleep, etc.) I can't reproduce a failure.
Is using @BeforeMethod enough to make these tests independent? If so, can anyone explain why?
Example code below:
public class ExampleTest {

    @Mock
    private List<String> list;

    @BeforeClass // Changing to @BeforeMethod works for some reason
    public void setup() throws NoSuchComponentException, ADPRuntimeException {
        MockitoAnnotations.initMocks(this);
        Mockito.when(list.get(0)).thenReturn("normal");
    }

    @Test
    public void testNormalCase() throws InterruptedException {
        assertEquals(list.get(0), "normal"); // Fails with expected [normal] but found [exceptional]
    }

    @Test
    public void testExceptionalCase() throws InterruptedException {
        Mockito.when(list.get(0)).thenReturn("exceptional");
        assertEquals(list.get(0), "exceptional");
    }
}
The problem here is that TestNG creates one instance of your test class ExampleTest, and this is the instance that is shared by both of your @Test methods.
So when you used @BeforeClass, you would have random failures in testNormalCase() if testExceptionalCase() ran first and altered the state of your test class.
When you changed the annotation to @BeforeMethod, the setup was executed right before every @Test method.
So the setup would fix the state for testNormalCase(), which is why it would pass; and since testExceptionalCase() was internally altering the state using Mockito.when() and then running its assertions, it would pass all the time as well.
But there's one scenario wherein your setup will still fail, viz. when you use the parallel="methods" attribute in the <suite> tag of your TestNG suite XML file, i.e. when you configure TestNG to run every @Test method in parallel.
In that case, the Mockito.when() within testExceptionalCase() will affect the shared state [since you are sharing this amongst all your @Test methods], causing testNormalCase() to fail randomly.
To fix this, I would suggest one of the following:
Don't share this between your @Test methods; instead, house all the data members of your test class in a separate POJO, and mock that POJO rather than mocking this.
Use a ThreadLocal to store the state that is being stubbed by Mockito.when(), and then run assertions on the ThreadLocal from within your @Test methods.
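A minimal sketch of the ThreadLocal idea, reusing the List<String> mock from the question (the stubbed values are just placeholders): since TestNG runs a test method and its @BeforeMethod on the same worker thread, each thread sees its own mock, so stubbing in one @Test cannot leak into another running in parallel.

import static org.testng.Assert.assertEquals;

import java.util.List;

import org.mockito.Mockito;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ExampleTest {

    // Holds one mock per worker thread.
    private final ThreadLocal<List<String>> listHolder = new ThreadLocal<>();

    @BeforeMethod
    public void setup() {
        @SuppressWarnings("unchecked")
        List<String> mockList = Mockito.mock(List.class);
        Mockito.when(mockList.get(0)).thenReturn("normal");
        listHolder.set(mockList);
    }

    @Test
    public void testNormalCase() {
        assertEquals(listHolder.get().get(0), "normal");
    }

    @Test
    public void testExceptionalCase() {
        Mockito.when(listHolder.get().get(0)).thenReturn("exceptional");
        assertEquals(listHolder.get().get(0), "exceptional");
    }
}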
I am having an issue with tests sporadically failing when using NUnit 3 with parallel test running.
We have a number of tests that are currently structured as follows:
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    private IService _service;
    private IClient _client;

    [SetUp]
    public void SetUp()
    {
        _service = Substitute.For<IService>();
        _client = new Client(_service);
    }

    [Test]
    public async Task WhenScenario1()
    {
        _service.Apply(Arg.Any<int>()).Returns(1);
        var result = _client.DoTheThing();
        Assert.AreEqual(1, result);
    }

    [Test]
    public async Task WhenScenario2()
    {
        _service.Apply(Arg.Any<int>()).Returns(2);
        var result = _client.DoTheThing();
        Assert.AreEqual(2, result);
    }
}
Sometimes the tests fail because one of the substitutes is returning the value configured for the other test.
How should these tests be structured so that they run reliably under NUnit when executed in parallel?
You haven't shown any Parallelizable attributes in your example, so I assume you are using the attribute at a higher level, most likely on the assembly. Otherwise, no parallel execution would occur. Further, since you say the test cases are running in parallel, you have apparently specified ParallelScope.Children.
The two test cases shown in your fixture cannot run in parallel. You should bear in mind that the SetUp method runs for each of the tests. So each of your two tests sets the value of _service, which is part of the state of the single instance of CalculateShipFromStoreShippingCost, which is shared by both tests. That is why you are seeing the "wrong" substitute being returned at times.
It is not possible for two test cases to run reliably in parallel if they both change the state of the fixture. Note that it does not matter whether the assignment to _service takes place in the test method itself or in the SetUp method - both are executed as part of the test case. So, you have to either stop running these two cases in parallel or stop changing the state.
To stop running the tests in parallel, you simply add [NonParallelizable] to each test method. If you are not using the latest framework version, use [Parallelizable(ParallelScope.None)] instead. Your other tests will continue to run in parallel, but these two will not.
Alternatively, use ParallelScope.Fixture at the assembly level. This will cause fixtures to run in parallel by default, while the individual test cases within them each run sequentially. When using ParallelizableAttribute at the assembly level, it is sometimes best to take a more conservative approach, adding in more parallelism within some fixtures where it is useful.
An entirely different approach is to make your tests stateless. Eliminate the _service member and use a local value within the test method itself. Each of your tests would add two lines like...
var service = Substitute.For<IService>();
var client = new Client(service);
Judging from your example, I would imagine you are getting very little performance gain from running the two methods in parallel, so I would not use that last approach unless I saw a specific performance reason to do so.
As a final note... If you make your fixtures run in parallel by default (either with an assembly-level attribute or with attributes on each fixture) and place no Parallelizable attribute on your test cases, NUnit uses an optimization whereby all the tests within the fixture run on the same thread. This saving in context switches will often make up for the loss of any performance improvement you hoped to get through running in parallel.
I am writing some tests for my application using NUnit, with ReSharper as the test runner. I have a test that creates entries in SQL using a custom data access layer and then checks to see if they were in fact created. This check is done using a more direct connection with SqlDataAdapter.
After gathering the test results, the test entries are deleted from the table (within the test method, not in TearDown()). I've noticed that I often stop the tests in the middle for various reasons, and the entries obviously don't get cleaned up because I told the test to stop, so it never reaches the cleanup code. In my experience, even TearDown() doesn't get called when the tests are stopped using the stop button.
My question is: When I hit the stop button, is there any way to invoke a cleanup procedure to remove tests artifacts?
If you stop the code execution, you are stopping the procedure too, so nothing else will be executed.
Something you can do is delete the test data in SetUp. This method is executed before each test, so it seems to be the best way to be sure that the database is clean.
SetUpAttribute (NUnit 2.0)
This attribute is used inside a TestFixture to provide a common set of functions that are performed just before each test method is called. A TestFixture can have only one SetUp method. If more than one is defined the TestFixture will compile successfully, but its tests will not run.
Sample
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SuccessTests
    {
        [SetUp] public void Init()
        { /* ... */ }

        [TearDown] public void Dispose()
        { /* ... */ }

        [Test] public void Add()
        { /* ... */ }
    }
}
If your tests make a connection to the DB, they are not really unit tests but rather integration tests. Execution of a unit test should not impact any other layer of the application.
I would suggest you use dependency injection. In short, structure your custom data access layer so that you pass a SQL connection object to it when initializing it. From your application you pass the actual connection, but from your unit test you pass a mock connection. You can then assert against the mock connection whether a query or non-query was executed.
This way no data gets inserted into your DB at all, and you don't need to worry about cleaning it up if your test is stopped in the middle of execution.
You can look at an example of mocking a SQL connection in this post: Mocking data access Layer Rhino mock
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
In any case, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an NUnit newbie.)
Create a flag field and return early from the Get test if the Add test failed (set the flag in a catch block in Add):
public bool addFailed = false;

[Test]
public void testAdd()
{
    try
    {
        // ... old test code ...
    }
    catch // catch all errors
    {
        addFailed = true;
        throw; // don't forget to rethrow
    }
}

[Test]
public void testGet()
{
    if (addFailed) return;
    // ... old test code ...
}