I am writing some tests for my application using NUnit, with ReSharper as the test runner. I have a test that creates entries in SQL using a custom data access layer and then checks to see whether they were, in fact, created. This check is done through a more direct connection using a SqlDataAdapter.
After gathering the test results, the test entries are deleted from the table (within the test method, not in TearDown()). I've noticed that I often stop the tests in the middle for various reasons, and they obviously don't get cleaned up: because I told the test to stop, it never reaches the cleanup code. In my experience, even TearDown() doesn't get called when the tests are stopped using the stop button.
My question is: when I hit the stop button, is there any way to invoke a cleanup procedure to remove test artifacts?
If you stop code execution, you are stopping the procedure too, so nothing else will be executed.
Something you can do is delete the test data in SetUp. This method is executed before each test, which seems to be the best way to be sure the database is clean.
SetUpAttribute (NUnit 2.0)
This attribute is used inside a TestFixture to provide a common set of functions that are performed just before each test method is called. A TestFixture can have only one SetUp method. If more than one is defined the TestFixture will compile successfully, but its tests will not run.
Sample
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SuccessTests
    {
        [SetUp]
        public void Init()
        { /* delete any leftover test entries here */ }

        [TearDown]
        public void Dispose()
        { /* ... */ }

        [Test]
        public void Add()
        { /* ... */ }
    }
}
If your tests make a connection to the DB, they are not really "unit" tests but more like integration tests. The execution of a unit test should not touch any other layer of the application.
I would suggest using the "dependency injection" mechanism. In short, structure your custom data access layer so that, when initializing it, you pass a SQL connection object to it. From your application you pass the actual connection, but from your unit test you pass a mock connection. You can then assert against the mock connection whether a query or non-query was executed.
This way no data gets inserted into your DB at all, and you don't need to worry about cleaning it up if your test is stopped in the middle of execution.
You can look at an example of mocking SQL connection in this post: Mocking data access Layer Rhino mock
I'm writing a unit test class (using TestNG) that has mocked member variables (using Mockito), and I'm running the tests in parallel. I initially set up the expected mock in a @BeforeClass method, and in each test case I break something by creating a Mockito.when for each exceptional case.
What I'm seeing (unsurprisingly) is that these tests aren't independent; the Mockito.when in one test case affects the others. I realized that I could set up the mocks before each test instead, and I changed the @BeforeClass to @BeforeMethod. I still didn't expect these to pass consistently, as the tests are all still operating on the same shared mock object at the same time. However, all the tests started passing consistently. My question is: why? Will this eventually fail? No matter what I do (Thread.sleep, etc.) I can't reproduce a failure.
Is using @BeforeMethod enough to make these tests independent? If so, can anyone explain why?
Example code below:
public class ExampleTest {
    @Mock
    private List<String> list;

    @BeforeClass // Changing to @BeforeMethod works for some reason
    public void setup() throws NoSuchComponentException, ADPRuntimeException {
        MockitoAnnotations.initMocks(this);
        Mockito.when(list.get(0)).thenReturn("normal");
    }

    @Test
    public void testNormalCase() throws InterruptedException {
        assertEquals(list.get(0), "normal"); // Fails with expected [normal] but found [exceptional]
    }

    @Test
    public void testExceptionalCase() throws InterruptedException {
        Mockito.when(list.get(0)).thenReturn("exceptional");
        assertEquals(list.get(0), "exceptional");
    }
}
The problem here is that TestNG creates one instance of your test class ExampleTest, and this is the instance that is used by both of your @Test methods.
So when you used @BeforeClass, you would get random failures in testNormalCase() if testExceptionalCase() ran first and altered the state of your test class.
When you changed your annotation to @BeforeMethod, the setup was executed right before every @Test method.
So the setup would fix the state for testNormalCase(), which is why it would pass; and since testExceptionalCase() was internally altering the state using Mockito.when() and then running its assertions, it would pass all the time as well.
But there's one scenario in which your setup will still fail, viz. when you use the parallel="methods" attribute in the <suite> tag of your TestNG suite XML file, i.e. when you configure TestNG and instruct it to run every @Test method in parallel.
In that case, the Mockito.when() within testExceptionalCase() will affect the shared state (since you are using this in a shared manner amongst all your @Test methods), causing testNormalCase() to fail randomly.
To fix this, I would suggest one of the following:
1. Don't share this between your @Test methods, but house the state separately outside of your test class, i.e. put all the data members of your test class in a separate POJO which is mocked, rather than mocking this.
2. Use a ThreadLocal to store the state which is being stubbed by Mockito.when(), and run assertions against the ThreadLocal from within your @Test methods, as in the sketch below.
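A minimal sketch of the second option, applied to the example above (this assumes TestNG and Mockito on the classpath, and Java 8+ for ThreadLocal.withInitial; the field name is illustrative):

import static org.testng.Assert.assertEquals;

import java.util.List;
import org.mockito.Mockito;
import org.testng.annotations.Test;

public class ExampleTest {
    // Each thread gets its own independently stubbed mock, so @Test methods
    // running in parallel can no longer see each other's Mockito.when() calls.
    private static final ThreadLocal<List<String>> listMock =
            ThreadLocal.withInitial(() -> {
                @SuppressWarnings("unchecked")
                List<String> mock = (List<String>) Mockito.mock(List.class);
                Mockito.when(mock.get(0)).thenReturn("normal");
                return mock;
            });

    @Test
    public void testNormalCase() {
        assertEquals(listMock.get().get(0), "normal");
    }

    @Test
    public void testExceptionalCase() {
        List<String> mock = listMock.get();
        // This stubbing affects only the current thread's copy of the mock.
        Mockito.when(mock.get(0)).thenReturn("exceptional");
        assertEquals(mock.get(0), "exceptional");
    }
}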
I know this is a basic question, but due to my lack of knowledge I couldn't start writing tests for my application. So here I am, trying to learn and understand what to test and how to test it, based on my application scenario.
class MyController {
    MyService service = new MyService();

    public void someControllerMethod(Long keyVal, Model model) {
        List<String> myList = service.generateSomeList(keyVal);
        model.add("myList", myList);
    }
}

class MyService {
    ThirdPartyService extService = new ThirdPartyService();
    MyDao dao = new MyDao();

    public List<String> generateSomeList(Long key) {
        List<String> resultList = dao.fetchMyList(key);
        extService.processData(resultList); // uses the 3rd-party service to process the result; it doesn't return anything
        return formattedList(resultList);
    }

    private List<String> formattedList(List<String> listToProcess) {
        // do some formatting
    }
}

class MyDao {
    List<String> fetchMyList(Long key) {
        // use a DB connection to run SQL and return the result
    }
}
I want to do both unit testing and integration testing. So some of my questions are:
Do I have to do unit testing for MyDao? I don't think so, since I can test the query result by testing at the service level.
What are the possible test cases at the service level? I can think of testing the result from the DB and testing the formatting function. Any other tests that I've missed?
When testing the generateSomeList() method, is it OK to create a dummy string list and test the result against it, like the code below? Am I creating the list myself and then testing it myself? Is this a proper/correct way to write the test?
@Test
public void generateSomeListTest() {
    // Step 1: create a dummy string list, e.g. ["test1", "test2", "test3"]
    List<String> dummyList = Arrays.asList("test1", "test2", "test3");
    // Step 2: stub the DAO call
    when(mydao.fetchMyList(anyLong())).thenReturn(dummyList);
    // Step 3: call the service
    List<String> result = service.generateSomeList(123L);
    // Step 4: compare the result with the dummy list
    assertEquals(result, dummyList);
}
I don't think I have to test the third-party service, but I think I have to make sure it is being called. Is this correct? If yes, how can I do that with Mockito?
How do I make sure the third-party service has really done the processing of my data? Since its return type is void, how can I test that it really did its job, e.g. like sending an email?
Do I have to write tests for the controller, or can I just write an integration test?
I would really appreciate it if you could answer these questions, to help me understand the testing part of the application.
Thanks.
They're not really unit tests, but yes, you should test your DAOs. One of the main points of using DAOs is precisely that they're relatively easy to test (you store some test data in the database, then call a DAO method which executes a query, and check that the method returns what it should return), and that they make the service layer easy to test by mocking the DAOs. You should definitely not use the real DAOs when testing the services. Use mock DAOs. That will make the service tests much simpler, and much, much faster.
Testing the results from the DB is the job of the DAO test. The service test should use a mock DAO that returns hard-coded results, and check that the service does what it should do with these hard-coded results (formatting, in this case).
Yes, it's fine.
Usually, it's sufficient to stub the dependencies; verifying that they have been called is often redundant. In this case it could be a good idea, though, since the third-party service doesn't return anything. But that's a code smell: why doesn't it return something? See the verify() method in Mockito. It's the very first point in the Mockito documentation: http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html#1
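For example, a rough sketch with the classes from the question (this assumes the service can be handed mocked collaborators through a constructor, which is exactly the dependency injection point in the note at the end):

import java.util.ArrayList;
import java.util.Arrays;
import org.junit.Test;
import org.mockito.Mockito;

public class MyServiceTest {

    @Test
    public void generateSomeListCallsThirdPartyService() {
        MyDao dao = Mockito.mock(MyDao.class);
        ThirdPartyService extService = Mockito.mock(ThirdPartyService.class);
        Mockito.when(dao.fetchMyList(123L))
               .thenReturn(new ArrayList<>(Arrays.asList("a", "b")));

        // Hypothetical constructor taking the dependencies (see the
        // dependency injection note at the end of this answer).
        MyService service = new MyService(dao, extService);
        service.generateSomeList(123L);

        // Fails the test if processData() was not called exactly once.
        Mockito.verify(extService).processData(Mockito.anyList());
    }
}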
The third-party service is supposed to have been tested separately and thus to be reliable, so you're supposed to assume that it does what its documentation says it does. The test for A, which uses B, is not supposed to test B; only A. The test of B tests B.
A unit test is usually simpler to write and faster to execute. It's also easier to test corner cases with unit tests. Integration tests should be more coarse-grained.
Just a note: what your code severely lacks as it stands is dependency injection. That's what will make your code testable. It's very hard to test as written, because the controller creates its service, which creates its DAO. Instead, the controller should be injected with the service, and the service should be injected with the DAO. That's what allows injecting a mock DAO into the service to test the service in isolation, and injecting a mock service into the controller to test the controller in isolation.
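A minimal sketch of what that restructuring could look like (constructor-based injection; the constructor signature is illustrative, and the controller would take MyService the same way):

import java.util.List;

class MyService {
    private final MyDao dao;
    private final ThirdPartyService extService;

    // The dependencies are passed in instead of being constructed internally,
    // so a test can substitute mocks for both of them.
    MyService(MyDao dao, ThirdPartyService extService) {
        this.dao = dao;
        this.extService = extService;
    }

    public List<String> generateSomeList(Long key) {
        List<String> resultList = dao.fetchMyList(key);
        extService.processData(resultList);
        return formattedList(resultList);
    }

    private List<String> formattedList(List<String> listToProcess) {
        // formatting as before
        return listToProcess;
    }
}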
I have a couple of DAO unit test classes that I want to run together using TestNG; however, TestNG tries to run them in parallel, which results in some rollbacks failing. While I would like my unit test classes to run sequentially, I also want to be able to specify a minimum time that TestNG must wait before it runs the next test. Is this achievable?
P.S. I understand that TestNG can be told to run all the tests in a test class in a single thread, and I can specify the sequence of method calls using groups anyway, so that's perhaps not an issue.
What about a hard dependency between the two tests? If you write this:
@Test
public void test1() { ... }

@Test(dependsOnMethods = "test1", alwaysRun = true)
public void test2() { ... }
then test2 will always be run after test1.
Do not forget alwaysRun = true, otherwise if test1 fails, test2 will be skipped!
If you do not want to run your classes in parallel, you need to specify the parallel attribute of your suite as false. By default, it's false. So I would think that it should run sequentially by default, unless you have some change in the way you invoke your tests.
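For example, something like this in your suite file (the suite, test, and class names are placeholders):

<!-- parallel="false" (or "none" in newer TestNG versions) runs everything sequentially -->
<suite name="MySuite" parallel="false">
  <test name="DaoTests">
    <classes>
      <class name="com.example.MyDaoTest"/>
    </classes>
  </test>
</suite>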
For adding a bit of delay between your classes, you can probably add your delay logic in a method annotated with @AfterClass. AFAIK TestNG does not have a way to specify that in the testng.xml or on the command line. There is a timeout attribute, but that is for timing out tests and is probably not what you are looking for.
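Something like this, for instance (the class name and the two-second pause are arbitrary):

import org.testng.annotations.AfterClass;

public class MyDaoTest {

    @AfterClass
    public void pauseAfterThisClass() throws InterruptedException {
        // Fixed pause after all the tests in this class have finished,
        // before TestNG moves on to the next class.
        Thread.sleep(2000);
    }
}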
For adding delay between your tests, i.e. the <test> tags in the XML, you can try implementing the ITestListener onFinish() method, wherein you can add your delay code; it runs after every <test>. If a delay is required after every test case, then implement the delay code in IInvokedMethodListener's afterInvocation(), which runs after every test method. You would then need to specify the listener when you invoke your suite.
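A rough sketch of the method-level variant (the one-second delay is arbitrary; register the listener in the suite file's <listeners> tag or with @Listeners):

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class DelayAfterMethodListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // No delay needed before a test method runs.
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        try {
            // Wait after every test method, e.g. to let rollbacks settle.
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}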
Hope it helps.
Following is what I used in some tests.
First, define utility methods like this:
// Make the thread sleep a while, to reduce interference with subsequent
// operations when there is any shared resource.
private void delay(long milliseconds) throws InterruptedException {
    Thread.sleep(milliseconds);
}

private void delay() throws InterruptedException {
    delay(500);
}
Then call the method inside your test methods, at the end or the beginning, e.g.:
@Test
public void testCopyViaTransfer() throws IOException, InterruptedException {
    copyViaTransfer(new File(sourcePath), new File(targetPath));
    delay();
}
So a TestMethod runs only once in one test run.
How can I, in a single test run, have a TestMethod run several times, each time against a different data set that I've set up? My data does not come from a database or file; I want to build up several different in-memory instances of mock test data.
TestInitialize doesn't let me do this, as it runs only once as well.
What is in control of the execution of TestMethods? How do I make it re-run my TestMethods for each data set, and how do I access the data set then?
I thought TestContext would be useful, but it seems to be database-only?
What you're looking for is so-called Data-driven testing.
Look e.g. here and here for descriptions on how to do it with MSTest.
HTH.
Thomas
You could define a test method which calls the other test method multiple times, after doing the correct setup. I'm not saying that this is a good thing to do, but I believe it would work:
public class TestClass
{
    // This is where the per-data-source test is. It is not marked as a TestMethod
    // because it will not be invoked directly by the test runner.
    public void ActualTest()
    {
        // Per-data-source test logic here.
    }

    [TestMethod]
    public void RunActualTestsMultipleTimesWithDifferentConfigs()
    {
        // Setup for test run with data set 1
        ActualTest();

        // Setup for test with data set 2
        ActualTest();
    }
}
This feels like a terrible, terrible hack, I freely admit that. I wouldn't use this myself if I had any other choice, but it may be an option.
Another possibility is to look into how extensible MSTest is, specifically whether there is any mechanism to modify or extend the test runner.
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test, which is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory feature (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that, rather than connecting to a real database during the test, your code should use some sort of data interface, and for the test you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job; that's not really a unit test.
I don't think that's possible out of the box.
In any case, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an NUnit newbie.)
Create a global flag, set it in a catch-all block when Add fails, and return early from the Get test if it is set:
// Static, so the flag survives even if the test framework creates a new
// instance of the test class for each test method (as JUnit does).
public static boolean addFailed = false;

public void testAdd() throws Throwable {
    try {
        // ... old test code ...
    } catch (Throwable t) { // catch all errors
        addFailed = true;
        throw t; // don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    // ... old test code ...
}