I am trying to unit test a class in various scenarios. We use JUnit 4.
I have a setup method wherein I re-stub the mock to return an expected mock value.
I have four tests, test1 through test4. test1 and test2 work fine with the expected mocked value configured in the perTestSetup() method.
test3 needs MockClass to throw an exception, so I configure that separately inside test3. test3 works fine, as the mock throws the exception as expected.
But when perTestSetup() tries to re-stub the mock to return mockResult before running test4, it fails and throws the same RuntimeException configured in test3. I also tried calling reset() before stubbing in perTestSetup(), but that fails in the same way.
What am I missing here?
@Before
public void perTestSetup(){
    when(MockClass.functionCall(...)).thenReturn(mockResult);
}

@Test
public void test1(){
}

@Test
public void test2(){
}

@Test
public void test3(){
    when(MockClass.functionCall(...)).thenThrow(new RuntimeException());
    ...
}

@Test
public void test4(){
}
Your perTestSetup() method isn't doing what you think it is doing. The @Before annotation means the test environment will run this method once, before doing any of the tests, rather than once per test. Before I finished reading your question, I was actually itching to advise you to rename this method to simply setup(), as that would be a more accurate description.
Options:
Change the annotation to @BeforeEach, which would then change the behaviour to do what you think it should currently be doing. However, this would be inefficient, as in two of the tests you would be defining behaviour and then immediately redefining it.
What do the parameters look like in your functionCall(...)? It may be possible to define two separate behaviours in your single @Before setup() method, i.e.
when(MockClass.functionCall(good values)).thenReturn(mockResult);
when(MockClass.functionCall(bad values)).thenThrow(new RuntimeException());
In each test, call functionCall() with the relevant values for that particular test.
If the parameters in functionCall() do not readily accommodate the previous approach, consider making two separate instantiations of MockClass, something like
MockClass successfulMockClass = mock(MockClass.class);
when(successfulMockClass.functionCall(...)).thenReturn(mockResult);

MockClass unsuccessfulMockClass = mock(MockClass.class);
when(unsuccessfulMockClass.functionCall(...)).thenThrow(new RuntimeException());
In your tests, call on the relevant mocked object depending on what input you are testing against.
Without being able to see the details of your class, I suspect the second option is what I would go for (a sketch of it follows below). It may be worth trying all three to see which feels most intuitive for you, though.
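A minimal sketch of that second option, with the question's MockClass stood in by a hypothetical Collaborator interface so the example is self-contained (the exact parameters of functionCall() are an assumption):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;

public class CollaboratorTest {

    // Hypothetical stand-in for the question's MockClass;
    // here functionCall() takes a single String.
    interface Collaborator {
        String functionCall(String input);
    }

    private Collaborator mockClass;

    @Before
    public void setup() {
        mockClass = mock(Collaborator.class);
        // Two behaviours on one mock, keyed on the argument values,
        // both defined once in the setup method.
        when(mockClass.functionCall("good")).thenReturn("mockResult");
        when(mockClass.functionCall("bad")).thenThrow(new RuntimeException());
    }

    @Test
    public void happyPath() {
        // Exercise the code under test with the "good" input here.
        mockClass.functionCall("good");
    }

    @Test(expected = RuntimeException.class)
    public void failurePath() {
        // Exercise the code under test with the "bad" input here.
        mockClass.functionCall("bad");
    }
}

Because no test re-stubs the shared mock, nothing configured for one test can leak into the next.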
I'm writing a unit test class (using TestNG) that has mocked member variables (using Mockito) and running the tests in parallel. I initially set up the expected mock in a @BeforeClass method, and in each test case I break something by creating a Mockito.when for each exceptional case.
What I'm seeing (unsurprisingly) is that these tests aren't independent; the Mockito.when in one test case affects the others. I realized that I could set up the mocks before each test instead, so I changed the @BeforeClass to @BeforeMethod. I still didn't expect these to pass consistently, as the tests are all still operating on the same shared mock object at the same time. However, all the tests started passing consistently. My question is: why? Will this eventually fail? No matter what I do (Thread.sleep, etc.), I can't reproduce a failure.
Is using @BeforeMethod enough to make these tests independent? If so, can anyone explain why?
Example code below:
public class ExampleTest {

    @Mock
    private List<String> list;

    @BeforeClass // Changing to @BeforeMethod works for some reason
    public void setup() throws NoSuchComponentException, ADPRuntimeException {
        MockitoAnnotations.initMocks(this);
        Mockito.when(list.get(0)).thenReturn("normal");
    }

    @Test
    public void testNormalCase() throws InterruptedException {
        assertEquals(list.get(0), "normal"); // Fails with expected [normal] but found [exceptional]
    }

    @Test
    public void testExceptionalCase() throws InterruptedException {
        Mockito.when(list.get(0)).thenReturn("exceptional");
        assertEquals(list.get(0), "exceptional");
    }
}
The problem here is that TestNG creates one instance of your test class ExampleTest, and this is the instance that is used by both of your @Test methods.
So when you used @BeforeClass, you would have random failures with testNormalCase() if testExceptionalCase() ran first and altered the state of your test class.
When you changed your annotation to @BeforeMethod, it would cause the setup to be executed right before every @Test method.
So the setup would fix the state for testNormalCase(), which is why it would pass; and since testExceptionalCase() was internally altering the state using Mockito.when() and then running assertions, it would pass all the time as well.
But there's one scenario in which your setup will still fail: when you use the parallel="methods" attribute in the <suite> tag of your TestNG suite XML, i.e., when you configure TestNG and instruct it to run every @Test method in parallel.
In that case, the Mockito.when() within testExceptionalCase() will affect the shared state (since the mock is shared amongst all your @Test methods), causing testNormalCase() to fail randomly.
To fix this, I would suggest the following:
Don't share this (the test class instance) between your @Test methods; house the state separately outside of your test class, i.e., move all the data members of your test class into a separate POJO which would be mocked, rather than mocking this.
Use a ThreadLocal to store the state which is being mocked by Mockito.when(), and then run assertions on the ThreadLocal from within your @Test methods, as in the sketch below.
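A minimal sketch of the ThreadLocal approach, reusing the List<String> mock from the question; it relies on TestNG running each @BeforeMethod on the same worker thread as the @Test method it precedes:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.testng.Assert.assertEquals;

import java.util.List;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ThreadLocalMockTest {

    // Each TestNG worker thread lazily gets its own mock, so with
    // parallel="methods" one test's stubbing cannot leak into another.
    @SuppressWarnings("unchecked")
    private static final ThreadLocal<List<String>> LIST =
            ThreadLocal.withInitial(() -> (List<String>) mock(List.class));

    @BeforeMethod
    public void setup() {
        // Re-stub the current thread's mock before every test method.
        when(LIST.get().get(0)).thenReturn("normal");
    }

    @Test
    public void testNormalCase() {
        assertEquals(LIST.get().get(0), "normal");
    }

    @Test
    public void testExceptionalCase() {
        when(LIST.get().get(0)).thenReturn("exceptional");
        assertEquals(LIST.get().get(0), "exceptional");
    }
}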
I have this production code in my Presenter:
@UiThread
public void tryToReplaceLogo(String emailInitiallySearchedFor, String logoUrl) {
    if (isTheEmailWeAskedApiForStillTheSameAsInTheInputField(emailInitiallySearchedFor)) {
        if (!TextUtils.isEmpty(logoUrl)) {
            downloadAndShowImage(logoUrl);
        } else {
            view.displayDefaultLogo();
        }
    }
}

public void downloadAndShowImage(String url) {
    final Target target = new Target() {
        @Override
        public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
            view.displayLogoFromBitmap(bitmap);
        }

        @Override
        public void onBitmapFailed(Drawable errorDrawable) {
        }

        @Override
        public void onPrepareLoad(Drawable placeHolderDrawable) {
        }
    };

    Picasso.with(view.getViewContext()).load(url).resize(150, 150).centerInside().into(target);
}
And this unit test for it:
@Test
public void testDisplayLogoIfValidUrlReturnedAndEmailEnteredIsTheSame() throws Exception {
    when(loginView.getUserName()).thenReturn(VALID_EMAIL);
    when(loginView.getViewContext()).thenReturn(context);
    loginLogoFetcherPresenter.onValidateEmailEvent(createSuccessfulValidateEmailEvent(VALID_EMAIL));
    waitForAsyncTaskToKickIn();
    verify(loginView).displayLogoFromBitmap((Bitmap) anyObject());
}
However, the displayLogoFromBitmap method is never called, so my test fails. I need to mock the Target dependency to invoke the onBitmapLoaded method, but I don't know how.
Possibly I need to create a static inner class that implements Target, so that I can set a mocked implementation of it in my tests; but how do I invoke the onBitmapLoaded method on the mock?
EDIT:
I have a settable Picasso field in my LoginPresenter now. In production (as I am using AndroidAnnotations), I instantiate it in:
@AfterInject
void initPicasso() {
    picasso = Picasso.with(context);
}
In my test, I mock Picasso like so:
@Mock
Picasso picasso;

@Before
public void setUp() {
    picasso = mock(Picasso.class, RETURNS_DEEP_STUBS);
}
(I don't remember why, but I can't use Mockito 2 at this point. It was some incompatibility with something, I think)
In my test case, I got to this point and I don't know what to do:
@Test
public void displayLogoIfValidUrlReturnedAndEmailEnteredIsTheSame() throws Exception {
    when(loginView.getUserName()).thenReturn(VALID_EMAIL);
    when(loginView.getViewContext()).thenReturn(context);
    when(picasso.load(anyString()).resize(anyInt(), anyInt()).centerInside().into(???)) // What do I do here?
    loginLogoFetcherPresenter.onValidateEmailEvent(createSuccessfulValidateEmailEvent(VALID_EMAIL));
    waitForAsyncTaskToKickIn();
    verify(loginView).displayLogoFromBitmap((Bitmap) anyObject());
}
I need to mock the Target dependency
No; do not mock the system under test. Target is as much a part of that system as anything; you wrote the code for it, after all. Remember, once you mock out a class, you commit to not using its implementation, so trying to mock Target to invoke onBitmapLoaded is missing the point.
What's going on here is that you're passing Target—which is real code you wrote that is worth testing—into Picasso, which is external code you didn't write but do depend on. This makes Picasso the dependency worth mocking, with the caveat that mocking interfaces you don't control can get you into trouble if they change (e.g. a method turns final).
So:
Mock your Picasso instance, and the RequestCreator instance Picasso returns when it loads. RequestCreator implements the Builder pattern, so it's a prime candidate for Mockito 2.0's RETURNS_SELF option or other Builder pattern strategies.
Pass the Picasso instance into your system under test, rather than creating it using Picasso.with. At this point you may not need to stub LoginView.getViewContext(), which is a good thing as your test can interact less with hard-to-test Android system classes, and because you've further separated object creation (Picasso) from business logic.
Use an ArgumentCaptor in your test to extract the Target instance that was passed to RequestCreator.into.
Test the state of the system before the async callback returns, if you'd like. It's optional, but it's definitely a state your system will be in, and it's easy to forget to test it. You'd probably call verify(view, never()).displayLogoFromBitmap(any()).
Call target.onBitmapLoaded yourself. You have the target instance at this point, and it should feel correct to explicitly call your code (that is written in your system-under-test) from your test.
Assert your after-callback state, which here would be verify(view).displayLogoFromBitmap(any()).
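Putting those steps together, here is a hedged sketch. The LoginLogoFetcherPresenter constructor taking (LoginView, Picasso) is an assumption about how the dependency gets injected, and the matcher imports are the Mockito 2 paths (on 1.x they live in org.mockito.Matchers):

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import android.graphics.Bitmap;
import com.squareup.picasso.Picasso;
import com.squareup.picasso.RequestCreator;
import com.squareup.picasso.Target;
import org.junit.Test;
import org.mockito.ArgumentCaptor;

public class LoginLogoFetcherPresenterTest {

    @Test
    public void displaysLogoOnceBitmapLoads() {
        // Stub the builder chain by hand: each chained call on the
        // RequestCreator mock returns that same mock.
        Picasso picasso = mock(Picasso.class);
        RequestCreator creator = mock(RequestCreator.class);
        when(picasso.load(anyString())).thenReturn(creator);
        when(creator.resize(anyInt(), anyInt())).thenReturn(creator);
        when(creator.centerInside()).thenReturn(creator);

        LoginView view = mock(LoginView.class);
        // Hypothetical constructor: the presenter receives Picasso as a
        // dependency instead of calling Picasso.with() itself.
        LoginLogoFetcherPresenter presenter =
                new LoginLogoFetcherPresenter(view, picasso);

        presenter.downloadAndShowImage("https://example.com/logo.png");

        // Extract the Target the presenter handed to into().
        ArgumentCaptor<Target> targetCaptor = ArgumentCaptor.forClass(Target.class);
        verify(creator).into(targetCaptor.capture());

        // Optional: assert the pre-callback state.
        verify(view, never()).displayLogoFromBitmap(any(Bitmap.class));

        // Drive the callback by hand, then assert the after-callback state.
        // Bitmap is final, so instead of mocking it we pass null straight
        // through; the presenter merely forwards it to the view.
        targetCaptor.getValue().onBitmapLoaded(null, Picasso.LoadedFrom.NETWORK);
        verify(view).displayLogoFromBitmap(null);
    }
}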
Note that there is an existing test helper called MockPicasso, but it seems to require Robolectric, and I haven't reviewed its safety or utility myself.
It seems like EasyMock tests tend to follow this pattern:
@Test
public void testCreateHamburger()
{
    // set up the expectation
    EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
            .andReturn(mockHamburger);

    // replay the mock
    EasyMock.replay(mockFoodFactory);

    // perform the test
    mockAverager.average(chef.cookFood("Hamburger"));

    // verify the result
    EasyMock.verify(mockFoodFactory);
}
This works fine for one test, but what happens when I want to test the same logic again in a different method? My first thought is to do something like this:
@Before
public void setUp()
{
    // set up the expectation
    EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
            .andReturn(mockHamburger);

    // replay the mock
    EasyMock.replay(mockFoodFactory);
}

@After
public void tearDown()
{
    // verify the result
    EasyMock.verify(mockFoodFactory);
}

@Test
public void testCreateHamburger()
{
    // perform the test
    mockAverager.average(chef.cookFood("Hamburger"));
}

@Test
public void testCreateMeal()
{
    // perform the test
    mockAverager.average(chef.cookMeal("Hamburger"));
}
There are a few fundamental problems with this approach. The first is that I can't have any variation in my method calls. If I want to test chef.cookFood("Turkey Burger"), my setup method won't work. The second problem is that my setup method requires createHamburger to be called. If I call chef.cookFood("Salad"), that expectation might not be applicable. I could use anyTimes() or stubReturn() with EasyMock to avoid this problem. However, those methods only verify that if a method is called, it's called with certain parameters, not that the method was actually called.
The only solution that's worked so far is to copy and paste the expectations into every test and vary the parameters. Does anybody know any better ways to test with EasyMock that maintain the DRY principle?
The problems you are running into arise because unit tests should be DAMP, not DRY. Unit tests will tend to repeat themselves. If you can remove the repetition in a safe way (so that it doesn't create unnecessarily coupled tests), then go for it. If not, then don't force it. Unit tests should be quick and easy; if they aren't, then you are spending too much time testing instead of writing business value.
Just my two cents. BTW, the Art of Unit Testing by Roy Osherove is a great read on unit testing, and covers this topic.
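For what it's worth, when the repetition is safe to factor out, a parameterised helper method keeps each test explicit without locking the expectation into a rigid @Before. A sketch, with simplified stand-ins for the question's Chef and FoodFactory types (the two-argument createHamburger signature is mine):

import org.easymock.EasyMock;
import org.junit.Test;

public class ChefTest {

    // Minimal stand-ins for the question's types, so the sketch is
    // self-contained; the real createHamburger takes more parameters.
    interface Hamburger {}
    interface FoodFactory {
        Hamburger createHamburger(String meat, String cheese);
    }
    static class Chef {
        private final FoodFactory factory;
        Chef(FoodFactory factory) { this.factory = factory; }
        Hamburger cookFood(String meat, String cheese) {
            return factory.createHamburger(meat, cheese);
        }
    }

    // JUnit creates a fresh test instance per test method, so each
    // test gets its own mocks.
    private final FoodFactory mockFoodFactory = EasyMock.createMock(FoodFactory.class);
    private final Hamburger mockHamburger = EasyMock.createMock(Hamburger.class);
    private final Chef chef = new Chef(mockFoodFactory);

    // Parameterised helper: each test states its own ingredients, so the
    // expect/replay dance is written once but nothing is hidden.
    private void expectBurger(String meat, String cheese) {
        EasyMock.expect(mockFoodFactory.createHamburger(meat, cheese))
                .andReturn(mockHamburger);
        EasyMock.replay(mockFoodFactory);
    }

    @Test
    public void testBeefBurger() {
        expectBurger("Beef", "Swiss");
        chef.cookFood("Beef", "Swiss");
        EasyMock.verify(mockFoodFactory);
    }

    @Test
    public void testTurkeyBurger() {
        expectBurger("Turkey", "Cheddar");
        chef.cookFood("Turkey", "Cheddar");
        EasyMock.verify(mockFoodFactory);
    }
}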
I have a couple of DAO unit test classes that I want to run together using TestNG; however, TestNG tries to run them in parallel, which results in some rollbacks failing. While I would like my unit test classes to run sequentially, I also want to be able to specify a minimum time that TestNG must wait before it runs the next test. Is this achievable?
P.S. I understand that TestNG can be told to run all the tests in a test class in a single thread, and I am able to specify the sequence of method calls using groups anyway, so that's not an issue perhaps.
What about a hard dependency between the two tests? If you write this:
@Test
public void test1() { ... }

@Test(dependsOnMethods = "test1", alwaysRun = true)
public void test2() { ... }
then test2 will always be run after test1.
Do not forget alwaysRun = true, otherwise if test1 fails, test2 will be skipped!
If you do not want to run your classes in parallel, you need to specify the parallel attribute of your suite as false. By default, it's false. So I would think that it should run sequentially by default, unless you have some change in the way you invoke your tests.
For adding a bit of delay between your classes, you can probably add your delay logic in a method annotated with @AfterClass. AFAIK, TestNG does not have a way to specify that in a testng.xml file or on the command line. There is a timeout attribute, but that is for timing out tests and is probably not what you are looking for.
For adding a delay between your tests (i.e. the <test> tags in the XML), you can try implementing the ITestListener interface's onFinish() method, wherein you can add your delay code; it runs after every <test>. If a delay is required after every test method, then implement the delay in IInvokedMethodListener's afterInvocation() method, which runs after every test method, as in the sketch below. You would then need to register the listener when you invoke your suite.
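A minimal sketch of that IInvokedMethodListener approach (the class name and the one-second delay are illustrative):

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

// Sleeps for a fixed interval after every test method. Register it via
// <listeners> in the suite XML or the @Listeners annotation.
public class DelayAfterMethodListener implements IInvokedMethodListener {

    private static final long DELAY_MILLIS = 1000; // hypothetical minimum gap

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // no-op: we only delay after a method has run
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (method.isTestMethod()) {
            try {
                Thread.sleep(DELAY_MILLIS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}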
Hope it helps.
Following is what I used in some tests.
First, define utility methods like this:
// Make the thread sleep a while, to reduce interference with any
// subsequent operations on a shared resource.
private void delay(long milliseconds) throws InterruptedException {
    Thread.sleep(milliseconds);
}

private void delay() throws InterruptedException {
    delay(500);
}
Then call the method inside test methods, at the end or at the beginning, e.g.:
@Test
public void testCopyViaTransfer() throws IOException, InterruptedException {
    copyViaTransfer(new File(sourcePath), new File(targetPath));
    delay();
}
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an NUnit newbie.)
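The same prerequisite-gating idea, sketched in JUnit 4 terms to match the Java examples elsewhere in this thread (class and helper names are hypothetical; NUnit's [TestFixtureSetUp] plus Assert.Ignore plays the analogous role):

import static org.junit.Assume.assumeTrue;

import org.junit.BeforeClass;
import org.junit.Test;

public class GetTests {

    @BeforeClass
    public static void checkPrerequisites() {
        // On recent JUnit 4 versions, a failed assumption here skips
        // every test in the class instead of failing them.
        assumeTrue("database must be reachable before Get tests run",
                canConnectToDatabase());
    }

    @Test
    public void getReturnsStoredData() {
        // ... the actual Get tests ...
    }

    // Hypothetical helper: try opening a connection and report success.
    private static boolean canConnectToDatabase() {
        return true; // placeholder
    }
}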
Create a global flag; in the Get test, return early if the Add test has set it to true (the flag is set in Add's catch block):
public boolean addFailed = false;

public void testAdd() {
    try {
        ... old test code ...
    } catch (Throwable t) { // catch all errors
        addFailed = true;
        throw t; // don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    ... old test code ...
}