Splitting a test into a set of smaller tests - unit-testing

I want to be able to split a big test into smaller tests such that when the smaller tests pass, they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, require less effort, and are less fragile. I would like to know if there are test design patterns or verification tools that can help me achieve this test splitting in a robust way.
I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test.
An example of what I am aiming at:
// Class under test
class A {
    public void setB(B b) { this.b = b; }

    public Output process(Input i) {
        return b.process(doMyProcessing(i));
    }

    private InputFromA doMyProcessing(Input i) { .. }
    ..
}

// Another class under test
class B {
    public Output process(InputFromA i) { .. }
    ..
}
// The Big Test
@Test
public void theBigTest() {
    A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive
    Input i = createInput();
    Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive
    assertEquals(o, expectedOutput());
}
// The split tests
@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1() {
    // this method is a bit too long but it's just an example..
    Input i = createInput();
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow
    B b = mock(B.class);
    when(b.process(x)).thenReturn(expected);
    A classUnderTest = createInstanceOfClassA();
    classUnderTest.setB(b);
    Output o = classUnderTest.process(i);
    assertEquals(o, expected);
    verify(b).process(x);
    verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2() {
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow
    B classUnderTest = createInstanceOfClassB();
    Output o = classUnderTest.process(x);
    assertEquals(o, expected);
}

The first suggestion I'll make is to refactor your tests on red (failing). To do so, you'll have to break your production code temporarily. This way, you know the tests are still valid.
One common pattern is to use a separate test fixture per collection of "big" tests. You don't have to stick to the "all tests for one class in one test class" pattern. If a set of tests are related to each other, but are unrelated to another set of tests, then put them in their own class.
The biggest advantage to using a separate class to hold the individual small tests for the big test is that you can take advantage of setup and tear-down methods. In your case, I would move the lines you have commented with:
// this should be the same in both tests and it should be ensured somehow
to the setup method (in JUnit, a method annotated with @Before). If you have some unusually expensive setup that needs to be done, most xUnit testing frameworks have a way to define a setup method that runs once before all of the tests. In JUnit, this is a public static void method that has the @BeforeClass annotation.
If the test data is immutable, I tend to define the variables as constants.
Putting all this together, you might have something like:
public class TheBigTest {

    // If InputFromA is immutable, it could be declared as a constant
    private InputFromA x;

    // If Output is immutable, it could be declared as a constant
    private Output expected;

    // You could use
    // @BeforeClass public static void setUpExpectations()
    // instead if it is very expensive to set up the data
    @Before
    public void setUpExpectations() throws Exception {
        x = expectedInputFromA();
        expected = expectedOutput();
    }

    @Test
    public void smallerTest1() {
        // this method is a bit too long but it's just an example..
        Input i = createInput();
        B b = mock(B.class);
        when(b.process(x)).thenReturn(expected);
        A classUnderTest = createInstanceOfClassA();
        classUnderTest.setB(b);
        Output o = classUnderTest.process(i);
        assertEquals(o, expected);
        verify(b).process(x);
        verifyNoMoreInteractions(b);
    }

    @Test
    public void smallerTest2() {
        B classUnderTest = createInstanceOfClassB();
        Output o = classUnderTest.process(x);
        assertEquals(o, expected);
    }
}
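If building the expectations really is expensive, the same fixture can compute them once for the whole class instead. A hedged sketch of that variant, assuming the helper methods above can be made static:

public class TheBigTest {

    private static InputFromA x;
    private static Output expected;

    // Runs once for the whole class instead of once per test
    @BeforeClass
    public static void setUpExpectations() throws Exception {
        x = expectedInputFromA();
        expected = expectedOutput();
    }

    // ... smallerTest1() and smallerTest2() exactly as above ...
}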

All I can suggest is the book xUnit Test Patterns. If there is a solution it should be in there.

theBigTest is missing the dependency on B. Also, smallerTest1 mocks the B dependency. In smallerTest2 you should mock InputFromA.
Why did you create a dependency graph like you did?
A takes a B; then, when A::process is called with an Input, the intermediate InputFromA is post-processed in B.
Keep the big test and refactor A and B to change the dependency mapping.
[EDIT] in response to remarks.
@mkorpela, my point is that looking at the code and its dependencies is how you start to get an idea of how to create smaller tests. A has a dependency on B: in order to complete its process() it must use B's process(). And because B's process() takes an InputFromA, B depends in turn on what A produces.
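To make "change the dependency mapping" concrete, here is a hedged sketch of one option; the Processor interface and setProcessor name are hypothetical, not part of the asker's code. A depends on an abstraction instead of on B directly, so theBigTest stays intact while the small tests can substitute the collaborator freely:

// Hypothetical seam: A depends on an abstraction rather than on B itself
interface Processor {
    Output process(InputFromA i);
}

class B implements Processor {
    public Output process(InputFromA i) { .. }
}

class A {
    private Processor next;
    public void setProcessor(Processor next) { this.next = next; }
    public Output process(Input i) {
        return next.process(doMyProcessing(i));
    }
    private InputFromA doMyProcessing(Input i) { .. }
}

smallerTest1 can then hand A a mocked Processor without referring to B at all.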

Related

NUnit 3.X - How to pass dynamic parameters into a TestCase or TestCaseSource?

CGrunddaten m_grdDaten;

[SetUp]
public void Init()
{
    m_grdDaten = new CGrunddaten();
    m_grdDaten.m_cwdGeoH.m_dW = 325.0;
    m_grdDaten.m_cwd_tl.m_dW = 15;
}

[Test]
public void TestMethod()
{
    m_grdDaten.RechGrdDaten();
    Assert.That(m_grdDaten.m_cwd_pl.m_dW, Is.EqualTo(93344).Within(.1), "Außenluftdruck");
    Assert.That(m_grdDaten.m_cwd_pl_neb.m_dW, Is.EqualTo(93147.3).Within(.1), "Außenluftdruck Nebenluftberechnung");
    Assert.That(m_grdDaten.m_cwd_pl_pmax.m_dW, Is.EqualTo(92928.2).Within(.1), "Außenluftdruck max. zul. Unterdruck");
    Assert.That(m_grdDaten.m_cwdRho_l.m_dW, Is.EqualTo(1.124).Within(.001), "Dichte Außenluft");
    Assert.That(m_grdDaten.m_cwdRho_l_neb.m_dW, Is.EqualTo(1.184).Within(.001), "Dichte Außenluft Nebenluftberechnung");
    Assert.That(m_grdDaten.m_cwdRho_l_pmax.m_dW, Is.EqualTo(1.249).Within(.001), "Dichte Außenluft max. zul. Unterdruck");
}
Is there a way to get this into a TestCase or TestCaseSource, so that I have only one Assert line?
I'm talking about this:
m_grdDaten.m_cwd_pl.m_dW, 93344
m_grdDaten.m_cwd_pl_neb.m_dW, 93147.3
m_grdDaten.m_cwd_pl_pmax.m_dW, 92928.2
....
I know that TestCase and TestCaseSource data must be static... but is there another way?
The best way to do this test would be using the not-yet-implemented multiple asserts feature, so that all the asserts would run even if some failed.
Since that's not available yet, I can understand your wanting to make this into multiple tests, where each gets reported separately. Using test cases makes this possible, of course, even though this is really logically just one test.
The fact that a test case source method must be static doesn't prevent it from creating an instance of your CGrunddaten class. The tests themselves are all just comparing two doubles for equality and don't need to know anything about that class.
You could write something like this:
private static IEnumerable<TestCaseData> GrundDatenDaten
{
    get
    {
        var gd = new CGrunddaten();
        gd.m_cwdGeoH.m_dW = 325.0;
        gd.m_cwd_tl.m_dW = 15;
        gd.RechGrdDaten();
        yield return new TestCaseData(gd.m_cwd_pl.m_dW, 93344, .1, "Außenluftdruck");
        // and so on
    }
}

[TestCaseSource("GrundDatenDaten")]
public void testMethod(object actual, object expected, object tolerance, string label)
{
    Assert.That(actual, Is.EqualTo(expected).Within(tolerance), label);
}
However, I don't like that very much as it hides the true function of the test in the data source. I think your original formulation is the best way to do it for now and leaves you with the ability to include the code in an Assert.Multiple block once that feature is implemented.

How to avoid multiple asserts in a JUnit test?

I have a DTO which I'm populating from the request object, and the request object has many fields. I want to write a test to check whether the populateDTO() method puts values in the right places or not. If I follow the rule of one assert per test, I would have to write a large number of tests to cover each field. The other approach would be to write multiple asserts in a single test. Is it really recommended to follow the one-assert-per-test rule, or can we relax it in cases like this? How do I approach this problem?
Keep them separate. A unit test is supposed to tell you which unit failed. Keeping them separate also allows you to isolate the problem quickly w/o requiring you to go through a lengthy debug cycle.
Is it really recommended to have only one assert per unit test? Yes it is, there are people who make that recommendation. Are they right? I don't think so. I find it hard to believe such people have actually worked on real code for a long time.
So, imagine you have a mutator method you want to unit test. The mutator has some kind of effect, or effects, which you want to check. Typically the expected effects of a mutator are few in number, because many effects suggest an overly complicated design for the mutator. With one assert per effect and one test case per assert, you will not need many test cases per mutator, so the recommendation does not seem so bad.
But the flaw in this reasoning is that those tests are looking at only the expected effects of the mutator. But if the mutator has a bug in it, it might have unexpected faulty side effects. The tests are making the foolish assumption that the code does not have a whole class of bugs, and that no future refactoring will introduce such bugs. When the method was originally written it might be obvious to the author that particular side effects were impossible, but refactoring and addition of new functionality might make such side effects possible.
The only safe way to test long-lived code is to check that the mutators do not have unexpected side effects. But how can you test for those? Most classes have some invariants: things that no mutator can ever change. The size method of a container will never return a negative value, for example. Each invariant is, in effect, a post-condition for every mutator (and also the constructor). Each mutator also typically has a set of invariants that describe what kind of changes it does not make. A sort method does not change the length of the container, for example. The class and mutator invariants are, in effect, post-conditions for every mutator call. Adding assertions for all of them is the only way of checking for unexpected side effects.
So, just add more test cases? In practice, the number of invariants multiplied by the number of mutators to test is large, so one assertion per test leads to many test cases, and the information about your invariants is scattered over many of them. A design change that tweaks one invariant will require alteration of many test cases. It becomes impractical. It's better to have parameterised test cases for a mutator, which check several invariants using several assertions.
And the authors of JUnit5 seem to agree. They provide an assertAll for checking several assertions in one test-case.
This construction helps you have one big assert (with small asserts inside):
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.junit.jupiter.api.Assertions.assertAll;

assertAll(
    () -> assertThat(actual1, is(expected)),
    () -> assertThat(actual2, is(expected))
);
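To connect this with the invariant argument above, here is a hedged sketch (JUnit 5, with a hypothetical Counter class standing in for a real mutator) of one test that checks an expected effect and a class invariant together, so both are reported even if both fail:

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class CounterTest {

    // Hypothetical class under test: a counter whose value must never go negative.
    static class Counter {
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    @Test
    void incrementKeepsInvariants() {
        Counter c = new Counter();
        c.increment();
        assertAll(
            () -> assertEquals(1, c.value()), // expected effect of the mutator
            () -> assertTrue(c.value() >= 0)  // class invariant: never negative
        );
    }
}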
You can have a parameterized test where the first parameter is the property name and the second is the expected value.
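A hedged sketch of that idea (JUnit 5; the Dto record and populateDTO() below are stand-ins for the asker's real classes):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DtoFieldTest {

    // Stand-in for the question's DTO and its mapping code.
    record Dto(String firstName, String lastName) {}

    private Dto populateDTO() {
        return new Dto("Alice", "Smith");
    }

    @ParameterizedTest(name = "{0} should be {1}")
    @CsvSource({
        "firstName, Alice",
        "lastName, Smith"
    })
    void fieldIsPopulated(String property, String expected) {
        Dto dto = populateDTO();
        String actual = switch (property) { // map the property name to its accessor
            case "firstName" -> dto.firstName();
            case "lastName" -> dto.lastName();
            default -> throw new IllegalArgumentException(property);
        };
        assertEquals(expected, actual);
    }
}

Each property then shows up as a separately reported test, while the test body stays a single assertion.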
Is that rule extended to being in a loop? Consider this:
Collection expectedValues = // populate expected values
populateDTO();
for (DTO dto : myDtoContainer)
    assert_equal(dto, expectedValues.get(someIndexRelatedToDto));
Now I'm not so big on the exact syntax, but this is just the notion I'm looking at.
EDIT:
After the comments...
The answer is ... Nope!
The reason the principle exists is so you can identify which parts of the object fail. If you have them all in one method, the run stops at the first failed assertion: you fix it, hit the next one, then the next, and you never see them all at once.
So you can have it one of two ways:
One method, less boilerplate code.
Many methods, better reporting on the test run
It's up to you, both have ups and downs.
[caveat: I'm very "unfluent" in Java/JUnit, so beware of errors in the details below]
There's a couple of ways to do this:
1) Write multiple assertions in the same test. This should be ok if you are only testing the DTO generation once. You could start here, and move to another solution when this starts to hurt.
2) Write a helper assertion, e.g. assertDtoFieldsEqual, passing in the expected and actual DTO. Inside the helper assertion you assert each field separately. This at least gives you the illusion of only one assert per test and will make things clearer if you test DTO generation for multiple scenarios.
3) Implement equals so that it checks each property, and implement toString so that you can at least inspect the assertion result manually to find out which part is incorrect (a sketch of this follows after the examples below).
4) For each scenario where the DTO is generated, create a separate test fixture that generates the DTO and initializes the expected properties in the setUp method. Then create a separate test for each of the properties. This also results in a lot of tests, but they will at least be one-liners. Example in pseudo-code:
public class WithDtoGeneratedFromXxx : TestFixture
{
    DTO dto = null;

    public void setUp()
    {
        dto = GenerateDtoFromXxx();
        expectedProp1 = "";
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(expectedProp1, dto.prop1);
    }
    ...
}
If you need to test the DTO generation under different scenarios and choose this last method, it could soon become tedious to write all those tests. In that case you could implement an abstract base fixture that leaves the details of how to create the DTO and set up the expected properties to derived classes. Pseudo-code:
abstract class AbstractDtoTest : TestFixture
{
    DTO dto;
    SomeType expectedProp1;

    abstract DTO createDto();
    abstract SomeType getExpectedProp1();

    void setUp()
    {
        dto = createDto();
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(getExpectedProp1(), dto.prop1);
    }
    ...
}

class WithDtoGeneratedFromXxx : AbstractDtoTest
{
    DTO createDto() { return GenerateDtoFromXxx(); }
    SomeType getExpectedProp1() { return new SomeType(); }
    ...
}
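On option 3 above: a hedged sketch (with a hypothetical PersonDto) of equals() and toString() implementations that let a single assertEquals on the whole object still reveal which field differs:

import java.util.Objects;

class PersonDto {

    final String name;
    final int age;

    PersonDto(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof PersonDto)) return false;
        PersonDto d = (PersonDto) o;
        return Objects.equals(name, d.name) && age == d.age;
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, age);
    }

    // A readable toString makes the assertion failure message self-explanatory.
    @Override
    public String toString() {
        return "PersonDto{name=" + name + ", age=" + age + "}";
    }
}

With this in place, assertEquals(expectedDto, actualDto) prints both toString representations on failure, so you can see the offending field at a glance.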
Or you can use a workaround like this:
import junit.framework.Assert;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class NewEmptyJUnitTest {

    public NewEmptyJUnitTest() {
    }

    @BeforeClass
    public static void setUpClass() throws Exception {
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
    }

    @Before
    public void setUp() {
    }

    @After
    public void tearDown() {
    }

    @Test
    public void checkMultipleValues() {
        String errMessages = "";
        try {
            this.checkProperty1("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty2("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty3("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        Assert.assertTrue(errMessages, errMessages.isEmpty());
    }

    private boolean checkProperty1(String propertyValue, String expectedValue) throws Exception {
        // use equals(), not ==, to compare string contents
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property1 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty2(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property2 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty3(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property3 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }
}
Maybe not the best approach, and if overused it can confuse... but it is a possibility.

How does unit testing work in Salesforce?

I've finished writing code on Salesforce, and in order to release it, the unit tests have to cover at least 75% of the code.
What I am facing is that classOne, which calls methods from classTwo, also has to cover classTwo's code within classOne's tests, even though it is already covered in the classTwo file.
File MyClassTwo
public with sharing class ClassTwo {
    public String method1() {
        return 'one';
    }

    public String method2() {
        return 'two';
    }

    public static testMethod void testMethod1() {
        ClassTwo two = new ClassTwo();
        String out = two.method1();
        system.assertEquals(out, 'one'); // valid
    }

    public static testMethod void testMethod2() {
        ClassTwo two = new ClassTwo();
        String out = two.method2();
        system.assertEquals(out, 'two'); // valid
    }
}
File MyClassOne
public with sharing class ClassOne {
    public String callClassTwo() {
        ClassTwo foo = new ClassTwo();
        String something = foo.method1();
        return something;
    }

    public static testMethod void testCallClassTwo() {
        ClassOne one = new ClassOne();
        String out = one.callClassTwo();
        system.assertEquals(out, 'one');
    }
}
The result of testing MyClassOne does not show 100% test coverage, because it says I have not covered MyClassTwo's method2() from within the MyClassOne file.
But I already wrote a unit test for MyClassTwo inside the MyClassTwo file, as you can see.
So does this mean I have to copy and paste the unit tests from MyClassTwo over to MyClassOne?
Doing so gives me 100% coverage, but this seems really annoying and ridiculous. Having the same test in ClassA and ClassB...? Am I doing something wrong, or is this the way?
Having said that, is it possible to create mock objects in Salesforce? I haven't figured out how yet.
http://sites.force.com/answers/ideaView?c=09a30000000D9xt&id=087300000007m3fAAA&returnUrl=/apex/ideaList%3Fc%3D09a30000000D9xt%26category%3DApex%2B%2526%2BVisualforce%26p%3D19%26sort%3Dpopular
UPDATE
I re-wrote the code and updated it above; this time the classOne test definitely would not return 100%, even though it is not calling classTwo's method2().
Comments about Java mock libraries aren't very helpful in the Salesforce world ;) On my projects we usually aim for making our own test data in the test method, calling the real functionality, and checking the results... and the whole test framework on the Salesforce side is responsible for transaction rollback (so no test data is saved to the DB in the end, regardless of whether the test failed or passed).
Anyway...
Masato, your classes do not compile (methods outside class scope, public String hello() without any String returned)... After I fixed them, I simply right-clicked MyClassA -> Force.com -> run tests and got full code coverage without any problems, so your issue must lie somewhere else...
Here's how it looks: http://dl.dropbox.com/u/709568/stackoverflow/masato_code_coverage.png
I'm trying to think what might have gone wrong... are you sure all classes compile and were saved on the server side? Did you put the test methods in the same classes as the functionality, or in separate ones? (Generally I make a separate class with a similar name, like MyClassATest.) If it's a separate class - which file did you click "run tests" on?
Last but not least - if you're facing this issue during deployment from sandbox to production, make sure you selected all the classes you need in the deployment wizard.
If you really want to "unit" test, you should test the behavior of your class B AND the behavior of your class A, mocking the call to the class B method.
That's a tough conversation between mock lovers and others (Martin Fowler, I think, is not a "mocker").
Anyway. You should stop thinking about 100% coverage. You should think about:
Why am I testing?
How am I testing?
Here, I'd definitely go for 2 tests:
One test for the B class, in the B class test file, to be sure the B method is well implemented, with all the side effects, side values, etc.
One test for the A class, mocking class B.
What is a mock?
To stay VERY simple: a mock is a portion of code in your test which says: when the B class method is called, always return this value: "+++".
By doing this, you keep your test suite maintainable and modular.
In Java, I love Mockito: http://mockito.org/
Although one of my colleagues is the lead maintainer of EasyMock: http://easymock.org/
Hope this helps. Ask me if you need further help.
EDIT: an example with Java and Mockito:
public class aUTest {

    protected A a;
    @Mock protected B b;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
        a = new A();
        ReflectionTestUtils.setField(a, "b", b);
    }

    @Test
    public void test_A_method_should_not_throw_exception() {
        when(b.execute()).thenReturn(true); // just an example of a return value from b.execute()
        Boolean result = a.testHello();
        // Assert
        Assert.assertEquals(true, result);
    }
}
I created an Apex class called TestHelper for all my mock objects. I use constants (static final) for values that I might need elsewhere and public static fields for objects. Works great and since no methods are used, no test coverage is needed.
public without sharing class TestHelper {

    public static final string testPRODUCTNAME = 'test Product Name';
    public static final string testCOMPANYID = '2508';

    public static Account testAccount {
        get {
            Account tAccount = new Account(
                Name = 'Test Account',
                BillingStreet = '123 Main St',
                BillingCity = 'Dallas',
                BillingState = 'TX',
                BillingPostalCode = '75234',
                Website = 'http://www.google.com',
                Phone = '222 345 4567',
                Subscription_Start_Date__c = system.today(),
                Subscription_End_Date__c = system.today().addDays(30),
                Number_Of_Seats__c = 1,
                companyId__c = testCOMPANYID,
                ZProduct_Name__c = testPRODUCTNAME);
            insert tAccount;
            return tAccount;
        }
    }
}

How to mock a method which also belongs to the target class itself?

Let's say we are testing a class C which has 2 methods M1 and M2 where M1 calls M2 when executed.
Testing M2 is OK, but how can we test M1? The difficulty is that we need to mock M2, if I'm not misunderstanding things.
If so, how can we mock another method while testing a method defined in the same class?
[Edit]
Class C has no base classes.
You can do this by setting the CallBase property on the mock to true.
For example, if I have this class:
public class Foo
{
    public virtual string MethodA()
    {
        return "A";
    }

    public virtual string MethodB()
    {
        return MethodA() + "B";
    }
}
I can set up MethodA and call MethodB:
[Fact]
public void RunTest()
{
    Mock<Foo> mockFoo = new Mock<Foo>();
    mockFoo.Setup(x => x.MethodA()).Returns("Mock");
    mockFoo.CallBase = true;

    string result = mockFoo.Object.MethodB();

    Assert.Equal("MockB", result);
}
You should let the call from M1 pass through to the real M2 method.
In general, you should be testing the black box behaviour of your classes. Your tests shouldn't care that M1 happens to call M2 - this is an implementation detail.
This isn't the same as mocking external dependencies (which you should do)...
For example, say I have a class like this:
class AccountMerger
{
    public AccountMerger(IAccountDao dao)
    {
        this.dao = dao;
    }

    public void Merge(Account first, Account second, MergeStrategy strategy)
    {
        // merge logic goes here...
        // [...]
        dao.Save(first);
        dao.Save(second);
    }

    public void Merge(Account first, Account second)
    {
        Merge(first, second, MergeStrategy.Default);
    }

    private readonly IAccountDao dao;
}
I want my tests to show that:
Calling Merge(first, second, strategy) results in two accounts getting saved that have been merged using the supplied rule.
Calling Merge(first, second) results in two accounts getting saved that have been merged using the default rule.
Note that both of these requirements are phrased in terms of inputs and effects - in particular, I don't care how the class achieves these results, as long as it does.
The fact that the second method happens to use the first isn't something I care about, or even that I want to enforce - if I do so, I'll write very brittle tests. (There's even an argument that if you've messed about with the object under test using a mocking framework, you're not even testing the original object any more, so what are you testing?) This is an internal dependency that could quite happily change without breaking the requirements:
// ...
// refactored AccountMerger methods
// these still work, and still fulfil the two requirements above
public void Merge(Account first, Account second, MergeStrategy strategy)
{
    MergeAndSave(first, second, strategy ?? MergeStrategy.Default);
}

public void Merge(Account first, Account second)
{
    // note this no longer calls the other Merge() method
    MergeAndSave(first, second, MergeStrategy.Default);
}

private void MergeAndSave(Account first, Account second, MergeStrategy strategy)
{
    // merge logic goes here...
    // [...]
    dao.Save(first);
    dao.Save(second);
}
// ...
As long as my tests only check inputs and effects, I can easily make this kind of refactoring - my tests will even help me to do so, as they make sure I haven't broken the class while making changes.
On the other hand, I do care about the AccountMerger using the IAccountDao to save accounts following a merge (although the AccountMerger shouldn't care about the implementation of the DAO, only that it has a Save() method). This DAO is a prime candidate for mocking - an external dependency of the AccountMerger class, feeling an effect I want to check for certain inputs.
You shouldn't mock methods in the target class, you should only mock external dependencies.
If it seems to make sense to mock M2 while testing M1, it often means that your class is doing too many things. Consider splitting the class: keep M2 in one class and move M1 to a higher-level class which uses the class containing M2. Then mocking M2 is easy, and your code will actually become better.
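A hedged sketch of that split, in Java with Mockito purely for illustration (Inner and Outer are hypothetical names): M2 stays in one class, M1 moves to a class that receives the first as a dependency, and the mock becomes ordinary dependency mocking.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OuterTest {

    // Hypothetical home of the old M2.
    public static class Inner {
        public String m2() { return "real"; }
    }

    // Hypothetical home of the old M1; it now reaches M2 through a dependency.
    public static class Outer {
        private final Inner inner;
        public Outer(Inner inner) { this.inner = inner; }
        public String m1() { return inner.m2() + "!"; }
    }

    @Test
    public void m1DelegatesToM2() {
        Inner inner = mock(Inner.class); // mocking M2 is now straightforward
        when(inner.m2()).thenReturn("mocked");
        assertEquals("mocked!", new Outer(inner).m1());
    }
}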

JUnit for Functions with Void Return Values

I've been working on a Java application where I have to use JUnit for testing. I am learning it as I go. So far I find it to be useful, especially when used in conjunction with the Eclipse JUnit plugin.
After playing around a bit, I developed a consistent method for building my unit tests for functions with no return values. I wanted to share it here and ask others to comment. Do you have any suggested improvements or alternative ways to accomplish the same goal?
Common Return Values
First, there's an enumeration which is used to store values representing test outcomes.
public enum UnitTestReturnValues
{
    noException,
    unexpectedException
    // etc...
}
Generalized Test
Let's say a unit test is being written for:
public class SomeClass
{
    public void targetFunction (int x, int y)
    {
        // ...
    }
}
The JUnit test class would be created:
import junit.framework.TestCase;

public class TestSomeClass extends TestCase
{
    // ...
}
Within this class, I create a function which is used for every call to the target function being tested. It catches all exceptions and returns a value describing the outcome. For example:
public class TestSomeClass extends TestCase
{
    private UnitTestReturnValues callTargetFunction (int x, int y)
    {
        UnitTestReturnValues outcome = UnitTestReturnValues.noException;
        SomeClass testObj = new SomeClass ();
        try
        {
            testObj.targetFunction (x, y);
        }
        catch (Exception e)
        {
            outcome = UnitTestReturnValues.unexpectedException;
        }
        return outcome;
    }
}
JUnit Tests
Functions called by JUnit begin with a lowercase "test" in the function name, and they fail at the first failed assertion. To run multiple tests on the targetFunction above, it would be written as:
public class TestSomeClass extends TestCase
{
    public void testTargetFunctionNegatives ()
    {
        assertEquals (
            callTargetFunction (-1, -1),
            UnitTestReturnValues.noException);
    }

    public void testTargetFunctionZeros ()
    {
        assertEquals (
            callTargetFunction (0, 0),
            UnitTestReturnValues.noException);
    }

    // and so on...
}
Please let me know if you have any suggestions or improvements. Keep in mind that I am in the process of learning how to use JUnit, so I'm sure there are existing tools available that might make this process easier. Thanks!
It is true that if you are using JUnit 3, and you are testing whether a particular exception is thrown or not thrown within a method, you will need to use something like the try-catch pattern you define above.
However:
1) I'd argue that there is a lot more to testing a method with a void return value than checking for exceptions: is your method making the correct calls to (presumably mocked) dependencies; does it behave differently when the class is initialized with a different context or different sets of dependencies; etc. By wrapping all calls to that method, you make it hard to change other aspects of your test. (A sketch of the mocked-dependency style follows after the next code example.)
I'm also generally opposed to adding code and adding complexity if it can be avoided; I don't think it's a burden to have to put a try/catch in a given test when it's checking for exceptions.
2) Switch to JUnit 4! It makes it easy to check for expected exceptions:
@Test(expected=IndexOutOfBoundsException.class)
public void testIndexOutOfBoundsException() {
    ArrayList emptyList = new ArrayList();
    Object o = emptyList.get(0);
}
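On point 1 above: a hedged sketch (Mockito, with hypothetical AuditLog and Service names) of testing a void method through the calls it makes on a mocked dependency, rather than through return values or exceptions:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class ServiceTest {

    // Hypothetical collaborator the void method talks to.
    public interface AuditLog {
        void record(String event);
    }

    // Hypothetical class under test: doWork() returns nothing, so its only
    // observable behaviour is the call it makes on the AuditLog.
    public static class Service {
        private final AuditLog log;
        public Service(AuditLog log) { this.log = log; }
        public void doWork() { log.record("work-done"); }
    }

    @Test
    public void doWorkRecordsAuditEvent() {
        AuditLog log = mock(AuditLog.class);
        new Service(log).doWork();
        verify(log).record("work-done"); // the effect we assert on
    }
}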
If you have the possibility, you should upgrade to JUnit 4.x.
Then your first example can be rewritten to:
@Test(expected=RuntimeException.class)
public void testTargetFunction() {
    testObj.targetFunction(x, y);
}
The advantage here is that you can remove the private UnitTestReturnValues callTargetFunction(int x, int y) method and use JUnit's built-in support for expected exceptions.
You should also test for specific exceptions instead.
Looks like you reimplemented most of JUnit :) In general you don't need to do that. You just call the function you want to test and compare the results. If it throws an exception, JUnit will catch it for you and fail the test. If you expect an exception, you can either use the explicit annotation if you are using JUnit 4, or use the following pattern:
public void testThrows()
{
    try {
        obj.DoSth(); // this should throw MyException
        fail("Expected exception");
    } catch (MyException e) {
        // assert the message etc.
    }
}
Again, if obj.DoSth() throws a different exception, JUnit will fail the test.
So to sum up, I am afraid I believe your approach is overcomplicated, sorry.
Please correct me if I am wrong. As I understood from the provided code, you're only checking whether an exception occurs while executing the function. But you're not actually verifying that the called function "works" correctly, unless the only way it can fail is with an exception. I suggest writing additional tests like this:
public void testTargetFunctionSomeValue() {
    int someValue = 0;
    callTargetFunction(someValue, someValue);
    assertTrue(verifyTargetFunction(someValue, someValue));
}

public boolean verifyTargetFunction(int someValueX, int someValueY) {
    // verify that execution of targetFunction made the expected changes.
    . . . . .
}
and verifyTargetFunction would actually check whether calling targetFunction made the expected changes - say, to a database table - by returning true or false.
Hope that helps.
Cheers,
Markus