Unit test strategy: redundancy when using black-box unit testing

I'm having an issue designing black-box unit tests without redundancy.
Here is an example:
class A {
    float operationA(int aNumber) {
        if (aNumber > 0) {
            return aNumber * 10 + 5.2f;
        } else if (aNumber < 0) {
            return aNumber * 7 - 5.2f;
        } else {
            return aNumber * 78 + 9.3f;
        }
    }
}
class B {
    boolean status = true;
    A a = new A();
    float operationB(int theNumber) {
        if (status) {
            return a.operationA(theNumber);
        }
        return 0.0f; // fallback when status is false
    }
}
In order to test A.operationA() correctly, I would have to write at least three unit tests (aNumber = 0, aNumber > 0 and aNumber < 0).
Now let's say I want to test B.operationB() using a black-box strategy. Should I re-write the same three unit tests (theNumber = 0, theNumber > 0 and theNumber < 0)? In that case, I would have to create a lot of tests every time I use the method A.operationA()...
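For illustration, the three tests for A.operationA() could look roughly like this in JUnit (the expected values simply follow from the formulas above):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ATest {
    @Test
    public void positiveNumber() {
        assertEquals(15.2f, new A().operationA(1), 0.001f);
    }

    @Test
    public void negativeNumber() {
        assertEquals(-12.2f, new A().operationA(-1), 0.001f);
    }

    @Test
    public void zero() {
        assertEquals(9.3f, new A().operationA(0), 0.001f);
    }
}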

If the black-box constraint can be loosened, you can remove all the duplication. I really like Jay Fields' definitions of solitary vs. sociable unit tests, explained here.
It should be trivial to test class A in isolation. It has no side effects and no collaborators. Ideally class B could also be tested in isolation (solitary), with its collaborator, class A, stubbed out. Not only does this let you exercise class B in isolation, it helps control cascading failures: if class B is tested against the real class A, a change in class A could cause failures in class B's tests.
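For example, a solitary test of class B with class A stubbed out (here using Mockito, and assuming the test can swap in the collaborator via a field, setter or constructor) might look roughly like this:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;

public class BTest {
    @Test
    public void operationBDelegatesToAWhenStatusIsTrue() {
        A stubbedA = mock(A.class);
        when(stubbedA.operationA(5)).thenReturn(42.0f);

        B classUnderTest = new B();
        classUnderTest.a = stubbedA; // assumes the field (or a setter) is accessible to the test

        assertEquals(42.0f, classUnderTest.operationB(5), 0.001f);
        verify(stubbedA).operationA(5);
    }
}
Such a test only pins down the delegation inside class B; the three input ranges of operationA() remain covered by A's own tests and don't need to be repeated.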
At some point the collaboration (sociable tests) should probably be checked as well; a couple of ways to do that:
a single sociable test that calls B through its public interface and triggers the default case in class A
higher-level tests that exercise a specific user story or external flow path that goes through class B
Sorry, I didn't answer your direct question.


NUnit 3.X - How to pass dynamic parameters into a TestCase or TestCaseSource?

CGrunddaten m_grdDaten;

[SetUp]
public void Init()
{
    m_grdDaten = new CGrunddaten();
    m_grdDaten.m_cwdGeoH.m_dW = 325.0;
    m_grdDaten.m_cwd_tl.m_dW = 15;
}

[Test]
public void TestMethod()
{
    m_grdDaten.RechGrdDaten();
    Assert.That(m_grdDaten.m_cwd_pl.m_dW, Is.EqualTo(93344).Within(.1), "Außenluftdruck");
    Assert.That(m_grdDaten.m_cwd_pl_neb.m_dW, Is.EqualTo(93147.3).Within(.1), "Außenluftdruck Nebenluftberechnung");
    Assert.That(m_grdDaten.m_cwd_pl_pmax.m_dW, Is.EqualTo(92928.2).Within(.1), "Außenluftdruck max. zul. Unterdruck");
    Assert.That(m_grdDaten.m_cwdRho_l.m_dW, Is.EqualTo(1.124).Within(.001), "Dichte Außenluft");
    Assert.That(m_grdDaten.m_cwdRho_l_neb.m_dW, Is.EqualTo(1.184).Within(.001), "Dichte Außenluft Nebenluftberechnung");
    Assert.That(m_grdDaten.m_cwdRho_l_pmax.m_dW, Is.EqualTo(1.249).Within(.001), "Dichte Außenluft max. zul. Unterdruck");
}
Is there a way to get this into a TestCase or TestCaseSource, so that I only have one Assert line?
I'm talking about this:
m_grdDaten.m_cwd_pl.m_dW, 93344
m_grdDaten.m_cwd_pl_neb.m_dW, 93147.3
m_grdDaten.m_cwd_pl_pmax.m_dW, 92928.2
....
I know that TestCase and TestCaseSource have to be static... but is there another way?
The best way to do this test would be using the not-yet-implemented multiple asserts feature, so that all the asserts would run even if some failed.
Since that's not available yet, I can understand your wanting to make this into multiple tests, where each gets reported separately. Using test cases makes this possible, of course, even though this is really logically just one test.
The fact that a test case source method must be static doesn't prevent it from creating an instance of your CGrunddaten class. The tests themselves are all just comparing two doubles for equality and don't need to know anything about that class.
You could write something like this:
private static IEnumerable<TestCaseData> GrundDatenDaten
{
    get
    {
        var gd = new CGrunddaten();
        gd.m_cwdGeoH.m_dW = 325.0;
        gd.m_cwd_tl.m_dW = 15;
        gd.RechGrdDaten();
        yield return new TestCaseData(gd.m_cwd_pl.m_dW, 93344, .1, "Außenluftdruck");
        // and so on
    }
}

[TestCaseSource("GrundDatenDaten")]
public void testMethod(object actual, object expected, object tolerance, string label)
{
    Assert.That(actual, Is.EqualTo(expected).Within(tolerance), label);
}
However, I don't like that very much as it hides the true function of the test in the data source. I think your original formulation is the best way to do it for now and leaves you with the ability to include the code in an Assert.Multiple block once that feature is implemented.

Should I add features in a class just to make it testable?

I am still trying to get the hang of unit testing, and I have a simple question. Today I wanted to write a test for a very simple function. This function does just this:
void OnSomething()
{
    increment++;
    if (increment == 20)
        SaveIt();
}
I thought: this function should be testable. I could write a test that calls it 20 times and then verifies that SaveIt has been called.
Then my doubt arose. How can I test that SaveIt has been called? My first answer was to add a boolean, but then I thought: is it correct to add class features just to make it testable?
Please advise. Thank you.
I would suggest having SaveIt return a success or failure result; this just makes it easier to test overall. You could do something as simple as having it return a bool, or you could create a generic result class that can also carry messages, in case you ever need to report why it passed or failed.
A simple example:
public class Result
{
    public bool IsSuccess;
    public List<string> Messages;
}
In the unit test you're trying to test only the OnSomething behavior though -- what happens inside "SaveIt" should not be tested. So ideally you'd want SaveIt() to occur in another class so you can mock its response.
I use Moq for this purpose. Moq is free, you can get it here: http://code.google.com/p/moq/
my method would then become
Result OnSomething()
{
    Result result = null;
    increment++;
    if (increment == 20)
    {
        result = saver.SaveIt();
    }
    return result;
}
Your class constructor would take an object that implements ISaver interface (defining SaveIt() method) (ideally injected by a DI framework but you could generate it manually if you had to).
Now in your unit test you would create a mock version of ISaver and tell it what to return when it gets called:
Mock<ISaver> mock = new Mock<ISaver>();
mock.Setup(x => x.SaveIt()).Returns(new Result { IsSuccess = true });
You'd instantiate your class passing mock.Object in the constructor ISaver parameter.
ex.
MyClass myClass = new MyClass(mock.Object);
//(assuming it didn't have other parameters)
Then, you could Assert whether result is null or not -- if it never got called, it would be null because the setup you did above would never trigger.
(in NUnit)
Result result = myClass.OnSomething();
Assert.IsNotNull(result);
If you really didn't want OnSomething() to return a result, or it couldn't because it's an event, then I would have OnSomething() call a method to do the work for you:
void OnSomething()
{
    Result result = DoTheWork();
}

Result DoTheWork()
{
    Result result = null;
    increment++;
    if (increment == 20)
    {
        result = saver.SaveIt();
    }
    return result;
}
And then run your unit test on DoTheWork() instead of OnSomething().
Definitely not! Production code should not depend on tests at all; the tests should verify the correct behaviour of the actual code. This can be achieved by several techniques, such as IoC and mocks. You can take a look at some existing frameworks which simplify your life a lot:
http://code.google.com/p/mockito/
http://code.google.com/p/jmockit/
http://www.easymock.org/
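As a rough sketch of that idea with Mockito (the Saver interface and Counter class below are made up for illustration; the point is just that the saving side effect lives behind an injected collaborator):
import static org.mockito.Mockito.*;
import org.junit.Test;

public class CounterTest {
    // hypothetical collaborator that owns the real SaveIt() logic
    interface Saver {
        void saveIt();
    }

    // hypothetical class under test, with the collaborator injected
    static class Counter {
        private final Saver saver;
        private int increment;

        Counter(Saver saver) { this.saver = saver; }

        void onSomething() {
            increment++;
            if (increment == 20) {
                saver.saveIt();
            }
        }
    }

    @Test
    public void savesOnTheTwentiethCall() {
        Saver saver = mock(Saver.class);
        Counter counter = new Counter(saver);
        for (int i = 0; i < 20; i++) {
            counter.onSomething();
        }
        verify(saver, times(1)).saveIt();
    }
}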

How to avoid multiple asserts in a JUnit test?

I have a DTO which I'm populating from the request object, and the request object has many fields. I want to write a test to check whether the populateDTO() method puts the values in the right places or not. If I follow the rule of one assert per test, I would have to write a large number of tests to cover each field. The other approach would be to write multiple asserts in a single test. Is it really recommended to follow the one-assert-per-test rule, or can we relax it in cases like this? How should I approach this problem?
Keep them separate. A unit test is supposed to tell you which unit failed. Keeping them separate also allows you to isolate the problem quickly without going through a lengthy debug cycle.
Is it really recommended to have only one assert per unit test? Yes, there are people who make that recommendation. Are they right? I don't think so. I find it hard to believe such people have actually worked on real code for a long time.
So, imagine you have a mutator method you want to unit test. The mutator has some kind of effect, or effects, which you want to check. Typically the expected effects of a mutator are few in number, because many effects suggest an overly complicated design for the mutator. With one assert per effect and one test case per assert, you will not need many test cases per mutator, so the recommendation does not seem so bad.
The flaw in this reasoning is that those tests look only at the expected effects of the mutator. If the mutator has a bug in it, it might have unexpected, faulty side effects. The tests are making the foolish assumption that the code does not have a whole class of bugs, and that no future refactoring will introduce such bugs. When the method was originally written it might be obvious to the author that particular side effects were impossible, but refactoring and the addition of new functionality might make such side effects possible.
The only safe way to test long-lived code is to check that the mutators do not have unexpected side effects. But how can you test for those? Most classes have some invariants: things that no mutator can ever change. The size method of a container will never return a negative value, for example. Each invariant is, in effect, a postcondition for every mutator (and also the constructor). Each mutator also typically has a set of invariants that describe what kind of changes it does not make. A sort method does not change the length of the container, for example. The class and mutator invariants are, in effect, postconditions for every mutator call. Adding assertions for all of them is the only way of checking for unexpected side effects.
So, just add more test cases? In practice, the number of invariants multiplied by the number of mutators to test is large, so one assertion per test leads to many test cases, and the information about your invariants is scattered over many test cases. A design change that tweaks one invariant will require alteration of many test cases. It becomes impractical. It's better to have parameterised test cases for a mutator, which check several invariants for the mutator, using several assertions.
And the authors of JUnit 5 seem to agree. They provide assertAll for checking several assertions in one test case.
This construction lets you have one big assert (with small asserts inside):
import static org.junit.jupiter.api.Assertions.assertAll;

assertAll(
    () -> assertThat(actual1, is(expected)),
    () -> assertThat(actual2, is(expected))
);
You can have a parameterized test where the first parameter is the property name and the second is the expected value.
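For instance, a JUnit 5 sketch of that idea could look like the following (MyDto, populateDTO() and sampleRequest() stand in for the question's own request/DTO code and are assumptions here):
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class PopulateDtoTest {
    @ParameterizedTest(name = "{0}")
    @MethodSource("expectedFields")
    void fieldIsPopulated(String propertyName, Object expected, Object actual) {
        assertEquals(expected, actual, propertyName);
    }

    static Stream<Arguments> expectedFields() {
        MyDto dto = populateDTO(sampleRequest()); // placeholders for the question's own mapping code
        return Stream.of(
            Arguments.of("firstName", "Jane", dto.getFirstName()),
            Arguments.of("lastName", "Doe", dto.getLastName())
        );
    }
}
Each property then shows up as its own test case in the report, while the assertion logic lives in one place.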
Is that rule extended to being in a loop? Consider this
Collection expectedValues = // populate expected values
populateDTO();
for (DTO dto : myDtoContainer)
    assert_equal(dto, expectedValues.get(someIndexRelatedToDto))
Now I'm not so big on the exact syntax, but this is just the notion I'm looking at.
EDIT:
After the comments...
The answer is ... Nope!
The reason the principle exists is so you can identify which parts of the object fail. If you have all the asserts in one method, the test stops at the first failing assertion, and you never see the results of the ones after it.
So you can have it one of two ways:
One method, less boilerplate code.
Many methods, better reporting on the test run
It's up to you, both have ups and downs.
[caveat: I'm very "unfluent" in Java/JUnit, so beware of errors in the details below]
There's a couple of ways to do this:
1) Write multiple assertions in the same test. This should be ok if you are only testing the DTO generation once. You could start here, and move to another solution when this starts to hurt.
2) Write a helper assertion, e.g. assertDtoFieldsEqual, passing in the expected and actual DTO. Inside the helper assertion you assert each field separately. This at least gives you the illusion of only one assert per test and will make things clearer if you test DTO generation for multiple scenarios.
3) Implement equals for the object so that it checks each property, and implement toString so that you can at least inspect the assertion result manually to find out which part is incorrect.
4) For each scenario where the DTO is generated, create a separate test fixture that generates the DTO and initializes the expected properties in the setUp method. Then create a separate test for each of the properties. This also results in a lot of tests, but they will at least be one-liners. Example in pseudo-code:
public class WithDtoGeneratedFromXxx : TestFixture
{
    DTO dto = null;

    public void setUp()
    {
        dto = GenerateDtoFromXxx();
        expectedProp1 = "";
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(expectedProp1, dto.prop1);
    }
    ...
}
If you need to test the DTO generation under different scenarios and choose this last method, it could soon become tedious to write all those tests. If this is the case, you could implement an abstract base fixture that leaves the details of how to create the DTO and set up the expected properties to derived classes. Pseudo-code:
abstract class AbstractDtoTest : TestFixture
{
    DTO dto;
    SomeType expectedProp1;

    abstract DTO createDto();
    abstract SomeType getExpectedProp1();

    void setUp()
    {
        dto = createDto();
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(getExpectedProp1(), dto.prop1);
    }
    ...
}

class WithDtoGeneratedFromXxx : AbstractDtoTest
{
    DTO createDto() { return GenerateDtoFromXxx(); }
    SomeType getExpectedProp1() { return new SomeType(); }
    ...
}
Or you can use a workaround like this:
import org.junit.Assert;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class NewEmptyJUnitTest {

    public NewEmptyJUnitTest() {
    }

    @BeforeClass
    public static void setUpClass() throws Exception {
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
    }

    @Before
    public void setUp() {
    }

    @After
    public void tearDown() {
    }
    @Test
    public void checkMultipleValues() {
        String errMessages = "";
        try {
            this.checkProperty1("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty2("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty3("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        Assert.assertTrue(errMessages, errMessages.isEmpty());
    }
    private boolean checkProperty1(String propertyValue, String expectedValue) throws Exception {
        if (expectedValue.equals(propertyValue)) {
            return true;
        } else {
            throw new Exception("Property1 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty2(String propertyValue, String expectedValue) throws Exception {
        if (expectedValue.equals(propertyValue)) {
            return true;
        } else {
            throw new Exception("Property2 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty3(String propertyValue, String expectedValue) throws Exception {
        if (expectedValue.equals(propertyValue)) {
            return true;
        } else {
            throw new Exception("Property3 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }
}
Maybe not the best approach, and if overused it can get confusing... but it is a possibility.

How to mock a method which also belongs to the target class itself?

Let's say we are testing a class C which has 2 methods M1 and M2 where M1 calls M2 when executed.
Testing M2 is ok, but how can we test M1? The difficulty is that we need to mock M2 if I'm not misunderstanding things.
If so, how can we mock another method while testing a method defined in the same class?
[Edit]
Class C has no base classes.
You can do this by setting the CallBase property on the mock to true.
For example, if I have this class:
public class Foo
{
    public virtual string MethodA()
    {
        return "A";
    }

    public virtual string MethodB()
    {
        return MethodA() + "B";
    }
}
I can setup MethodA and call MethodB:
[Fact]
public void RunTest()
{
    Mock<Foo> mockFoo = new Mock<Foo>();
    mockFoo.Setup(x => x.MethodA()).Returns("Mock");
    mockFoo.CallBase = true;

    string result = mockFoo.Object.MethodB();

    Assert.Equal("MockB", result);
}
You should let the call to M1 pass through to a real instance of the M2 method.
In general, you should be testing the black box behaviour of your classes. Your tests shouldn't care that M1 happens to call M2 - this is an implementation detail.
This isn't the same as mocking external dependencies (which you should do)...
For example, say I have a class like this:
class AccountMerger
{
    public AccountMerger(IAccountDao dao)
    {
        this.dao = dao;
    }

    public void Merge(Account first, Account second, MergeStrategy strategy)
    {
        // merge logic goes here...
        // [...]
        dao.Save(first);
        dao.Save(second);
    }

    public void Merge(Account first, Account second)
    {
        Merge(first, second, MergeStrategy.Default);
    }

    private readonly IAccountDao dao;
}
I want my tests to show that:
Calling Merge(first, second, strategy) results in two accounts getting saved that have been merged using the supplied rule.
Calling Merge(first, second) results in two accounts getting saved that have been merged using the default rule.
Note that both of these requirements are phrased in terms of inputs and effects - in particular, I don't care how the class achieves these results, as long as it does.
The fact that the second method happens to use the first isn't something I care about, or even that I want to enforce - if I do so, I'll write very brittle tests. (There's even an argument that if you've messed about with the object under test using a mocking framework, you're not even testing the original object any more, so what are you testing?) This is an internal dependency that could quite happily change without breaking the requirements:
// ...
// refactored AccountMerger methods
// these still work, and still fulfil the two requirements above
public void Merge(Account first, Account second, MergeStrategy strategy)
{
    MergeAndSave(first, second, strategy ?? MergeStrategy.Default);
}

public void Merge(Account first, Account second)
{
    // note this no longer calls the other Merge() method
    MergeAndSave(first, second, MergeStrategy.Default);
}

private void MergeAndSave(Account first, Account second, MergeStrategy strategy)
{
    // merge logic goes here...
    // [...]
    dao.Save(first);
    dao.Save(second);
}
// ...
As long as my tests only check inputs and effects, I can easily make this kind of refactoring - my tests will even help me to do so, as they make sure I haven't broken the class while making changes.
On the other hand, I do care about the AccountMerger using the IAccountDao to save accounts following a merge (although the AccountMerger shouldn't care about the implementation of the DAO, only that it has a Save() method). This DAO is a prime candidate for mocking: an external dependency of the AccountMerger class, feeling an effect I want to check for certain inputs.
You shouldn't mock methods in the target class, you should only mock external dependencies.
If it seems to make sense to mock M2 while testing M1, it often means that your class is doing too many things. Consider splitting the class: keep M2 in one class and move M1 to a higher-level class which uses the class containing M2. Then mocking M2 is easy, and your code will actually become better.
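A rough Java sketch of that kind of split (the class names and the stand-in logic below are made up, with Mockito used for the mock) might look like this:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;

// the lower-level class keeps M2
class Lower {
    int m2(int x) {
        return x * 2; // stand-in logic
    }
}

// the higher-level class keeps M1 and uses Lower as a collaborator
class Higher {
    private final Lower lower;

    Higher(Lower lower) {
        this.lower = lower;
    }

    int m1(int x) {
        return lower.m2(x) + 1; // stand-in logic
    }
}

public class HigherTest {
    @Test
    public void m1BuildsOnWhatM2Returns() {
        Lower lower = mock(Lower.class);
        when(lower.m2(3)).thenReturn(6);

        assertEquals(7, new Higher(lower).m1(3));
    }
}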

Splitting a test to a set of smaller tests

I want to be able to split a big test into smaller tests so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me achieve this test splitting in a robust way.
I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test.
An example of what I am aiming at:
//Class under test
class A {
    public void setB(B b) { this.b = b; }

    public Output process(Input i) {
        return b.process(doMyProcessing(i));
    }

    private InputFromA doMyProcessing(Input i) { .. }
    ..
}

//Another class under test
class B {
    public Output process(InputFromA i) { .. }
    ..
}
//The Big Test
@Test
public void theBigTest() {
    A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive
    Input i = createInput();
    Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive
    assertEquals(o, expectedOutput());
}
//The split tests
@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1() {
    // this method is a bit too long but it's just an example..
    Input i = createInput();
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow
    B b = mock(B.class);
    when(b.process(x)).thenReturn(expected);
    A classUnderTest = createInstanceOfClassA();
    classUnderTest.setB(b);
    Output o = classUnderTest.process(i);
    assertEquals(o, expected);
    verify(b).process(x);
    verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2() {
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow
    B classUnderTest = createInstanceOfClassB();
    Output o = classUnderTest.process(x);
    assertEquals(o, expected);
}
The first suggestion that I'll make is to re-factor your tests on red (failing). To do so, you'll have to break your production code temporarily. This way, you know the tests are still valid.
One common pattern is to use a separate test fixture per collection of "big" tests. You don't have to stick to the "all tests for one class in one test class" pattern. If a set of tests are related to each other, but unrelated to another set of tests, then put them in their own class.
The biggest advantage to using a separate class to hold the individual small tests for the big test is that you can take advantage of setup and tear-down methods. In your case, I would move the lines you have commented with:
// this should be the same in both tests and it should be ensured somehow
to the setup method (in JUnit, a method annotated with @Before). If you have some unusually expensive setup that needs to be done, most xUnit testing frameworks have a way to define a setup method that runs once before all of the tests. In JUnit, this is a public static void method that has the @BeforeClass annotation.
If the test data is immutable, I tend to define the variables as constants.
Putting all this together, you might have something like:
public class TheBigTest {

    // If InputFromA is immutable, it could be declared as a constant
    private InputFromA x;

    // If Output is immutable, it could be declared as a constant
    private Output expected;

    // You could use
    // @BeforeClass public static void setupExpectations()
    // instead if it is very expensive to setup the data
    @Before
    public void setUpExpectations() throws Exception {
        x = expectedInputFromA();
        expected = expectedOutput();
    }

    @Test
    public void smallerTest1() {
        // this method is a bit too long but it's just an example..
        Input i = createInput();
        B b = mock(B.class);
        when(b.process(x)).thenReturn(expected);
        A classUnderTest = createInstanceOfClassA();
        classUnderTest.setB(b);
        Output o = classUnderTest.process(i);
        assertEquals(o, expected);
        verify(b).process(x);
        verifyNoMoreInteractions(b);
    }

    @Test
    public void smallerTest2() {
        B classUnderTest = createInstanceOfClassB();
        Output o = classUnderTest.process(x);
        assertEquals(o, expected);
    }
}
All I can suggest is the book xUnit Test Patterns. If there is a solution it should be in there.
theBigTest is missing the dependency on B. Also, smallerTest1 mocks the B dependency. In smallerTest2 you should mock InputFromA.
Why did you create a dependency graph like you did?
A takes a B; then, when A processes an Input, the resulting InputFromA is post-processed by B.
Keep the big test and refactor A and B to change the dependency mapping.
[EDIT] in response to remarks.
@mkorpela, my point is that looking at the code and its dependencies is how you start to get an idea of how to create smaller tests. A has a dependency on B. In order for it to complete its process() it must use B's process(). Because of this, B has a dependency on A.