Reusing business logic in a test class to save time

Just having a conversation with someone in the office about using a business logic class to build up some data in order to test another class.
Basically, he has class A which takes a complex type as a parameter and then generates a collection of a different complex type as a result. He's written tests around this class already. Now he's moved on to testing another class (class B) which takes the result of class A then performs some logic on it.
He's asked the question, "should I use class A to build up a scenario to test class B with".
At first I said yes, since class A has tests around it. But then I figured: what if there are bugs in class A that we haven't found yet... so I guess there must be a better way to address this scenario.
Does anyone have any thoughts on this? Is it OK to use existing logic to save time writing other tests?
Regards,
James

Stances on this might differ. Generally, if code is tested and you assume it works, you're free to use it. This is especially true when using already-tested methods to help test others within a single class/unit. However, because that happens within a single unit, it's a bit different from your case.
When dealing with two separate classes, I'd say you should avoid such an approach, purely because those two classes might not be related in any obvious way, or their context/scope of usage might differ vastly. Somebody might change class A without even knowing class B exists, and class B's tests suddenly break even though no changes were made to B's code. This brings unnecessary confusion and is a situation you usually don't want to find yourself in.
Instead, I suggest creating a helper method within class B's test file. With stubs/fakes and tools like AutoFixture, you should be able to easily reproduce the generation logic used by class A and have your own "copy" contained in class B's tests.
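For illustration, a minimal sketch of such a helper in C#/NUnit, assuming class B consumes a list of some Person type; Person, ClassB and DoSomething are made-up names standing in for the real types and methods:
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ClassBTests
{
    // Local "copy" of the kind of output class A would produce; class A itself
    // is never invoked here.
    private static List<Person> CreateProcessedPeople()
    {
        return new List<Person>
        {
            new Person { Id = 1, Name = "Alice" },
            new Person { Id = 2, Name = "Bob" },
        };
    }

    [Test]
    public void ProcessesKnownInput()
    {
        var input = CreateProcessedPeople();
        var sut = new ClassB();

        var result = sut.DoSomething(input);

        Assert.IsNotNull(result); // replace with assertions about B's real behaviour
    }
}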

In order to test class B, the result returned by class A should be replicated somewhere in your test. If class A returns a list of persons, you will have a helper function in your test returning a fake List<Person> to use for your test.
Because you are only testing class B, class A should not be used in your test.
NUnit provides built-in functionality for supplying data to your tests; have a look at:
http://www.nunit.org/index.php?p=testCaseSource&r=2.5
Or you can simply create a DataFactory class with methods that return the data (simple objects, collections, etc.) you will consume in your tests.
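As a rough sketch of the TestCaseSource approach (NUnit 3 attribute syntax shown here; Person, ClassB and DoSomething are invented names):
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ClassBDataDrivenTests
{
    // DataFactory-style source: each TestCaseData is one pre-built input for class B.
    private static IEnumerable<TestCaseData> PersonLists()
    {
        yield return new TestCaseData(new List<Person> { new Person { Name = "Alice" } })
            .SetName("SinglePerson");
        yield return new TestCaseData(new List<Person>())
            .SetName("EmptyList");
    }

    [TestCaseSource(nameof(PersonLists))]
    public void HandlesInput(List<Person> people)
    {
        var result = new ClassB().DoSomething(people);

        Assert.IsNotNull(result); // replace with the assertions that matter for class B
    }
}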

Related

Should I use inherited tests?

I am reviewing some code where the developer has some classes ClassA and ClassB. They both inherit from ParentClass so both must implement a number of abstract methods (e.g. return5Values(2))
In ClassA the values are all double the previous value: [2,4,8,16,32]
In ClassB the values are all +1 the previous value [2,3,4,5,6]
There are also other constraints such as raising an error if the parameter is negative etc.
Other tests like getting the 3rd value only, also exist etc.
(Obviously these are just fake examples to get my point across)
Now, instead of writing a lot of similar tests for both ClassA and ClassB, what the developer has done is create ParentClassChildTests, which contains some code like this:
public void testVariablesAreCorrect() {
    returnedValues = clazz.return5Values(2);
    // Does a bunch of other things as well
    // ...
    assertEquals(expectedValues, returnedValues);
}
ClassATests now inherits from ParentClassChildTests and must define expectedValues as a class variable.
The expectedValues are used within a few different tests as well, so they aren't being defined just for this single test.
Now when ClassATests and ClassBTests are run, it also runs all the tests inside ParentClassChildTests.
My question is: Is this a good method to avoid a lot of duplicate tests and ensure everything works as expected in child classes? Are there any major issues this can lead to? Or a better way of handling this?
Whilst this is all Java code, my question isn't about any particular testing framework or language but the idea in general of inheriting from a parent class which also has tests in it.
Situations where it is possible and sensible to re-use tests for different implementations of an interface or base class are not very common. The following aspects limit the applicability:
Derived classes have different dependencies, which may require different mocks to be created and set up. In such a case, the test methods cannot be identical. Even if ClassA and ClassB currently have no dependencies, or the same dependencies with (coincidentally) the same setup, this can change over time, or the next class, ClassC, will have different dependencies.
Each derived class will implement a different algorithm. In your case, return5Values performs different algorithms in ClassA and ClassB. Because of this, the behaviour of the SUT for the same setup and the same inputs may differ: for example, each algorithm will run into overflows at different points. Even the call return5Values(2), which allows a derived test to be shared by ClassA and ClassB today, could lead to an overflow scenario with possible exceptions thrown for a potential future ClassC.
The different algorithms implemented in the derived classes will have different potential bugs and different corner cases. That is, the setup and inputs necessary to stimulate the respective boundaries of the SUT will have to differ. For some implementations, testing the call return5Values(2) may simply not bring any benefit, while tests for parameters other than 2 are necessary.
If you share test methods between the classes and only provide the parameters, it is not the test method that carries the test's intent - each parameter set has its own intent. The intent/scenario, however, should ideally be part of the output of each individual test.
Given all these problems, inheritance of test methods does not seem to be the best approach for re-use here. Instead, it may be more beneficial to have some common helper functions that can be used by the different derived classes' tests.
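For example, a rough sketch of the helper-function idea (shown here in C#/NUnit for brevity even though the question is Java; ClassA, ClassB and the expected values come from the question, the remaining names are invented):
using NUnit.Framework;

// A shared helper rather than an inherited test: each concrete test class calls it
// with its own SUT, input and expected values, so the intent stays in the test that owns it.
public static class Return5ValuesTestHelper
{
    public static void AssertReturns(ParentClass sut, int input, int[] expected)
    {
        CollectionAssert.AreEqual(expected, sut.Return5Values(input));
    }
}

public class ClassATests
{
    [Test]
    public void DoublesEachPreviousValue()
    {
        Return5ValuesTestHelper.AssertReturns(new ClassA(), 2, new[] { 2, 4, 8, 16, 32 });
    }
}

public class ClassBTests
{
    [Test]
    public void AddsOneToEachPreviousValue()
    {
        Return5ValuesTestHelper.AssertReturns(new ClassB(), 2, new[] { 2, 3, 4, 5, 6 });
    }
}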
Having class hierarchies in tests creates dependencies between them. A UnitTest serves the purpose of testing a Unit in isolation where Unit refers to a certain class. I'd argue that it is ok to have helpers and utils to avoid duplicating very basic functionality.
As much as possible unit tests should allow for quick and independent changes of a certain Unit. Having a commonly enforced structure for all tests increases the amount of work to be done if the implementation of unrelated parts of the application changes.
When it comes to integration testing there will be shared functionality for setting up the infrastructure. So the answer is a very clear "it depends". Generally it is favorable to reduce dependencies between tests as much as possible, and having a base test that determines the inner workings of a derived test is detrimental to that goal.

Unit testing help: using a stub?

I have a question concerning unit testing.
I have to test a function, call it function A, that receives as input an instance of a class B, and returns true or false.
Now in my test I have to somehow create an object to pass to function A; what I'm currently doing is initializing class B using its constructor, calling some of its methods to populate it and create the right structures, etc., and then passing it to function A. Now, I'm not sure that this is a good pattern: in particular, what happens if there's a bug in one of the methods of class B, or if I change its interface?
So I guess I would need to use a stub; but it seems weird to write a class that has basically the same structure as my class B. Also, function A will ultimately work with class B, so if I change the interface of class B then the test for function A should fail, to tell me to change function A to accommodate the new interface.
What is the correct pattern to use?
NOTE: If you think this is primarily opinion based, please reformulate the question as: "According to the principles advocated in "The Art of Unit Testing", what would be the best thing to do here?" - for the rest of you who are sane, feel free to write an answer with a larger perspective
Edit
I should clarify, the sole purpose of function A is to take an instance of class B and verify that a certain condition is met. Now, I could create a stub in place of class B, but I am not sure this would make sense; it seems rather pointless. On the other hand to initialize B I do something like classB.addData(randomData); what happens if this code fails? I will get an error in the test for function A while the actual problem is in the initialization of class B
Edit 2
Some code that shows more explicitly what the function does. The real code is exactly the same, except that the methods are more complicated.
def functionA(objectB):
    return objectB.data < 10

def testFunctionA():
    objectB = classB()
    objectB.addData(19)  # Is this a problem or should I stub objectB?
    assert(functionA(objectB) is False)
In my opinion, the most powerful tool in the unit testing toolbox is dependency injection, and that fits perfectly here.
When constructing an object, all of its collaborator objects should be passed to its constructor in the form of interface references. In your case the unit to be tested is a function, but the principle is the same: the function's collaborator objects are passed to it in the same manner.
If your test object only collaborates with other objects through interfaces, it is possible to pass mock objects in unit testing. Also, frameworks like Google Mock make it very convenient to create mocks and write clean test cases where the expected interactions are easy to understand.
If this is what you meant, then yes, that is a good pattern.
Edit:
This is how I would write the test function.
def functionA(objectB):
    return objectB.data < 10

def testFunctionA():
    objectB = fakeClassB()
    EXPECT_CALL(objectB, data).WillOnce(Return(19))
    assert(functionA(objectB) is False)
Instead of passing a real classB object, pass a fakeClassB object. This makes the test depend on the interface of classB rather than the actual implementation of classB. A failing test is then caused by a faulty use of the interface and not by some implementation detail in classB. Depending on what language you are working with, this may or may not be possible, I suppose.
Another perk is reduced build complexity: you can build the test function without building the implementation of classB. You only need to build the interface of classB and the fakeClassB.
There is a rule in testing: mock all units other than the one being tested. The point of mocking is to provide known-correct output values in place of calls to the units the tested unit depends on. You should explicitly provide samples that you know are correct, so that the tested unit cannot fail because of a failure in a unit it uses.
Too long for a comment:
in the scope of unit testing A, B's implementation should not matter. Using a mock/stub/whatever in order to test the different branches of A is fine.
If B's implementation at some point changes, that should not matter with regard to A's implementation (in an ideal scenario, but your example is very abstract). If B's implementation were changed so significantly that A is looking at different members of B, then yes, that could have an impact on A's unit tests. If only the way B arrives at the members that A relies on changes, that will not matter with a mocked version of B.
It will, however, matter to B's unit tests.
If you have anything concrete I can elaborate, but right now it's mostly just an abstract answer for an abstract question.
def functionA(objectB):
    return objectB.data < 10

def testFunctionA():
    objectB = classB()
    objectB.addData(19)  # Is this a problem or should I stub objectB?
    assert(functionA(objectB) is False)
Is this Python? I don't know Python, but I'm guessing functionA returns true when objectB.data < 10, and false otherwise.
Given all of that, functionA is still only returning true or false based on a member of objectB. How objectB.data gets its data is not relevant to the scope of unit testing functionA, so objectB.data can and should (where possible) be mocked/stubbed in order to get a true unit test. Integration testing is another story (you should use the real implementation for that, but your question was specific to unit testing).

How do unit tests change when a base class is driven out?

This is in part a followup to this question.
I'm not sure the best way to ask this, so I'll try a short story to set the scene:
Once upon a time, there was a class ‘A’, which had a unit test class ‘ATests’ responsible for testing its behaviour through the public interface. They lived happily together for a while and then a change happened, and class ‘B’ came along, which as it turned out had a lot in common with class ‘A’, so a base class was introduced.
The public behaviour of the base class is already covered by the tests for class A. So, the question is what happens next?
• Does class B need to have tests for the common (base class) behaviour? It seems like the behaviour is a part of B, so it should be tested, but should these tests be shared with those for class A? With the base class? If so, what’s the best way to share?
• Does the new base class need unit tests, or is it ok for base classes to be tested through the tests of their children? Does it matter if the base class is abstract?
• Is it enough to ensure that classes A & B derive from the base class and ‘trust’ the unit tests for the base class to cover the common behaviour (so the tests don’t need to be replicated in the child classes)? The tests for A & B then only need to test their new/changed behaviour?
• Am I following totally the wrong approach having approximately one unit test class per real class?
I’ve taken different views at different times, and the different approaches can have quite an impact on the ability to refactor the code, the time taken to write tests, etc. What approaches have people found work best?
Personally, given time, I tend to test all three (base and two derived). It shows that you're not inadvertently overriding the base methods and changing their behavior, and your inherited class still provides the expected semantics. If the behavior really doesn't change, then it could be as simple as a copy-paste job, but it provides more complete coverage.
Note the "given time" part, though. If time is an issue (and it always is), testing the base class or the inherited functionality would probably be lower priority. But testing is great inoculation against yourself, and makes you much more confident when refactoring later, so you're only shortchanging yourself, your customers, and/or your maintainers by not doing as complete coverage as you have time for.
However, pawning repetitive things like this off on dedicated testers or a QA team, if you have one, is perfectly acceptable. But buy them a beer sometimes :-) They make you look better!
You might look at code coverage tools; they can show you whether you're actually testing all of the code. Personally, if I have a test covering the base class behavior and I'm not overriding that, I won't test it again. The goal is to have a code change (potentially) break only one test.
Don't feel the need to stick to one unit test class per real class. One unit test class per unique setup fixture is one way, and then there's the whole BDD school...
Refactoring test code is just as important as refactoring production code. Both should be treated as first-class citizens. So if you are extracting public methods to the base class, then the base class should have its own set of tests. If your test cases are designed properly, with each test testing one thing, then the test refactoring should be easy.
If you are extracting protected functionality then it is probably a slightly grey area. If the methods are new to the class then I would expect them to be tested in the derived class, simply because they are probably there for the derived class to function properly. Ideally this should be kept to a minimum, as the base class functionality becomes less obvious.
The derived classes then will have the remaining public methods tested in their own set of tests.
Again, if you change the production code and not the tests, then you should incorporate a coverage tool so you can feel confident that your code is still adequately covered by your tests.

Unit testing factory methods which have a concrete class as a return type

So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
    // m_factory is instantiated in the SetUp method
    var someClass = m_factory.CreateSomeClassWithDependencies();
    Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
    return new SomeClass(CreateADependency(), CreateAnotherDependency(),
                         CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this than to make those dependencies public/internal properties which I then check for in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing.)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follows generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better possibility of inspecting that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
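To illustrate, a small sketch of what such inspection properties might look like, assuming SomeClass simply stores its constructor arguments; the IDependency-style interface names and the property names are invented:
// Hypothetical read-only properties added to SomeClass so its wiring can be inspected.
public class SomeClass
{
    public SomeClass(IDependency a, IAnotherDependency b, IThirdDependency c)
    {
        DependencyA = a;
        DependencyB = b;
        DependencyC = c;
    }

    public IDependency DependencyA { get; }
    public IAnotherDependency DependencyB { get; }
    public IThirdDependency DependencyC { get; }
}

[Test]
public void CreateSomeClassWithDependencies_WiresUpAllDependencies()
{
    var someClass = m_factory.CreateSomeClassWithDependencies();

    Assert.IsNotNull(someClass.DependencyA);
    Assert.IsNotNull(someClass.DependencyB);
    Assert.IsNotNull(someClass.DependencyC);
}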
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type and not null, then no, there isn't too much value in the test. It does allow you to make sure, over time, that this expectation isn't violated, and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed, from how the constructors are called. You might just be better off with using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it though, you could derive a class from the factory. Then override / mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
Note: the definition of "seam" comes from Michael Feathers' book, "Working Effectively with Legacy Code". It contains examples of many techniques to add testability to untested code. You may find it very useful.
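A rough sketch of that sensing approach, assuming the factory (called SomeClassFactory here, since its real name isn't shown) declares its Create*Dependency methods as protected virtual; the IDependency-style return types are invented:
// Test-only subclass that records whether each dependency hook was exercised.
public class SensingFactory : SomeClassFactory
{
    public bool CreatedA { get; private set; }
    public bool CreatedAnother { get; private set; }
    public bool CreatedThird { get; private set; }

    protected override IDependency CreateADependency()
    {
        CreatedA = true;
        return base.CreateADependency();
    }

    protected override IAnotherDependency CreateAnotherDependency()
    {
        CreatedAnother = true;
        return base.CreateAnotherDependency();
    }

    protected override IThirdDependency CreateAThirdDependency()
    {
        CreatedThird = true;
        return base.CreateAThirdDependency();
    }
}

[Test]
public void CreateSomeClassWithDependencies_CreatesAllThreeDependencies()
{
    var factory = new SensingFactory();

    factory.CreateSomeClassWithDependencies();

    Assert.IsTrue(factory.CreatedA);
    Assert.IsTrue(factory.CreatedAnother);
    Assert.IsTrue(factory.CreatedThird);
}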
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection, and it may be a sign of bad design.
Looking at your sample code, yes, the Assert.IsNotNull seems redundant; depending on the way you designed your factory, some factories will return null objects as opposed to throwing an exception.
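For completeness, a hedged sketch of the reflection approach in C#/NUnit; the private field name _aDependency is a guess at SomeClass's internals, which is exactly why this style is brittle:
using System.Reflection;
using NUnit.Framework;

[Test]
public void CreateSomeClassWithDependencies_AssignsADependency()
{
    var someClass = m_factory.CreateSomeClassWithDependencies();

    // Reach into the private state of the constructed object. Renaming the field breaks this test.
    var field = typeof(SomeClass).GetField(
        "_aDependency", BindingFlags.NonPublic | BindingFlags.Instance);

    Assert.IsNotNull(field, "expected SomeClass to have a field named _aDependency");
    Assert.IsNotNull(field.GetValue(someClass));
}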
As I understand it you want to test that the dependencies are built correctly and passed to the new instance?
If I were not able to use a framework like Google Guice, I would probably do something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
    dependencyFactory = context.mock(DependencyFactory.class);
    classAFactory = context.mock(ClassAFactory.class);
    myDependency0 = context.mock(MyDependency0.class);
    myDependency1 = context.mock(MyDependency1.class);
    myDependency2 = context.mock(MyDependency2.class);
    myClassA = context.mock(ClassA.class);

    context.checking(new Expectations() {{
        oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
        oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
        oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
        oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
        will(returnValue(myClassA));
    }});

    builder = new ClassABuilder(dependencyFactory, classAFactory);
    assertThat(builder.make(), equalTo(myClassA));
}
(if you cannot mock ClassA you can assign a non-mock version to myClassA using new)

How do you avoid duplicate unit tests when testing interactions on composites?

Imagine a system of filters (maybe audio filters, or text stream filters).
A Filter base class has a do_filter() method, which takes some input, modifies it (perhaps), and returns that as output.
Several subclasses exist, built with TDD, and each has a set of tests which test them in isolation.
Along comes a composite class, of an unrelated type Widget, which has two members of different Filter types (a and b), which deal with quite different input - that is, certain input which would be modified by filter a is passed through unmodified by filter b, and vice versa. Its process_data() method calls each filter member's do_filter().
While developing the composite class, there emerge tests that check the assumption that Widget's filters aren't both processing the same data.
The problem is, these sorts of tests look identical to the individual filters' tests. Although there might be other tests, which exercise input that should be modified by both filters, many of the tests could almost be copied and pasted from each filter's tests, with only small modifications needed to have them test through Widget (such as calling process_data()); the input data and the assert checks are identical.
This duplication smells pretty bad. But it seems right to want to test the components' interactions. What sort of options will avoid this sort of duplication?
Within one Test suite/class have a method
public void TestForFooBehaviour(IFilter filter)
{
    /* whatever you would normally have in a test method */
}
Then invoke this method from both the original test on the simple filter as well as from the composite filter. This also works for abstract base classes. Obviously FooBehaviour should be a meaningful description of the aspect of filters you are testing. Do this for each behaviour you want to test.
If your language supports duck typing or generics, feel free to use it if it helps.
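A rough sketch of how the invocation might look in C#/NUnit; FilterA, Widget, DoFilter and the way the composite exposes its filters are all placeholders for the real design:
public static class FilterBehaviours
{
    // The shared behaviour check, made static so any test class can call it.
    public static void TestForFooBehaviour(IFilter filter)
    {
        /* whatever you would normally have in a test method, e.g.: */
        Assert.AreEqual("expected output", filter.DoFilter("some input"));
    }
}

public class FilterATests
{
    [Test]
    public void HasFooBehaviour()
    {
        FilterBehaviours.TestForFooBehaviour(new FilterA());
    }
}

public class WidgetTests
{
    [Test]
    public void FilterAInsideWidgetHasFooBehaviour()
    {
        var widget = new Widget();
        // Assumes Widget exposes its filter members for testing; adapt to the real wiring.
        FilterBehaviours.TestForFooBehaviour(widget.FilterA);
    }
}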
I fairly frequently extract test logic into separate classes, so I'd extract the filter test to a separate class that is essentially not a unit test by itself. Especially if your test classes are physically separated from your production code, this is a decent way to solve the problem (i.e. no one will think it is production code, since it's in the test space).
I asked something similar about an abstract base class and unit testing here; it has some interesting points that you might find useful.
How to unit test abstract classes: extend with stubs?