DRY while writing tests

When writing tests, I usually have some methods creating test data:
@Test
public void someMethod_somePrecondition_someResult() {
    ClassUnderTest cut = new ClassUnderTest();
    NeededData data = createNeededData();
    cut.performSomeActionWithTheData(data);
    assertTrue(cut.someMethod());
}

private NeededData createNeededData() {
    NeededData data = new NeededData();
    // Initialize data with the values needed for the tests
    return data;
}
I think this is a good approach to minimizing duplication within a test class (most unit-testing frameworks also provide functionality for setting up test data). But what if I test several classes that need similar test data? Is it a good choice to give every test class its own createNeededData() method, even if they are all the same, or should I use separate classes to generate the test data and minimize code duplication?

Disclaimer: I haven't used what I'm suggesting here yet, so this is just what I believe.
I recently read about a pattern called Object Mother, which is basically a factory that creates objects with the different data that you might need. Martin Fowler also talks about these objects as akin to personas; that is, you might generate different objects that represent different use cases.
Now the object mother pattern is not without its problems: it can easily grow a lot and become quite cumbersome to maintain as your project grows. In the article 'Test Data Builders and Object Mother: Another Look', the author talks about using the builder pattern to create test objects, which he concludes is also not perfect, and then goes on to hypothesize about a combination of a builder and an object mother.
So basically you'd use the object mother pattern to bootstrap some repetitive data, and then use the returned builder to configure the object to your test's specific needs.
I believe that whether you should do it as explained above, or just repeat yourself in your tests (which isn't necessarily a bad thing when it comes to testing), is a matter of weighing the cost of implementing this against continuing with how you're doing things now.
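To make the combination concrete, here is a minimal sketch of what it could look like in Java. All of the names below (Order, OrderBuilder, Orders, the fields) are placeholders invented for this sketch, not taken from the article:

// All of these classes are invented for this sketch.
class Order {
    private final String customerName;
    private final int itemCount;

    Order(String customerName, int itemCount) {
        this.customerName = customerName;
        this.itemCount = itemCount;
    }
}

class OrderBuilder {
    // Sensible defaults; each test overrides only what it cares about.
    private String customerName = "default customer";
    private int itemCount = 1;

    OrderBuilder withCustomerName(String customerName) {
        this.customerName = customerName;
        return this;
    }

    OrderBuilder withItemCount(int itemCount) {
        this.itemCount = itemCount;
        return this;
    }

    Order build() {
        return new Order(customerName, itemCount);
    }
}

class Orders {
    // Object-mother entry point: names a recurring scenario and returns a
    // pre-configured builder rather than a finished object.
    static OrderBuilder aLargeOrder() {
        return new OrderBuilder().withItemCount(100);
    }
}

A test then starts from the shared scenario and tweaks only the detail it cares about, e.g. Orders.aLargeOrder().withCustomerName("Alice").build().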

Related

How to test if the object is created correctly while keeping the class under test in isolation?

It is often said that when writing unit tests one must test only a single class and mock all of its collaborators. I am trying to learn TDD to improve my code design, and now I am stuck in a situation where this rule should be broken. Or shouldn't it?
An example: the class under test has a method that gets a Person, creates an Employee based on the Person, and returns the Employee.
public class EmployeeManager {
    private DataMiner dataMiner;

    public Employee getCoolestEmployee() {
        Person dankestPerson = dataMiner.getDankestPerson();
        Employee employee = new Employee();
        employee.setName(dankestPerson.getName() + "bug in my code");
        return employee;
    }
    // ...
}
Should Employee be considered a collaborator? If not, why not? If yes, how do I properly test that 'Employee' is created correctly?
Here is the test I have in mind (using JUnit and Mockito):
@Test
public void coolestEmployeeShouldHaveDankestPersonsName() {
    when(dataMinerMock.getDankestPerson()).thenReturn(dankPersonMock);
    when(dankPersonMock.getName()).thenReturn("John Doe");

    Employee coolestEmployee = employeeManager.getCoolestEmployee();

    assertEquals("John Doe", coolestEmployee.getName());
}
As you can see, I have to use coolestEmployee.getName() - a method of the Employee class, which is not under test.
One possible solution that comes to mind is to extract the task of transforming Persons into Employees into a new method of the Employee class, something like
public Employee createFromPerson(Person person);
Am I overthinking the problem? What is the correct way?
The goal of a unit test is to quickly and reliably determine whether a single system is broken. That doesn't mean you need to simulate the entire world around it, just that you should ensure that collaborators you use are fast, deterministic, and well-tested.
Data objects—POJOs and generated value objects in particular—tend to be stable and well-tested, with very few dependencies. Like other heavily-stateful objects, they also tend to be very tedious to mock, because mocking frameworks don't tend to have powerful control over state (e.g. getX should return n after setX(n)). Assuming Employee is a data object, it is likely a good candidate for actual use in unit tests, provided that any logic it contains is well-tested.
Other collaborators not to mock in general:
JRE classes and interfaces. (Never mock a List, for instance. It'll be impossible to read, and your test won't be any better for it.)
Deterministic third-party classes. (If any classes or methods change to become final, your mock will break; and as long as you're using a stable version of the library, it won't be a spurious source of failure either.)
Stateful classes, just because mocks are much better at testing interactions than state. Consider a fake instead, or some other test double.
Fast and well-tested other classes that have few dependencies. If you have confidence in a system, and there's no hazard to your test's determinism or speed, there's no need to mock it.
What does that leave? Non-deterministic or slow service classes or wrappers that you've written, that are stateless or that change very little during your test, and that may have many collaborators of their own. In these cases, it would be hard to write a fast and deterministic test using the actual class, so it makes a lot of sense to use a test double—and it'd be very easy to create one using a mocking framework.
See also: Martin Fowler's article "Mocks Aren't Stubs", which talks about all sorts of test doubles along with their advantages and disadvantages.
Getters that only read a private field are usually not worth testing. By default, you can rely on them pretty safely in other tests. Therefore I wouldn't worry about using dankestPerson.getName() in a test for EmployeeManager.
There's nothing wrong with your test as far as testing goes. The design of the production code might be a different matter - mocking dankestPerson probably means that Person has an interface or abstract base class, which might be a sign of overengineering, especially for a business entity. What I would do instead is just new up a Person, set its name to the expected value, and set up dataMinerMock to return it.
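For illustration, that version of the test might look roughly like this (assuming Person has a public no-argument constructor and a setName() setter, which the question does not show):

@Test
public void coolestEmployeeShouldHaveDankestPersonsName() {
    // Use a real Person rather than a mock; it is a plain data object.
    Person dankestPerson = new Person();
    dankestPerson.setName("John Doe"); // assumes such a setter exists

    when(dataMinerMock.getDankestPerson()).thenReturn(dankestPerson);

    Employee coolestEmployee = employeeManager.getCoolestEmployee();

    assertEquals("John Doe", coolestEmployee.getName());
}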
Also, the use of "Manager" in a class name might indicate a lack of cohesion and too broad a range of responsibilities.

Pattern for creating a graph of Test objects?

I have to write unit tests for a method which requires a complex object graph. Currently I am writing a Create method for each test, as shown below.
private static Entity Create_ObjectGraph_For_Test1()
private static Entity Create_ObjectGraph_For_Test2()
... and so on
Each Create method has about 10 steps, and the methods vary from one another by only one or two steps. What is the best way to create a complex object graph? Apart from writing a Create method for each test, I could add parameters to a single Create method, but that might become confusing if the number of tests reaches about 10 or so.
You can extract the steps into methods, possibly parameterizing them, and make them chainable so that one can write:
Entity myGraph = GraphFactory.createGraph().step1().step2(<parm>).step3(<parm>);
Choosing meaningful names makes the fixture readable.
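A rough sketch of such a chainable fixture in Java; the Entity contents and the step names are placeholders invented here (and given meaningful names, per the advice above), not taken from the question:

// Placeholder entity type for the sketch.
class Entity {
    // ... fields of the object graph ...
}

// Chainable test fixture: every step configures the graph and returns this,
// so each test composes only the steps it needs in a single readable line.
class GraphFactory {
    private final Entity root = new Entity();

    static GraphFactory createGraph() {
        return new GraphFactory();
    }

    GraphFactory withDefaultChildren() {
        // ... attach the children most tests need ...
        return this;
    }

    GraphFactory withOrders(int count) {
        // ... add 'count' order nodes to the graph ...
        return this;
    }

    Entity build() {
        return root;
    }
}

A test would then read something like GraphFactory.createGraph().withDefaultChildren().withOrders(3).build().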
It may be possible to put a substantial amount of common setup code into - well, of course: the setup() method, and then modify the object graph slightly for each individual test. If the setups for the different tests are sufficiently different, then I would encourage you to put the tests into separate classes and their setup into each test class independently.

Unit testing factory methods which have a concrete class as a return type

So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
    // m_factory is instantiated in the SetUp method
    var someClass = m_factory.CreateSomeClassWithDependencies();

    Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
    return new SomeClass(CreateADependency(), CreateAnotherDependency(),
                         CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this than to make those dependencies public/internal properties, which I then check in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing.)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follows generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better way to inspect that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type and never null, then no, there isn't much value in the test. It does allow you to make sure, over time, that this expectation isn't violated and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed, from how the constructors are called. You might just be better off with using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it though, you could derive a class from the factory. Then override / mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
Note: the definition of "seam" comes from Michael Feathers' book "Working Effectively with Legacy Code". It contains examples of many techniques for adding testability to untested code. You may find it very useful.
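As a sketch of the subclass-and-override approach described above (written in Java for brevity, though the question is in C#); all of the type names below are invented stand-ins mirroring the question's factory, not real code from it:

// Minimal stand-ins mirroring the question's factory.
class Dependency {}

class SomeClass {
    SomeClass(Dependency a, Dependency b, Dependency c) {}
}

class SomeClassFactory {
    public SomeClass createSomeClassWithDependencies() {
        return new SomeClass(createADependency(), createAnotherDependency(), createAThirdDependency());
    }

    protected Dependency createADependency() { return new Dependency(); }
    protected Dependency createAnotherDependency() { return new Dependency(); }
    protected Dependency createAThirdDependency() { return new Dependency(); }
}

// Test-only subclass that overrides the creation hooks so the test can sense
// which dependencies the factory actually created.
class SensingFactory extends SomeClassFactory {
    int dependenciesCreated;

    @Override
    protected Dependency createADependency() {
        dependenciesCreated++;
        return super.createADependency();
    }

    @Override
    protected Dependency createAnotherDependency() {
        dependenciesCreated++;
        return super.createAnotherDependency();
    }

    @Override
    protected Dependency createAThirdDependency() {
        dependenciesCreated++;
        return super.createAThirdDependency();
    }
}

A test then calls new SensingFactory().createSomeClassWithDependencies() and asserts that dependenciesCreated equals 3.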
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection, and when I do, it may be a sign of bad design.
Looking at your sample code: yes, the Assert.IsNotNull seems somewhat redundant, although it depends on how you designed your factory - some factories return null objects instead of throwing an exception.
As I understand it, you want to test that the dependencies are built correctly and passed to the new instance?
If I were not able to use a framework like Google Guice, I would probably do something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
    dependencyFactory = context.mock(DependencyFactory.class);
    classAFactory = context.mock(ClassAFactory.class);
    myDependency0 = context.mock(MyDependency0.class);
    myDependency1 = context.mock(MyDependency1.class);
    myDependency2 = context.mock(MyDependency2.class);
    myClassA = context.mock(ClassA.class);

    context.checking(new Expectations() {{
        oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
        oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
        oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
        oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
        will(returnValue(myClassA));
    }});

    builder = new ClassABuilder(dependencyFactory, classAFactory);

    assertThat(builder.make(), equalTo(myClassA));
}
(If you cannot mock ClassA, you can assign a non-mock instance to myClassA using new.)

How do you avoid duplicate unit tests when testing interactions on composites?

Imagine a system of filters (maybe audio filters, or text stream filters).
A Filter base class has a do_filter() method, which takes some input, modifies it (perhaps), and returns that as output.
Several subclasses exist, built with TDD, and each has a set of tests which test them in isolation.
Along comes a composite class, of an unrelated type Widget, which has two members of different Filter types (a and b), which deal with quite different input - that is, certain input which would be modified by filter a is passed through unmodified by filter b, and vice versa. Its process_data() method calls each filter member's do_filter().
While developing the composite class, there emerge tests that check the assumption that Widget's filters aren't both processing the same data.
The problem is, these sorts of tests look identical to the individual filters' tests. Although there might be other tests covering input that should be modified by both filters, many of the tests could almost be copied and pasted from each filter's own tests, with only small modifications needed to make them exercise Widget (such as calling process_data()); the input data and the assert checks are identical.
This duplication smells pretty bad. But it seems right to want to test the components' interactions. What sort of options will avoid this sort of duplication?
Within one test suite/class, have a method
public void TestForFooBehaviour(IFilter filter)
{
    /* whatever you would normally have in a test method */
}
Then invoke this method from the original test on the simple filter as well as from the test on the composite filter. This also works for abstract base classes. Obviously, FooBehaviour should be a meaningful description of the aspect of the filters you are testing. Do this for each behaviour you want to test.
If your language supports duck typing or generics, feel free to use it if it helps.
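For example, in Java the shared behaviour check could take the filter (or an adapter for the composite) as a parameter. Everything below is invented purely to show the shape; the filter behaviour and names are not from the question:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal stand-ins for the question's types; names and behaviour are illustrative only.
interface Filter {
    String doFilter(String input);
}

class VowelStrippingFilter implements Filter {
    public String doFilter(String input) {
        return input.replaceAll("[aeiou]", "");
    }
}

class Widget {
    private final Filter vowelFilter = new VowelStrippingFilter();
    private final Filter hashFilter = input -> input.replace("#", "");

    String processData(String input) {
        return hashFilter.doFilter(vowelFilter.doFilter(input));
    }
}

public class FilterBehaviourTest {

    // The shared behaviour check, written once and invoked from several tests.
    private void assertStripsVowels(Filter filter) {
        assertEquals("bc", filter.doFilter("abc"));
    }

    @Test
    public void simpleFilterStripsVowels() {
        assertStripsVowels(new VowelStrippingFilter());
    }

    @Test
    public void widgetStripsVowels() {
        // Adapt the composite to the Filter contract so the same check is
        // reused instead of copy-pasted.
        Widget widget = new Widget();
        assertStripsVowels(input -> widget.processData(input));
    }
}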
I fairly frequently extract test logic into separate classes, so I'd extract the filter test into a separate class that is essentially not a unit test by itself. Especially if your test classes are physically separated from your production code, this is a decent way to solve the problem (i.e. no one will mistake it for production code, since it lives in the test space).
I asked something similar about an abstract base class and unit testing here; it has some interesting points that you might find useful.
How to unit test abstract classes: extend with stubs?

What is an ObjectMother?

What is an ObjectMother and what are common usage scenarios for this pattern?
ObjectMother starts with the factory pattern, by delivering prefabricated test-ready objects via a simple method call. It moves beyond the realm of the factory by
facilitating the customization of created objects,
providing methods to update the objects during the tests, and
if necessary, deleting the object from the database at the completion of the test.
Some reasons to use ObjectMother:
* Reduce code duplication in tests, increasing test maintainability
* Make test objects super-easily accessible, encouraging developers to write more tests.
* Every test runs with fresh data.
* Tests always clean up after themselves.
(http://c2.com/cgi/wiki?ObjectMother)
See "Test Data Builders: an alternative to the Object Mother pattern" for an argument of why to use a Test Data Builder instead of an Object Mother. It explains what both are.
As stated elsewhere, ObjectMother is a Factory for generating Objects typically (exclusively?) for use in Unit Tests.
Where they are of great use is for generating complex objects where the data is of no particular significance to the test.
Where you might otherwise have created an empty instance, such as
Order rubishOrder = new Order("NoPropertiesSet");
_orderProcessor.Process(rubishOrder);
you would use a sensible one from the ObjectMother
Order motherOrder = ObjectMother.SimpleOrder();
_orderProcessor.Process(motherOrder);
This tends to help with situations where the class being tested starts to rely on a sensible object being passed in.
For instance, if you added some OrderNumber validation to the Order class above, you would simply need to set up the OrderNumber in ObjectMother.SimpleOrder() for all the existing tests to pass, leaving you to concentrate on writing the validation tests.
If you had just instantiated the object in the test you would need to add it to every test (it is shocking how often I have seen people do this).
Of course, this could just be extracted out to a method, but putting it in a separate class allows it to be shared between multiple test classes.
Another piece of recommended practice is to use good, descriptive names for your methods, to promote reuse. It is all too easy to end up with one object per test, which is definitely to be avoided. It is better to generate objects that represent general rather than specific attributes, and then customize them for your test. For instance, ObjectMother.WealthyCustomer() rather than ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorsche() and ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorscheAndAFerrari().
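As a closing sketch of that naming advice, written in Java (rather than the C# of the snippets above); the Customer type, its fields, and the persona data are all invented for this example:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A general persona rather than one hyper-specific mother method per test.
class Customer {
    private final String name;
    private final long sharesValueInDollars;
    private final List<String> cars;

    Customer(String name, long sharesValueInDollars, List<String> cars) {
        this.name = name;
        this.sharesValueInDollars = sharesValueInDollars;
        this.cars = cars;
    }

    // Customisation happens in the test, not by adding ever more mother methods.
    Customer withCar(String car) {
        List<String> more = new ArrayList<>(cars);
        more.add(car);
        return new Customer(name, sharesValueInDollars, more);
    }
}

class ObjectMother {
    static Customer wealthyCustomer() {
        return new Customer("Wendy Wealthy", 1_000_000L, Arrays.asList("Porsche"));
    }
}

The test that needs the Ferrari then writes ObjectMother.wealthyCustomer().withCar("Ferrari") instead of asking for a new, one-off mother method.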