What is an ObjectMother? - unit-testing

What is an ObjectMother and what are common usage scenarios for this pattern?

ObjectMother starts with the factory pattern, by delivering prefabricated test-ready objects via a simple method call. It moves beyond the realm of the factory by:
* facilitating the customization of created objects,
* providing methods to update the objects during the tests, and
* if necessary, deleting the object from the database at the completion of the test.
Some reasons to use ObjectMother:
* Reduce code duplication in tests, increasing test maintainability.
* Make test objects easily accessible, encouraging developers to write more tests.
* Every test runs with fresh data.
* Tests always clean up after themselves.
(http://c2.com/cgi/wiki?ObjectMother)

See "Test Data Builders: an alternative to the Object Mother pattern" for an argument of why to use a Test Data Builder instead of an Object Mother. It explains what both are.

As stated elsewhere, ObjectMother is a Factory for generating Objects typically (exclusively?) for use in Unit Tests.
Where they are of great use is for generating complex objects where the data is of no particular significance to the test.
Where you might otherwise have created an empty instance, such as
Order rubbishOrder = new Order("NoPropertiesSet");
_orderProcessor.Process(rubbishOrder);
you would use a sensible one from the ObjectMother:
Order motherOrder = ObjectMother.SimpleOrder();
_orderProcessor.Process(motherOrder);
This tends to help with situations where the class being tested starts to rely on a sensible object being passed in.
For instance, if you added some OrderNumber validation to the Order class above, you would simply need to set the OrderNumber in the SimpleOrder() factory method for all the existing tests to pass, leaving you to concentrate on writing the validation tests.
If you had just instantiated the object in the test you would need to add it to every test (it is shocking how often I have seen people do this).
Of course, this could just be extracted out to a method, but putting it in a separate class allows it to be shared between multiple test classes.
Another recommended practice is to use good descriptive names for your methods, to promote reuse. It is all too easy to end up with one object per test, which is definitely to be avoided. It is better to generate objects that represent general rather than specific attributes and then customize them for your test: for instance, ObjectMother.WealthyCustomer() rather than ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorsche() and ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorscheAndAFerrari(). A sketch of such an ObjectMother follows.
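As a rough illustration, here is a minimal ObjectMother sketch in Java; the Order and Customer classes and their members are hypothetical, invented for this example:

public final class ObjectMother {

    private ObjectMother() {} // static factory methods only

    // A valid, fully populated order whose exact values are of no
    // significance to the tests that use it.
    public static Order SimpleOrder() {
        Order order = new Order("ORD-0001");
        order.addLine("Widget", 2);
        return order;
    }

    // A general persona that tests customize further, instead of
    // adding a one-off CustomerWith...AndAFerrari() method per test.
    public static Customer WealthyCustomer() {
        Customer customer = new Customer("Jane Moneybags");
        customer.setNetWorth(1_000_000);
        return customer;
    }
}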

Related

DRY while writing tests

When writing tests I usually have some methods creating test data:
@Test
public void someMethod_somePrecondition_someResult() {
    ClassUnderTest cut = new ClassUnderTest();
    NeededData data = createNeededData();
    cut.performSomeActionWithTheData(data);
    assertTrue(cut.someMethod());
}
private NeededData createNeededData() {
    NeededData data = new NeededData();
    // Initialize data with the values needed for the tests
    return data;
}
I think this is a good approach to minimize duplication in the test class (most unit testing frameworks also provide functionality to set up test data). But what if I test classes that need similar test data? Is it a good choice to provide every test class with its own createNeededData() method, even if they are all the same, or should I use other classes to generate test data to minimize code duplication?
Disclaimer: I haven't used what I'm suggesting here yet, so this is just what I believe.
I recently read about a pattern called object mother which basically is a factory that creates objects with the different data that you might need. M. Fowler also talks about these objects as akin to personas, that is, you might generate different objects that represents different use cases.
Now the object mother pattern is not without its problems; it can easily grow a lot and become quite cumbersome to maintain as your project grows. In the article 'TEST DATA BUILDERS AND OBJECT MOTHER: ANOTHER LOOK' the author talks about using the builder pattern to create test objects, which he concludes is also not perfect, and then goes on to hypothesize about a combination of a builder with an object mother.
So basically you'd use the object mother pattern to bootstrap some repetitive data, and then use the returned builder to configure the object to your test's specific needs; a sketch of that combination is shown below.
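As a rough sketch of the combination, assuming hypothetical TestOrders and OrderBuilder classes (none of these names come from the article):

// The object mother returns a builder preloaded with sensible defaults;
// each test then tweaks only what it cares about before calling build().
public final class TestOrders {
    public static OrderBuilder anOrder() {
        return new OrderBuilder()
                .withOrderNumber("ORD-0001")
                .withCustomer("Default Customer");
    }
}

// In a test:
Order overdueOrder = TestOrders.anOrder().withDueDate(LocalDate.now().minusDays(1)).build();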
I believe that whether you should do it as explained above or just repeat yourself in your tests (which isn't necessarily a bad thing when it comes to testing) is a matter of weighing the cost of implementing this against continuing with how you're doing things now.

How to verify method class in test class

I have a repository with a method called ConvertToBusinessEntity which converts the data from the data source to a business object. This method is supposed to be used by other methods in the repository namely the Get, GetAll, etc.
This method is unit tested. I check that the data is being retrieved correctly from the data source and that the values are being put in the correct entity properties.
But now I need to test and create the GetEntity method which is supposed to call ConvertToBusinessEntity. The logic behind ConvertToBusinessEntity is tested. I just need to know how to verify that a method in the class being tested is called (not a dependency).
Does anyone know how to do this or any alternative method?
I thought of an alternative method but I am not sure if it's the best. I could extend the class under test and have a counter increasing each time the method is called.
Another one would be to mock the class itself.
What do you think? Any other suggestions?
Having ConvertToBusinessEntity in the repository is not a very good idea. The responsibility of a repository is working with the data store (CRUD). Mapping of data types is the responsibility of some mapper class. Otherwise your repository has too many responsibilities. Take a look at what you are trying to test:
I check that the data is being retrieved correctly from the data source
and that the values are being put in the correct entity properties
You see this and? Your test can fail for two completely different reasons, and you would have to change the repository for two completely different reasons. The best approach here is persisting business entities directly. Modern ORMs allow doing that without polluting the business entity with attributes or forcing it to inherit some data-access specific class.
If you really want to have data mapping logic in the repository, then make it private (actually only the repository should require conversion of a business entity to some data-access object) and don't care how this logic is implemented. Let it be part of the internal class implementation. You should care only about the repository being able to accept or return filled business entities - that's the responsibility of the repository. It doesn't matter how mapping is implemented in the repository. You should test what the repository does, instead of how. So just check that the expected business objects are returned by the repository, as in the sketch below.
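For illustration, a minimal state-based test sketch in Java; the repository constructor, the in-memory data source helper, and the Order accessors are all hypothetical:

// Verify the observable behavior (what GetEntity returns),
// not that ConvertToBusinessEntity was called internally.
@Test
public void getEntity_returnsFilledBusinessEntity() {
    OrderRepository repository = new OrderRepository(inMemoryDataSourceWithOneOrder());

    Order order = repository.getEntity(42);

    assertEquals("ORD-0042", order.getOrderNumber());
    assertEquals("Jane Moneybags", order.getCustomerName());
}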
I just need to know how to verify that a method in the class being tested is called (not a dependency).
But do you really need to do that? If your GetEntity method operates correctly, do you really care how it operates? Do you really care if it performs its function by delegating to ConvertToBusiness, or by some other means?
I recommend instead that you:
* Think of each method as having a specification. That specification describes the outputs and publicly visible manipulations the method must make; it does not describe how the method performs its function, as that is an implementation detail that could change.
* Write your unit tests to check only that your methods conform to their specification.
* You might nevertheless use your knowledge about the implementation to choose good test cases.
But, you might declare, if I do that I am not unit testing my method code; my test of GetEntity depends on both the GetEntity method and the ConvertToBusinessEntity method: two units, so an integration test rather than a unit test. But do you mock the methods of the runtime environment? Of course not. The line between unit and integration testing is not so clear.
More philosophically, you can not create good mock objects in many cases. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it does delegate, and how, is left by the specification as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure. And debugging that test failure will be easy.

Is there a standard for whether a class under test should be constructed in a fixture or in a test?

I'm just curious, are there any standard guidelines that state whether an instance of a class under test should be constructed in a fixture or in the actual test case?
Thanks!
I'm not aware of a standard reference on that topic. Here's what I'd do:
If I had only one test to write, or if I needed an instance of the class under test that was constructed differently than any other instance of that class in my test suite, I'd just instantiate it in the test. Why make it any more complicated than you have to? If I needed to use the same instance over and over again, I'd put it in a fixture.
I do think it's important to construct only the fixtures you need for a given test case, so that there's nothing to mislead the reader. That means either using whatever scoping mechanism your test framework provides (e.g. an rspec context block or a whole new xUnit TestCase) to construct a given fixture only before the tests that need it, or moving instance construction from fixtures into the tests. To avoid duplication, you can always write a method to construct an instance and call it from as many tests as you want.
I tend to avoid putting anything inside a fixture.
After a while the CUT state tends to get out of hand as the number of tests in that fixture increases. Each test requires similar but slightly different behavior, which may or may not fit into a shared initialization/setup method.
Having the CUT at the fixture level creates shared state between the tests, which can cause test failures due to run order - a pain to find and fix.
Another readability issue happens when a test fails - people tend to forget the initialization that might have happened in another method.
There are better ways to avoid code duplication - using an auto-mocking container to create objects with fake parameters, or factory methods that enable a different initialization for each test (if required) and create more readable and maintainable tests; for example:
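A minimal sketch of the factory-method approach (the OrderProcessor and FakeOrderRepository names are invented for this example):

// Each test constructs exactly the CUT it needs; no state is shared
// through the fixture, so run order cannot matter.
private OrderProcessor createProcessor(boolean strictValidation) {
    return new OrderProcessor(new FakeOrderRepository(), strictValidation);
}

@Test
public void process_withStrictValidation_rejectsEmptyOrder() {
    OrderProcessor processor = createProcessor(true);
    assertFalse(processor.process(new Order()));
}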

Should dynamic dependencies of service objects be avoided?

This question is about testable software design based on mostly value objects and services.
Services that have static dependencies are straightforward to instantiate or configure when using a DI container. However, in some cases, services require dependencies that are known at runtime only.
Say, imagine a simple FileSystemDataStore with some CRUD methods in it for managing files in a directory. This service will need a directory name as one of its constructor parameters. That name could be known at runtime only and will have to be provided by its collaborators.
This seems to be somewhat of a problem because you can't configure such service in a DI container because of its dynamic nature. You'll probably have to use a factory to create such services. However, this will result in a quirk in the unit tests of the service's clients. You will have to mock the factory to return a mock of the service. This adds additional complexity to unit tests. Mocks returning mocks is often considered a test smell.
What is your opinion about this problem? Is it even a problem in your experience? Should such services be instead refactored to be more "pure"?
As a general observation, when services depend on run-time values, an Abstract Factory is indeed the appropriate response.
However, as pointed out in the question, this does have an impact on the maintainability of the tests, so if you can redesign the API to avoid such situations, you should do that. It's not always possible, though.
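To make the Abstract Factory option concrete, here is a minimal sketch (the interface and method names are assumptions for illustration):

// The factory absorbs the run-time value, so the factory itself has only
// static dependencies and can be registered in the DI container as usual.
public interface DataStoreFactory {
    FileSystemDataStore create(String directoryName);
}

// A client receives the factory and creates the store once the directory
// name becomes known; in the client's unit test you stub the factory to
// return a stubbed store - the "mock returning a mock" cost noted above.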
You would like to inject the directory name, but it is not known during the construction phase. I see three options here.
1. Inject a Provider
Instead of saying "Here is the directory name you need" you are saying "Here is an object that can give you the directory name at run-time". The way to implement this is to declare a constructor argument Provider<String> directoryNameProvider. The constructor stores a reference to this provider as a member variable. When called upon to do some real work in the run phase, the class would contain code like this where the directory name is needed:
directoryName = directoryNameProvider.get();
In Java, the interface you implement is javax.inject.Provider<T>. This has a single method, get(), which returns type T. The use of the generic provider interface means you do not have a proliferation of interfaces.
When it comes to your unit test, you can inject an anonymous inner class that implements the single method of Provider<T> to return a constant value easily enough. Our code base has a SimpleProvider<T> class that wraps a given object in the Provider interface.
Pro: Allows you to construct the object in the main construction phase. Unit testing is pretty easy.
Con: Details about dependency creation are leaking into the class when they should entirely be the concern of the factory. Too bad if the class is already written and accepts directoryName rather than directoryNameProvider.
Despite the seemingly long list of cons, this is an option I use a lot. It is my opinion that there is a missing language construct here. A sketch of this approach follows.
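A minimal sketch of the provider approach, assuming javax.inject.Provider is on the classpath (the store's file-handling internals are elided):

import javax.inject.Provider;

public class FileSystemDataStore {
    private final Provider<String> directoryNameProvider;

    public FileSystemDataStore(Provider<String> directoryNameProvider) {
        this.directoryNameProvider = directoryNameProvider;
    }

    public void save(String fileName, byte[] content) {
        String directoryName = directoryNameProvider.get(); // resolved at run time
        // ... write content to directoryName/fileName ...
    }
}

// In a unit test, a lambda satisfies Provider<String>:
FileSystemDataStore store = new FileSystemDataStore(() -> "/tmp/test-data");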
2. Construct the troublesome object later
You can enter an inner scope when you know more. Within a run-phase method, you can enter a new scope. This means that you go through a whole new mini-construction phase, and then a mini-run phase. This is similar to what happens in your application's main() but at a smaller level.
Pro: Class receiving the dependency remains pure.
Con: Entering and exiting too many scopes can make the application and object life-cycles difficult to understand.
3. Use a method argument
You can decide that directoryName is to be a method argument and pass it to your class during the run phase rather than trying to inject it as a constructor argument. This is effectively deciding not to use dependency inject style for this occasion.
Pro: Simplicity
Con: The class that passes directoryName as a method parameter is tightly coupled to the class that needs it, and it will be very difficult to swap in an alternate implementation that depends on, say, a database connection.
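For completeness, the method-argument option in sketch form (the save signature is an assumption):

public class FileSystemDataStore {
    // No run-time constructor dependency; callers supply the directory
    // with every call, which couples them to this implementation.
    public void save(String directoryName, String fileName, byte[] content) {
        // ... write content to directoryName/fileName ...
    }
}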
These are matters that I have been considering a lot lately, so I'm interested in any comments or edits. Are there any other options?

Pattern for creating a graph of Test objects?

I have to write Unit Tests for a method which requires a complex object Graph. Currently I am writing a Create method for each Test as shown below.
private static Entity Create_ObjectGraph_For_Test1()
private static Entity Create_ObjectGraph_For_Test2()
... and so on
Each create method has about 10 steps, and the methods vary from each other by only 1-2 steps. What is the best way to create a complex object graph? Apart from writing a create method for each test, I could add parameters to a single create method, but that might become confusing if the number of tests is about 10 or so.
You can extract the steps into methods, possibly parametrizing them, and make them chain-able so that one can write:
Entity myGraph = GraphFactory.createGraph().step1().step2(<parm>).step3(<parm>);
Choosing meaningful names makes the fixture readable; for example:
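A rough sketch of such a chainable fixture factory (the step names and the Entity API are invented for illustration):

// Each step mutates the graph under construction and returns the factory,
// so the 10-step setup collapses into one readable chain per test.
public class GraphFactory {
    private final Entity root = new Entity();

    public static GraphFactory createGraph() { return new GraphFactory(); }

    public GraphFactory withDefaultChildren() { /* add the common children */ return this; }

    public GraphFactory withOrders(int count) { /* add count order nodes */ return this; }

    public Entity build() { return root; }
}

// In a test:
Entity myGraph = GraphFactory.createGraph().withDefaultChildren().withOrders(3).build();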
It may be possible to put a substantial amount of common setup code into - well, of course: the setup() method, and then modify the object graph slightly for each individual test. If the setups for the different tests are sufficiently different, then I would encourage you to put the tests into separate classes and their setup into each test class independently.