I have to write unit tests for a method which requires a complex object graph. Currently I am writing a Create method for each test, as shown below.
private static Entity Create_ObjectGraph_For_Test1()
private static Entity Create_ObjectGraph_For_Test2()
// ... and so on
Each Create method has about 10 steps, and they vary from each other by only one or two steps. What is the best way to create a complex object graph? Apart from writing a Create method for each test, I could add parameters to a single Create method, but that might become confusing once there are about 10 tests.
You can extract the steps into methods, possibly parameterizing them, and make them chainable so that one can write:
Entity myGraph = GraphFactory.createGraph().step1().step2(<param>).step3(<param>);
Choosing meaningful names makes the fixture readable.
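For illustration, a sketch of what such a chainable factory could look like, assuming hypothetical Entity methods (setHeader, addChild) and a Header type, since the real graph isn't shown:

// Chainable factory that wraps the ~10 setup steps
public class GraphFactory {
    private final Entity root = new Entity();

    public static GraphFactory createGraph() {
        return new GraphFactory();
    }

    // Each step method performs one setup step and returns this for chaining
    public GraphFactory withDefaultHeader() {
        root.setHeader(new Header());
        return this;
    }

    public GraphFactory withChildren(int count) {
        for (int i = 0; i < count; i++) {
            root.addChild(new Entity());
        }
        return this;
    }

    public Entity build() {
        return root;
    }
}

// usage in a test:
Entity myGraph = GraphFactory.createGraph().withDefaultHeader().withChildren(2).build();

Ending the chain with build() is one option; the step methods could just as well operate on and return the Entity directly, as in the one-liner above.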
It may be possible to put a substantial amount of the common setup code into, well, of course: the setUp() method, and then modify the object graph slightly for each individual test. If the setups for the different tests are sufficiently different, then I would encourage you to put the tests into separate classes, each with its own setup.
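For example, a JUnit sketch of that shape (the Entity steps are hypothetical):

import org.junit.Before;
import org.junit.Test;

public class EntityGraphTest {
    private Entity graph;

    @Before
    public void setUp() {
        // the steps common to every test in this class
        graph = new Entity();
        graph.addChild(new Entity());
    }

    @Test
    public void handlesSpecialCase() {
        graph.addChild(new Entity());  // the one or two steps unique to this test
        // exercise the method under test and assert on the result
    }
}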
When writing tests I usually have some methods creating test data:
@Test
public void someMethod_somePrecondition_someResult() {
    ClassUnderTest cut = new ClassUnderTest();
    NeededData data = createNeededData();
    cut.performSomeActionWithTheData(data);
    assertTrue(cut.someMethod());
}

private NeededData createNeededData() {
    NeededData data = new NeededData();
    // Initialize data with needed values for the tests
    return data;
}
I think this is a good approach to minimize duplication in the test class (most unit testing frameworks also provide functionality to set up test data). But what if I test classes that need similar test data? Is it a good choice to provide every test class with its own createNeededData() method, even if they are all the same, or should I use other classes to generate test data to minimize code duplication?
Disclaimer: I haven't used what I'm suggesting here yet, so this is just what I believe.
I recently read about a pattern called Object Mother, which basically is a factory that creates objects with the different data that you might need. Martin Fowler also talks about these objects as akin to personas; that is, you might generate different objects that represent different use cases.
Now the Object Mother pattern is not without its problems: it can easily grow a lot and become quite cumbersome to maintain as your project grows. In the article 'Test Data Builders and Object Mother: Another Look', the author talks about using the builder pattern to create test objects, which he concludes is also not perfect, and then goes on to hypothesize about a combination of a builder and an Object Mother.
So basically you'd use the Object Mother pattern to bootstrap some repetitive data, and then use the returned builder to configure the object to your test's specific needs, as sketched below.
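A minimal sketch of that combination, with hypothetical names (Customer, CustomerBuilder):

// A Test Data Builder for a hypothetical Customer type
public class CustomerBuilder {
    private String name = "Anonymous";
    private long balance = 0;

    public CustomerBuilder withName(String name) { this.name = name; return this; }
    public CustomerBuilder withBalance(long balance) { this.balance = balance; return this; }
    public Customer build() { return new Customer(name, balance); }
}

// The Object Mother hands out builders pre-populated with sensible defaults
public class TestCustomers {
    public static CustomerBuilder aWealthyCustomer() {
        return new CustomerBuilder().withName("Jane Doe").withBalance(1_000_000);
    }
}

// A test then takes the bootstrapped builder and tweaks only what it cares about:
Customer customer = TestCustomers.aWealthyCustomer().withBalance(0).build();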
I believe that whether you should do it as explained above, or just repeat yourself in your tests (which isn't necessarily a bad thing when it comes to testing), is a matter of evaluating the cost of implementing this against continuing with how you're doing things now.
I was just having a conversation with someone in the office about using a business-logic class to build up some data in order to test another class.
Basically, he has class A which takes a complex type as a parameter and then generates a collection of a different complex type as a result. He's written tests around this class already. Now he's moved on to testing another class (class B) which takes the result of class A then performs some logic on it.
He's asked the question, "should I use class A to build up a scenario to test class B with".
At first I said yes, as class A has tests around it. But then I figured, well, what if there are bugs in class A that we haven't found yet... so I guess there must be a better way to address this scenario.
Does anyone have any thoughts on this? Is it OK to use existing logic to save time writing other tests?
Regards,
James
Stances on that might differ. Generally, if code is tested and you assume it works, you're free to use it. This is especially true when using already-tested methods to help with testing others (within a single class/unit). However, as that happens within a single unit, it's a bit different from your case.
Now, when dealing with two separate classes, I'd say you should avoid such an approach, purely for the reason that those two classes might not be related in an obvious way, or their context/scope of usage might vastly differ. As in, somebody might change class A without even knowing class B exists. Class B's tests suddenly break even though no changes were made to B's code. This brings unnecessary confusion and is a situation you usually don't want to find yourself in.
Instead, I suggest creating a helper method within class B's test file. With stubs/fakes and tools like AutoFixture, you should be able to easily reproduce the generation logic used by class A and keep your own "copy" contained in class B's tests.
In order to test class B, the result returned from class A should be replicated somewhere in your test. If class A returns a list of persons, you will have a helper function in your test returning a fake List<Person> to use for your test.
Because you are only testing class B, class A should not be used in your test.
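A minimal sketch of such a helper (Person and ClassB are placeholders for your actual types):

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class ClassBTest {

    // Hand-rolled stand-in for the output of class A; class A itself is never invoked
    private List<Person> createFakePersons() {
        return Arrays.asList(new Person("Alice", 30), new Person("Bob", 25));
    }

    @Test
    public void performsItsLogicOnThePersons() {
        ClassB sut = new ClassB();
        sut.process(createFakePersons());
        // assert on the outcome...
    }
}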
NUnit provides built-in functionality for providing data to your tests; have a look at:
http://www.nunit.org/index.php?p=testCaseSource&r=2.5
Or you can simply create a DataFactory class with methods that return the data (simple objects, collections, etc.) you will consume in your tests.
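A sketch of that DataFactory idea, written in Java here for consistency with the earlier examples (the same shape works in C#; Order is a hypothetical type):

import java.util.Arrays;
import java.util.List;

// Central place for test data shared across test classes
public final class DataFactory {

    private DataFactory() {}  // static utility class

    public static Order simpleOrder() {
        return new Order("ORD-1", 2);
    }

    public static List<Order> orderBatch() {
        return Arrays.asList(simpleOrder(), new Order("ORD-2", 5));
    }
}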
I am working on a reporting application (in PHP). The app has a huge number of different filters, granularity options, etc. in the UI, and based on those filters the backend constructs a massive query to pull hundreds of rows of data from the DB.
How is it possible to write unit tests for something like this?
Let's say I create a test DB with some known data. Would I create a bunch of tests where I compare the returned data set (for whatever filter settings) against hardcoded SQL queries in the tests?
Would this mean that for any schema change, I have to go back and change every single SQL query in the tests?
Unit testing isn't testing against real resources or data; you mock everything the code under test works with. You wouldn't test it in the way you are describing, nor do you need to. You aren't testing what data you get back, only that the data you feed in comes out the way you expect after the method processes it.
For example, if you have a method that returns data retrieved from a database, the database has nothing to do with your test. You are testing just that method and the logic within it: which methods it calls, what you expect those inner methods to do (like return a canned value you can do an assertion on), and so on. Everything outside of that method is mocked (i.e. replaced with a generic stand-in).
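A minimal sketch of that, in Java with Mockito for brevity (the question is about PHP, where PHPUnit's createMock plays the same role; ReportRepository and ReportService are hypothetical names):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import java.util.Arrays;
import org.junit.Test;

public class ReportServiceTest {

    @Test
    public void sumsTheRowsTheRepositoryReturns() {
        // The database layer is replaced by a mock; no real DB is touched
        ReportRepository repo = mock(ReportRepository.class);
        when(repo.fetchRows("region=EU")).thenReturn(Arrays.asList(10, 20, 30));

        ReportService service = new ReportService(repo);

        // Only the service's own logic is under test here
        assertEquals(60, service.totalFor("region=EU"));
        verify(repo).fetchRows("region=EU");
    }
}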
In a simple example, if you created one method that is a setter of something, and one method used as a getter of that something, then you will write a test that says: when I use the setter, the getter will return the same value.... boom, both methods are tested.
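A trivial sketch of that round trip (Report is a hypothetical class under test):

@Test
public void getterReturnsWhatTheSetterStored() {
    Report report = new Report();
    report.setTitle("Quarterly Sales");
    assertEquals("Quarterly Sales", report.getTitle());  // both methods covered in one test
}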
This is the reason you hear about TDD (test-driven development), which may feel counterintuitive at first, but it forces a developer to put together the pieces required to write testable code, which ultimately leads to better code. Yes, you can write code that functions perfectly but isn't testable (or is nearly impossible to test), and that's an indicator that it's far too coupled, meaning it's not that reusable. For example, instead of creating a method that returns the number of apples, you could create a method that takes the object type as a parameter, so that no matter what type of fruit that part of the project uses, it can return a count (oranges, apples, pears, or not even fruit at all). That makes the method reusable, and also means you won't be writing a method for each type of fruit (so you write less code).
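One way to read that fruit example as code, sketched here as a generic counting method rather than anyone's actual implementation:

import java.util.List;

public class Inventory {

    // Counts items of any given type instead of being hard-wired to apples
    public static <T> long countOf(List<?> items, Class<T> type) {
        return items.stream().filter(type::isInstance).count();
    }
}

// usage: countOf(basket, Apple.class), countOf(basket, Orange.class), ...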
Anyway, provide an example of your code and your test so we can see what the issue is.
Imagine a system of filters (maybe audio filters, or text stream filters).
A Filter base class has a do_filter() method, which takes some input, modifies it (perhaps), and returns that as output.
Several subclasses exist, built with TDD, and each has a set of tests that exercise it in isolation.
Along comes a composite class, of an unrelated type Widget, which has two members of different Filter types (a and b), which deal with quite different input - that is, certain input which would be modified by filter a is passed through unmodified by filter b, and vice versa. Its process_data() method calls each filter member's do_filter().
While developing the composite class, there emerge tests that check the assumption that Widget's filters aren't both processing the same data.
The problem is, these sorts of tests look identical to the individual filters' tests. Although there might be other tests that exercise input which should be modified by both filters, many of the tests could almost be copied and pasted from each filter's tests, with only small modifications needed to have them test via Widget (such as calling process_data()); the input data and the assert checks are identical.
This duplication smells pretty bad. But it seems right to want to test the components' interactions. What sort of options will avoid this sort of duplication?
Within one test suite/class, have a method:

public void TestForFooBehaviour(IFilter filter)
{
    /* whatever you would normally have in a test method */
}

Then invoke this method from both the original test on the simple filter and from the composite filter's test. This also works for abstract base classes. Obviously FooBehaviour should be a meaningful description of the aspect of filters you are testing. Do this for each behaviour you want to test.
If your language supports duck typing or generics, feel free to use it if it helps.
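A sketch of that sharing in Java (FilterA, FilterB, and Widget stand in for the question's classes; the filter shape is assumed from the question's do_filter method):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Assumed minimal filter shape, matching the question's do_filter
interface Filter {
    String do_filter(String input);
}

// Shared behaviour checks; lives in the test space, not a test class by itself
class FilterBehaviours {
    static void assertPassesThroughUnrelatedInput(Filter filter) {
        assertEquals("unrelated input", filter.do_filter("unrelated input"));
    }
}

// The simple filter's test reuses the shared check
class FilterATest {
    @Test
    public void passesThroughUnrelatedInput() {
        FilterBehaviours.assertPassesThroughUnrelatedInput(new FilterA());
    }
}

// The composite's test reuses the very same check via a method reference
class WidgetTest {
    @Test
    public void passesThroughInputNeitherFilterHandles() {
        Widget widget = new Widget(new FilterA(), new FilterB());
        FilterBehaviours.assertPassesThroughUnrelatedInput(widget::process_data);
    }
}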
I fairly frequently extract test logic into separate classes, so I'd extract the filter test to a separate class that is essentially not a unit test by itself. Especially if your test classes are physically separated from your production code, this is a decent way to solve the problem (i.e. no one will think it is production code, since it lives in the test space).
I asked something similar about an abstract base class and unit testing here; it has some interesting points that you might find useful.
How to unit test abstract classes: extend with stubs?
What is an ObjectMother and what are common usage scenarios for this pattern?
ObjectMother starts with the factory pattern, by delivering prefabricated test-ready objects via a simple method call. It moves beyond the realm of the factory by:
* facilitating the customization of created objects,
* providing methods to update the objects during the tests, and
* if necessary, deleting the object from the database at the completion of the test.
Some reasons to use ObjectMother:
* Reduce code duplication in tests, increasing test maintainability
* Make test objects super-easily accessible, encouraging developers to write more tests.
* Every test runs with fresh data.
* Tests always clean up after themselves.
(http://c2.com/cgi/wiki?ObjectMother)
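A minimal sketch of an ObjectMother with those responsibilities (Customer and CustomerRepository are hypothetical; the delete step only applies if your tests persist the object):

// Hypothetical ObjectMother for Customer test objects
public class CustomerMother {

    // Prefabricated, test-ready object via a simple method call
    public static Customer simple() {
        return new Customer("Jane Doe", "jane@example.com");
    }

    // Customization of the created object
    public static Customer withEmail(String email) {
        Customer customer = simple();
        customer.setEmail(email);
        return customer;
    }

    // Updating the object during the test
    public static void promoteToPremium(Customer customer) {
        customer.setTier("PREMIUM");
    }

    // Deleting the object from the database when the test completes
    public static void delete(Customer customer, CustomerRepository repository) {
        repository.delete(customer);
    }
}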
See "Test Data Builders: an alternative to the Object Mother pattern" for an argument of why to use a Test Data Builder instead of an Object Mother. It explains what both are.
As stated elsewhere, an ObjectMother is a factory for generating objects, typically (exclusively?) for use in unit tests.
Where they are of great use is for generating complex objects where the data is of no particular significance to the test.
Where you might have created an empty instance, such as
Order rubbishOrder = new Order("NoPropertiesSet");
_orderProcessor.Process(rubbishOrder);
you would use a sensible one from the ObjectMother
Order motherOrder = ObjectMother.SimpleOrder();
_orderProcessor.Process(motherOrder);
This tends to help with situations where the class being tested starts to rely on a sensible object being passed in.
For instance, if you added some OrderNumber validation to the Order class above, you would simply need to set the OrderNumber in the SimpleOrder method for all the existing tests to pass, leaving you to concentrate on writing the validation tests.
If you had just instantiated the object in the test you would need to add it to every test (it is shocking how often I have seen people do this).
Of course, this could just be extracted out to a method, but putting it in a separate class allows it to be shared between multiple test classes.
Another recommended practice is to use good descriptive names for your methods, to promote reuse. It is all too easy to end up with one object per test, which is definitely to be avoided. It is better to generate objects that represent general rather than specific attributes, and then customize them for your test. For instance, ObjectMother.WealthyCustomer() rather than ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorsche() and ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorscheAndAFerrari().