Imagine a system of filters (maybe audio filters, or text stream filters).
A Filter base class has a do_filter() method, which takes some input, modifies it (perhaps), and returns that as output.
Several subclasses exist, built with TDD, and each has a set of tests that test it in isolation.
Along comes a composite class of an unrelated type, Widget, which has two members of different Filter types (a and b) that deal with quite different input - that is, certain input which would be modified by filter a is passed through unmodified by filter b, and vice versa. Its process_data() method calls each filter member's do_filter().
While developing the composite class, tests emerge that check the assumption that Widget's filters aren't both processing the same data.
The problem is that this sort of test looks identical to the individual filters' tests. Although there might be other tests, which exercise input that should be modified by both filters, many of the tests could almost be copied and pasted from each filter's tests, with only small modifications needed to make them test Widget (such as calling process_data()); the input data and the assert checks are identical.
This duplication smells pretty bad, but it seems right to want to test the components' interactions. What options will avoid this sort of duplication?
Within one test suite/class, have a method
public void TestForFooBehaviour(IFilter filter)
{
/* whatever you would normally have in a test method */
}
Then invoke this method both from the original test of the simple filter and from the test of the composite. This also works for abstract base classes. Obviously, FooBehaviour should be a meaningful description of the aspect of the filters you are testing. Do this for each behaviour you want to test; a sketch follows below.
If your language supports duck typing or generics, feel free to use them if they help.
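For illustration, here is a minimal JUnit-style sketch of the idea in Java. Since Widget is not an IFilter, this variation passes the operation under test as a function rather than the filter object itself; FilterA, the sample strings, and the assumption that the filters work on Strings are all invented for the example:

import java.util.function.UnaryOperator;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FooBehaviourTest {

    // The shared behaviour check, invoked from every test that needs it.
    private void checkFooBehaviour(UnaryOperator<String> subject) {
        assertEquals("expected output", subject.apply("foo input"));
    }

    @Test
    public void filterAShowsFooBehaviour() {
        // Runs the shared check against the lone filter...
        checkFooBehaviour(input -> new FilterA().do_filter(input));
    }

    @Test
    public void widgetPreservesFooBehaviour() {
        // ...and the same check against the composite, without duplicating asserts.
        checkFooBehaviour(input -> new Widget().process_data(input));
    }
}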
I fairly frequently extract test logic to separate classes, so I'd extract the filter test to a separate class that is essentially not a unit test by itself. Especially if your test classes are physically separated from your production code, this is a decent way to solve the problem (i.e., no one will mistake it for production code, since it lives in the test space).
I asked something similar about an abstract base class and unit testing here; it has some interesting points that you might find useful:
How to unit test abstract classes: extend with stubs?
To give a specific example: I have a PromoCode Aggregate Root that contains a PromoCodeUsage Entity, which is controlled only by that AR, so some methods on the AR are simply delegated to that Entity, like:
public function useFor(Order $order): void
{
$this->promoCodeUsage->useFor($order);
}
And some are only partly delegated, like:
public function applyFor(Order $order): void
{
if (!$this->published) {
throw new NotPublishedPromoCodeCanNotBeApplied();
}
$this->promoCodeUsage->applyFor($order);
}
My test suite fully covers all PromoCode behavior, including the PromoCodeUsage functionality, because in an earlier iteration there was no PromoCodeUsage and all the logic was mixed into PromoCode. Then I refactored some of that logic into PromoCodeUsage. The PromoCode test suite had many tests, and I was glad I could split it as well (though it kept working even after splitting the entities). So I created another test suite (PromoCodeUsageTest) and moved part of the tests there from PromoCodeTest.
But the PromoCodeUsage tests exercise the PromoCodeUsage entity through the behavior of PromoCode, the same way the original tests did before the split; they never touch PromoCodeUsage directly. So now I have a PromoCodeTest suite and a PromoCodeUsageTest suite, each with its own set of tests.
But it is somehow weird that 1) in PromoCodeTest I omit some tests (they live elsewhere), 2) in PromoCodeUsageTest I never actually touch the PromoCodeUsage entity, and 3) I use Roy Osherove's template for test naming, and I don't know which method name to use in the test name - the one from PromoCode or from PromoCodeUsage? In my case they are the same, but they could differ, and that idea smells.
If I rewrite the PromoCodeUsage tests to test the PromoCodeUsage entity directly, I end up with some uncovered methods on PromoCode (the ones that just delegate to PromoCodeUsage). That takes me back to my approach of testing PromoCodeUsage through the PromoCode AR.
Uncle Bob (and others) say it is good practice to test behavior, not API. Does my approach conform to that?
I sense some smell in my approach - do you? How can I do it better?
You're right to think about testing behavior. I assume all the behavior of your aggregate is exposed through the aggregate root, so it makes sense to test through the root. I would just suggest you name your tests to describe the behavior they are testing. Don't use method names in the test names, because those can change - that ties your test names to the internal implementation of your production code.
If a test class is getting very large, it makes sense to break it into smaller classes - there's no rule that you must have a 1:1 relationship between test and production classes. However, this may suggest that your class - your aggregate, in this case - has too many responsibilities and could be broken into smaller pieces.
I tend to see Aggregates as state machines and test them accordingly.
What's important is not which test file the tests live in, but that you test all possible resulting states of the PromoCode aggregate, depending on the starting state and the kind of promo-code usage/application you're doing.
Of course, this might require looking deep down into the guts of the aggregate, at dependent entities. If you're more comfortable putting all the tests whose asserts look at PromoCodeUsage into a different test class, that's fine, as long as the test names reflect the domain and not technical details.
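For illustration only, a state-based test of this kind might look like the following. It is written in Java rather than the poster's PHP, and PromoCode.unpublished(), the Order fixture, and isUsedFor() are invented names, not the poster's actual API:

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.fail;

public class PromoCodeStateTest {

    @Test
    public void anUnpublishedPromoCodeCannotBeApplied() {
        PromoCode promoCode = PromoCode.unpublished(); // hypothetical factory
        Order order = new Order();                     // hypothetical order fixture

        try {
            promoCode.applyFor(order);
            fail("expected NotPublishedPromoCodeCanNotBeApplied");
        } catch (NotPublishedPromoCodeCanNotBeApplied expected) {
            // Assumes the exception is unchecked, as in the PHP original.
            // Assert on the resulting state of the aggregate,
            // not on which internal entity did the work.
            assertFalse(promoCode.isUsedFor(order));   // hypothetical state accessor
        }
    }
}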
When writing tests, I usually have some methods creating test data:
@Test
public void someMethod_somePrecondition_someResult() {
ClassUnderTest cut = new ClassUnderTest();
NeededData data = createNeededData();
cut.performSomeActionWithTheData(data);
assertTrue(cut.someMethod());
}
private NeededData createNeededData() {
NeededData data = new NeededData();
// Initialize data with needed values for the tests
return data;
}
I think this is a good approach to minimizing duplication within a test class (most unit-testing frameworks also provide functionality to set up test data). But what if I test classes that need similar test data? Is it a good choice to give every test class its own createNeededData() method, even if they are all the same, or should I use other classes to generate the test data, to minimize code duplication?
Disclaimer: I haven't used what I'm suggesting here yet, so this is just what I believe.
I recently read about a pattern called Object Mother, which is basically a factory that creates objects with the different data you might need. M. Fowler also describes these objects as akin to personas; that is, you might generate different objects that represent different use cases.
Now, the Object Mother pattern is not without its problems: it can easily grow a lot and become quite cumbersome to maintain as your project grows. In the article 'Test Data Builders and Object Mother: Another Look', the author talks about using the builder pattern to create test objects, which he concludes is also not perfect, and then goes on to hypothesize about combining a builder with an object mother.
So basically you'd use the Object Mother pattern to bootstrap some repetitive data, and then use the returned builder to configure the object to your test's specific needs (sketched below).
I believe that whether you should do it as explained above, or just repeat yourself in your tests (which isn't necessarily a bad thing when it comes to testing), is a matter of weighing the cost of implementing this against the cost of continuing the way you're doing things now.
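For illustration, a minimal sketch of that combination in Java; all the names here (Order, OrderBuilder, OrderMother, the default values) are invented:

class Order {
    final String customer;
    final int itemCount;
    Order(String customer, int itemCount) { this.customer = customer; this.itemCount = itemCount; }
}

class OrderBuilder {
    private String customer = "anonymous";
    private int itemCount = 0;
    OrderBuilder withCustomer(String customer) { this.customer = customer; return this; }
    OrderBuilder withItemCount(int itemCount)  { this.itemCount = itemCount; return this; }
    Order build() { return new Order(customer, itemCount); }
}

class OrderMother {
    // The mother bootstraps the repetitive defaults and hands back the builder,
    // so each test overrides only what it actually cares about.
    static OrderBuilder aTypicalOrder() {
        return new OrderBuilder().withCustomer("Jane Doe").withItemCount(3);
    }
}

// Usage in a test:
// Order emptyOrder = OrderMother.aTypicalOrder().withItemCount(0).build();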
I was just having a conversation with someone in the office about using a business-logic class to build up some data in order to test another class.
Basically, he has class A, which takes a complex type as a parameter and generates a collection of a different complex type as a result. He has already written tests around this class. Now he has moved on to testing another class (class B), which takes the result of class A and performs some logic on it.
He's asked the question, "should I use class A to build up a scenario to test class B with".
At first I said yes, since class A has tests around it. But then I figured: what if there are bugs in class A that we haven't found yet? So I guess there must be a better way to address this scenario.
Does anyone have any thoughts on this? Is it OK to use existing logic to save time writing other tests?
Regards,
James
Stances on that might differ. Generally, if code is tested and you assume it works, you're free to use it. This is especially true when using already-tested methods to help with testing others (within a single class/unit). However, as that happens within a single unit, it's a bit different from your case.
When dealing with two separate classes, I'd say you should avoid such an approach, purely for the reason that those two classes might not be related in an obvious way, or their context/scope of usage might differ vastly. As in: somebody might change class A without even knowing class B exists, and class B's tests suddenly break even though no changes were made to B's code. This brings unnecessary confusion and is a situation you usually don't want to find yourself in.
Instead, I suggest creating a helper method within the class B test file. With stubs/fakes and tools like AutoFixture, you should be able to easily reproduce the generation logic used by class A and keep your own "copy" contained in class B's tests.
In order to test class B, the result returned by class A should be replicated somewhere in your test. If class A returns a list of persons, you would have a helper function in your test that returns a fake List<Person> to use.
Because you are testing only class B, class A should not be used in your test.
NUnit provides built-in functionality for supplying data to your tests; have a look at:
http://www.nunit.org/index.php?p=testCaseSource&r=2.5
Or you can simply create a DataFactory class with methods that return the data (simple objects, collections, etc.) you will consume in your tests, as sketched below.
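For illustration, a minimal sketch of such a DataFactory in Java (the names and values are invented; the same idea carries straight over to NUnit/C#):

import java.util.ArrayList;
import java.util.List;

class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class DataFactory {
    // Fabricates the shape class A would have produced,
    // so class B's tests never need to invoke class A at all.
    static List<Person> fakePersons() {
        List<Person> persons = new ArrayList<>();
        persons.add(new Person("Alice"));
        persons.add(new Person("Bob"));
        return persons;
    }
}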
I am working on a reporting application (in PHP). This app has a huge number of different filters, granulations, etc. in the UI, and based on those filters the backend constructs a massive query to pull hundreds of rows of data from the db.
How is it possible to write unit tests for something like this?
Let's say I create a test db with some known data. Would I create a bunch of tests where I compare the returned data set (for whatever filter settings) against hardcoded SQL queries in the tests?
Would this mean that for any schema change, I have to go back and change every single SQL query in the tests?
Unit testing isn't testing in a way that uses real code or data; you mock everything you work with. You wouldn't test it the way you are describing, nor do you need to. You aren't testing what data you get back, only that the data you feed in, after the method processes it, is what you expect (or similar).
For example, if you have a method that returns data retrieved from a database, the database has nothing to do with your test. You are testing just that method and the logic within it: which methods it calls, what you expect those methods to do (like return a generic representation of a value you can run an assertion on), and so on. Everything outside of that method is mocked (i.e., replaced with a generic representation).
In a simple example, if you create one method that is the setter of something and another that is the getter of that something, then you write a test that says: when I use the setter, the getter will return the same value... boom, both methods are tested.
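A minimal sketch of that round-trip test in Java (Counter and its accessors are invented for illustration):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

class Counter {
    private int value;
    void setValue(int value) { this.value = value; }
    int getValue() { return value; }
}

public class CounterTest {

    @Test
    public void getterReturnsWhatTheSetterStored() {
        Counter counter = new Counter();
        counter.setValue(42);
        // One assertion exercises both the setter and the getter.
        assertEquals(42, counter.getValue());
    }
}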
This is the reason you hear about TDD (test-driven development), which may feel counterintuitive at first, but it forces a developer to put together the pieces required to write testable code, which ultimately leads to better code. Yes, you can write code that functions perfectly but isn't testable (or is nearly impossible to test), and that's an indicator that it's far too coupled, meaning it's not very reusable. For example, instead of creating a method that returns the number of apples, you could create a method that takes the object type as a parameter, so that no matter what type of fruit you are using in that part of the project, it can return a count (oranges, apples, pears, or not even fruit at all). That makes the method reusable, and it also means you won't be writing a method for each type of fruit (so you write less code).
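A sketch of that "parametrize the type instead of hard-coding apples" idea, with all names invented:

import java.util.Collection;

class FruitCounter {
    // Counts items of whatever type the caller asks for,
    // instead of hard-coding a countApples() method per fruit.
    static <T> long countOf(Collection<?> basket, Class<T> type) {
        return basket.stream().filter(type::isInstance).count();
    }
}

// Usage: FruitCounter.countOf(basket, Apple.class) or
// FruitCounter.countOf(basket, Orange.class) - no per-fruit method needed.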
Anyway, provide an example of your code and your test, so we can see what the issue is.
I have to write unit tests for a method which requires a complex object graph. Currently I am writing a Create method for each test, as shown below:
private static Entity Create_ObjectGraph_For_Test1()
private static Entity Create_ObjectGraph_For_Test2()
...... and so on
Each Create method has about 10 steps, and the methods vary from one another by only 1-2 steps. What is the best way to create a complex object graph? Apart from writing a Create method for each test, I could add parameters to a single Create method, but that might become confusing if the number of tests is around 10 or so.
You can extract the steps into methods, possibly parametrizing them, and make them chainable, so that one can write:
Entity myGraph = GraphFactory.createGraph().step1().step2(<parm>).step3(<parm>);
Choosing meaningful names makes the fixture readable.
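A minimal sketch of such a chainable fixture in Java; Entity comes from the question, but the step names here are invented:

class Entity { /* root of the object graph under construction */ }

class GraphFactory {
    private final Entity root = new Entity();

    static GraphFactory createGraph() { return new GraphFactory(); }

    // Each step mutates the graph and returns the factory so calls can chain.
    GraphFactory withBaseSetup()         { /* the common steps */ return this; }
    GraphFactory withChildren(int count) { /* a parametrized step */ return this; }

    Entity build() { return root; }
}

// Each test chains exactly the steps it needs:
// Entity graph = GraphFactory.createGraph().withBaseSetup().withChildren(3).build();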
It may be possible to put a substantial amount of common setup code into - well, of course - the setup() method, and then modify the object graph slightly for each individual test. If the setups for the different tests are sufficiently different, then I would encourage you to put the tests into separate classes, with each class doing its own setup independently.