How to avoid returning mocks from a mocked object list

I'm trying out mock/responsibility-driven design. I'm having trouble avoiding mocks that return mocks when an object needs a service to retrieve other objects.
An example could be an object that checks whether the bills from last month are paid. It needs a service that retrieves a list of bills, so I need to mock that BillRetrievalService in my tests. At the same time, I need the mocked BillRetrievalService to return mocked Bills (since I don't want my test to rely on the correctness of the Bill implementation).
Is my design flawed? Is there a better way to test this? Or is this the way it will need to be when using finder objects (the finding of the bills in this case)?
Side note: although Bill might be a value object candidate, the wider problem still remains when the collections don't contain value objects (e.g. Users).
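To make the situation concrete, the test I end up writing looks roughly like this (Mockito is used just for illustration; BillPaymentChecker, billsForLastMonth and allLastMonthBillsPaid are made-up names, not real code):

```java
import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.*;

import java.util.Arrays;
import org.junit.Test;

public class BillPaymentCheckerTest {

    @Test
    public void reportsUnpaidWhenAnyBillFromLastMonthIsUnpaid() {
        // The finder service is mocked...
        BillRetrievalService retrieval = mock(BillRetrievalService.class);

        // ...and so are the Bills it returns, so the test doesn't depend on Bill's implementation.
        Bill paid = mock(Bill.class);
        Bill unpaid = mock(Bill.class);
        when(paid.isPaid()).thenReturn(true);
        when(unpaid.isPaid()).thenReturn(false);
        when(retrieval.billsForLastMonth()).thenReturn(Arrays.asList(paid, unpaid));

        BillPaymentChecker checker = new BillPaymentChecker(retrieval);

        assertFalse(checker.allLastMonthBillsPaid()); // a mock (the service) returning mocks (the Bills)
    }
}
```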

Most of the time, if I need a mock to return another mock, I find a dependency that makes more sense in the other direction. Stated differently, mock-returning-mock usually points to a violation of the Dependency Inversion Principle.
One common exception: a factory that creates objects (as opposed to a "holder" that simply returns the same object each time). If I need to create multiple objects of the same type during my lifetime, then I might need to depend on an ObjectFactory and invoke #createObject(), then perhaps set expectations on the Objects. Even so, I would question this. It might be possible for something else one level up the call stack to create Objects for me and give them to me as needed.
In the ObjectHolder case, rather than depending on the ObjectHolder to get the Object, I prefer to depend on the Object directly and force my caller to give it to me however it wants. This respects the desirable design property of context independence.
One specific version of this issue is the "Virtual Clock" pattern. Sometimes you need to depend on a virtual clock, but often it's better simply to demand a timestamp ("Instantaneous Request" pattern) or, at worst, a stream of timestamps, wherever it comes from. Tests can provide a controlled stream of convenient, hardcoded timestamps, and it's also easy to turn the system clock into a stream of timestamps.
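A minimal sketch of those two options, with made-up names and java.time types assumed for the timestamps:

```java
import java.time.Instant;
import java.util.List;
import java.util.function.Supplier;

// "Instantaneous Request": the method simply demands the timestamp it needs,
// so tests pass any hardcoded Instant and production passes Instant.now().
class LateFeeChecker {
    boolean isOverdue(Instant dueDate, Instant now) {
        return now.isAfter(dueDate);
    }
}

// At worst, depend on a stream of timestamps. Production wires in the system
// clock; a test hands over a controlled, hardcoded sequence.
class PaymentRecorder {
    private final Supplier<Instant> timestamps;

    PaymentRecorder(Supplier<Instant> timestamps) {
        this.timestamps = timestamps;
    }

    Instant recordPayment() {
        return timestamps.get();
    }
}

class Wiring {
    void examples() {
        // production: the system clock as a stream of timestamps
        PaymentRecorder live = new PaymentRecorder(Instant::now);

        // test: a controlled stream of convenient, hardcoded timestamps
        PaymentRecorder test = new PaymentRecorder(
                List.of(Instant.parse("2024-01-01T00:00:00Z"),
                        Instant.parse("2024-01-02T00:00:00Z")).iterator()::next);
    }
}
```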

A mock returning mocks is a strong code smell - a possible problem with the design. It could be that the Bills should be immutable value objects, which should not be mocked. Or there is some confusion in the design and the responsibilities of the classes.
The book Growing Object-Oriented Software, Guided by Tests and paper Mock Roles, not Objects from the inventors of mock objects are worth reading.

As the Way of the Testuvius advises, no principle, however good, should be taken as absolute, and so it is with the rule that you shouldn't need mocks returning mocks: there are cases when it is quite suitable.
As Gutzofter suggests, you could break your object into two, one for the actual validation and another for retrieval of the bills to validate. The advantage of this application of the single responsibility principle is that the validator would be more generic and reusable. On the other hand, if you only have this simple use case and no special need for higher reusability, it's very pragmatic to keep the retrieval and validation in a single class. Layering, an explosion in the number of objects, etc., unjustified by a real need and a real benefit and done only for the sake of satisfying an abstract principle, isn't good. You always have to weigh the pros and cons, and reality is rarely as simple and beautiful as we would like :-) Great examples of this pragmatic approach are in Adam Bien's Real World Java EE Patterns - Rethinking Best Practices.

Usually when I mock, I end up having a triad of objects. The first object would be a coordinator, BillsPaidLastMonthCoordinator; this object has two dependencies, BillRetrievalService and BillPaidValidator.
You would mock the two dependencies, and your test would be for the interaction of retrieving bills and passing them to the validation. For this test you will not care what the data is. This helps separate responsibilities: your original object was responsible for retrieving Bills and then checking whether each Bill was paid.
With the way you described the problem, you can end up with a noisy and brittle test. The brittleness comes from it being able to be broken in two ways.
With the coordinator, it doesn't have to change if the Bill implementation changes, just the objects that actually use a Bill. My two centavos.
[EDIT]
This is more aligned with using event handlers (the coordinator)
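Here's roughly what the coordinator and its interaction-only test could look like (Mockito used for illustration; the method names billsForLastMonth, validate and checkLastMonth are assumptions, not anything from your code):

```java
import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class BillsPaidLastMonthCoordinatorTest {

    @Test
    public void handsTheRetrievedBillsToTheValidator() {
        BillRetrievalService retrieval = mock(BillRetrievalService.class);
        BillPaidValidator validator = mock(BillPaidValidator.class);

        // The coordinator test doesn't care what the bills contain,
        // only that whatever comes back from retrieval reaches validation.
        List<Bill> bills = Arrays.asList(mock(Bill.class), mock(Bill.class));
        when(retrieval.billsForLastMonth()).thenReturn(bills);

        new BillsPaidLastMonthCoordinator(retrieval, validator).checkLastMonth();

        verify(validator).validate(bills);
    }
}
```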

Related

Don't mock domain objects rule?

The most experienced developer in my current team has set a few hard rules, based on best practices, that we should follow. Among them is "You never ever mock domain object". I asked him why we couldn't, but he never had time to give me a proper answer. Now he's away for a week, and my distrust of that rule has reached a peak. Here's my situation.
My domain object has an Update method with several parameters, one being an interface for a calculator. It updates a few fields, runs the calculator and assigns its results to some of its other fields.
The proper unit test for the Update method itself is reasonably long.
Now I have some piece of code which does a few things and then calls Update on such a domain object.
I would normally mock the object and just check that the Update method is called with the proper arguments. But now I have to test that it's being properly called by checking its fields and mocking the calculator, the same way I did when unit testing the Update method itself. And I would have to do that everywhere the Update method gets called.
How fun will it be when the Update method changes a bit and every one of those tests suddenly breaks and needs reworking... I feel this rule is just like shooting myself in the foot.
So I need to know: why "You never ever mock domain object"?
You never ever mock domain object
This is more of a heuristic than a best practice, and I can imagine that it makes sense in some projects.
This heuristic can help you have better tests. Good tests are easy to write, easy to read, and they break as soon as something goes wrong. If you mock your domain model, you will probably also have to mock its behavior, and if you have some complex behavior in your model, mocking it will be time consuming. The test will also be more fragile (mistakes can happen in predicting the domain model's output).
The other alternative is to do sociable unit testing. That means the unit to be tested will not be a single class in isolation but a class and some of its dependencies. In other words, you just use a concrete object of your domain model in the test.
I would normally mock the object and just check that the Update method is called with the proper arguments. But now I have to test that it's being properly called by checking its fields and mocking the calculator, the same way I did when unit testing the Update method itself.
Checking that the Update method is called doesn't necessarily mean the operation succeeded; it's probably not what you want to check to see whether your service works correctly. If you pass a concrete class from your model (including the calculators), you won't have to do any useless checks or repeat what has been done in other tests.
One of the differences between this solution and testing classes in isolation is that when there is a bug in your domain model, many other tests will fail. But this is totally fine in my opinion, since it will have to be fixed anyway.
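A sociable test for the service might look something like this (all names here - Invoice, InvoiceService, TaxCalculator - are made up for illustration; the point is that the domain object is real and only the calculator boundary is faked):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceServiceTest {

    // A trivial hand-written calculator, assuming TaxCalculator is a one-method interface.
    private final TaxCalculator twentyPercent = amount -> amount * 0.20;

    @Test
    public void processingAnInvoiceRecalculatesItsTotal() {
        // A real domain object, not a mock.
        Invoice invoice = new Invoice("INV-42", 100.0);

        new InvoiceService(twentyPercent).process(invoice);

        // Assert on the resulting state instead of verifying that Update was called.
        assertEquals(120.0, invoice.total(), 0.001);
    }
}
```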
You never ever mock domain object
There might be different reasons to avoid mocking your domain objects:
Mocking libraries, just like ORMs, can affect the design of your domain objects. In C# they usually require interfaces or virtual methods.
A preference for testing against real instances. Very often people use mocking libraries because it is an easy way to create a fake or a stub, but very often you can simply build an instance of your domain object and use it in your test. Most likely the fact that the Update method was called can easily be checked via a state change.
I would normally mock the object and just check that the Update method is called with the proper arguments. But now I have to test that it's being properly called by checking its fields and mocking the calculator, the same way I did when unit testing the Update method itself.
How fun will it be when the Update method changes a bit and every one of those tests suddenly breaks and needs reworking...
It was already mentioned in the comments that the Update method probably has too many responsibilities, and that's why it is used in so many use cases.
There are different tastes when it comes to defining a system under test (sut). If you tend to test classes then you often find yourself using mocking libraries. If you tend to test scenarios or behaviors then most likely you will only care about state changes.
You can probably explore a different design for your domain object: Try to split Update method into explicit domain specific methods so each client will trigger a different behavior. Given this approach a "reasonably long" test for the Update method can even become obsolete since it will be tested by other tests.
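As a rough illustration of that split (placeholder types only; nothing here is taken from your code):

```java
import java.util.List;

// Placeholder types, just so the sketch is self-contained.
interface Address {}
interface LineItem {}
interface TaxCalculator {}

// Before: one catch-all method that every use case funnels through.
interface Order {
    void update(Address address, List<LineItem> items, TaxCalculator calculator);
}

// After: explicit, intention-revealing domain methods. Each client triggers only
// the behavior it needs, and each method gets its own small, focused test, so a
// change to one behavior no longer breaks the tests for all the others.
interface OrderAfterSplit {
    void changeShippingAddress(Address address);
    void replaceItems(List<LineItem> items);
    void recalculateTotals(TaxCalculator calculator);
}
```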
While Update() is probably not as tailored a method as you would expect from a rich domain model - which may have an impact on test fragility - I agree that you should take "never do X" advice with a grain of salt.
If you understand all the implications and know that the integration test version will be less maintainable, definitely use mocks. You, and not blanket statements or cargo cults, are the ultimate judge to what's better for your system in your context.

TDD creating some "controller" classes - at what level of intention should its tests be written?

I've recently started practising TDD and unit testing, with my main primers being the excellent GOOSGBT and a perusal of TDD-tagged questions here on SO.
Occasionally, the process I use creates a "controller" class - generally, a class which is a facade over a fairly complex subsystem where, as the number of features implemented in the subsystem grows, responsibilities are continually driven out into helper classes until the original class has essentially no responsibilities beyond making correct calls to a small set of collaborator classes and shunting the returned information (if any) to its other collaborator classes.
Originally, the tests for the soon-to-be controller classes were written at the level of intention of end-users of the class: "If I make this call, what should be the observable effects that I, as an end-user of the class, actually care about?". But as more and more responsibilities and tests for edge-cases were driven out into helper classes (which are replaced by Test Doubles in the tests for the controller class), these tests began to seem really ... vague and non-specific: they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter. It's hard to explain what I mean, but reading the tests back left me with a kind of "So what? Why did you choose this particular happy-path test over any other? What is the significance? If someone breaks the code, will this test pinpoint the exact reason why the code is now broken?" As time went by, I was more and more strongly inclined to instead write the tests in terms of how the classes' collaborators were supposed to be used together: "the class should call this method on this collaborator, and pass the result to this other collaborator" which gave a much more focussed, descriptive and clearly-motivated set of tests (and generally, the resulting set of tests is small).
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
So my question is: do fellow TDD'ers find that a minority of classes do little but marshall their (small) set of collaborators around? Do you find keeping the tests for such classes written from an end-user-of-the-class point of view to be imprecise and unsatisfactory, and if so, is it acceptable to write tests for such classes explicitly in terms of how they call and transfer data between their collaborators?
Hope it's reasonably clear what I'm driving at, here! :)
As a concrete example: one practise project I was working on was a TV listings downloader/ viewer (if you've ever seen "Digiguide", you'll know the kind of thing I mean), and I was implementing a core part of the app - the part that actually updates the listings over the net and integrates the newly downloaded listings into the current set of known TV programs. The interface to this (surprisingly complex when all requirements are taken on board) functionality was a class called ListingsUpdater, which had a method called "updateListings".
Now, end-users of ListingsUpdater only really care about a few things: after updateListings has been called, is the database of TV listings now correct, and were the changes made to the database (adding TV programs, changing them if broadcast changes occurred, etc.) described to the provided change listeners? When the implementation was a very, very simple "fake it till you make it" type of deal, this worked fine: but as I progressively drove the implementation towards one that would work in the real world, the "real work" got driven further and further away from ListingsUpdater, until it mainly just marshalled a few collaborators: a ListingsRequestPreparer for assessing the current state of the listings and building HTTP requests for a ListingsDownloader, and a ListingsIntegrator which unpacked the newly downloaded listings and incorporated them (it too delegating to collaborators) into the listings database. Now, note that in order to fulfil the contract of ListingsUpdater from a user's point of view, I must, in the test, instruct its ListingsIntegrator Test Double to populate the (fake) database with the correct data(!), which seems bizarre. It seems much more sensible to drop the "from the end-user of ListingsUpdater's point of view" tests and instead add a test that says "when the ListingsDownloader has downloaded the new listings, ensure they are handed over to the ListingsIntegrator".
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
I'll repeat what I said in answer to another question:
I need to create either a mock, a stub or a dummy object [a test double] for each dependency
This is commonly stated. But I think it is wrong. If a Car is associated with an Engine object, why not use a real Engine object when unit testing your Car class?
But, someone will declare, if you do that you are not unit testing your code; your test depends on both the Car class and the Engine class: two units, so an integration test rather than a unit test. But do those people mock the String class too? Or HashSet<String>? Of course not. The line between unit and integration testing is not so clear.
More philosophically, you can not create good mock objects [test doubles] in many cases. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it does delegate, and how, is left by the contract as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure. And debugging that test failure will be easy.
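For instance, a test along these lines (Car, Engine and their methods are invented here purely to illustrate the point):

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class CarTest {

    @Test
    public void acceleratingIncreasesSpeed() {
        // A real Engine, not a test double: it's cheap to construct and deterministic,
        // so there's no more reason to mock it than there would be to mock a String.
        Car car = new Car(new Engine());

        car.accelerate();

        assertTrue(car.speed() > 0);
    }
}
```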
And I'll add in response to
they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter
This is a more general testing problem, not specific to TDD or unit testing: how do you select a good set of test cases, given that comprehensive testing is impossible? I rely on equivalence partitioning. When I start work on some code, I use equivalence partitioning to select the set of test cases I want the code to pass, then work on each in turn in a TDD manner; if passing one of the test cases does not require a code change (because earlier work has created code that also satisfies that test case), I still add the test case to my test suite. My test suite therefore has better coverage of potential error paths.
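A small sketch of what that looks like in practice, one test per equivalence class of the input (DiscountCalculator and its thresholds are invented for the example; JUnit shown for illustration):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;
import org.junit.Test;

// One test case per equivalence class of the input, all chosen up front and all
// kept in the suite even if an earlier change already made some of them pass.
public class DiscountCalculatorTest {

    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    public void noDiscountBelowTheThreshold() {
        assertEquals(0, calculator.percentFor(99));
    }

    @Test
    public void standardDiscountAtTheThreshold() {
        assertEquals(5, calculator.percentFor(100));
    }

    @Test
    public void bulkDiscountWellAboveTheThreshold() {
        assertEquals(10, calculator.percentFor(500));
    }

    @Test
    public void negativeQuantitiesAreRejected() {
        assertThrows(IllegalArgumentException.class, () -> calculator.percentFor(-1));
    }
}
```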

Types of methods which are hard to unit test?

What unit tests generally tend to be hard to write and why? I am particularly interested in methods which don't need mocking.
Thanks
Two cases where unit testing is made difficult:
Methods that invoke static methods that belong to other classes, particularly when those other classes have static state, or do significant work. Being stuck trying to "unit" test a method that, through transitive closure, does database queries can suck.
Methods that create instances of other classes directly (i.e., via new), particularly when the constructor of the other class itself requires static state, or does significant work.
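A sketch of the second case and one way out of it (Database, Config and the method names are invented for the example):

```java
// Hard to unit test: the collaborator is reached through a static call and a
// direct new, so the only way to exercise nextNumber() is against a real database.
class InvoiceNumberer {
    String nextNumber() {
        Database db = new Database(Config.getConnectionString()); // static state + real work
        return "INV-" + (db.maxInvoiceId() + 1);
    }
}

// Easier: the collaborator is passed in, so a test can hand over a stub instead.
class TestableInvoiceNumberer {
    private final Database db;

    TestableInvoiceNumberer(Database db) {
        this.db = db;
    }

    String nextNumber() {
        return "INV-" + (db.maxInvoiceId() + 1);
    }
}
```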
A great A to Z guide of testability concerns with side by side code examples of easy/hard to test code can be found in Misko's extensive testability guide.
Click on the "flaw #x" links (they look like plain text but they're separate links).
Big, complex methods that do lots of things at the same time that really should have been separated. (Example: get something from a configuration object, create a URL based on some variables, encode the URL, send a request, do something with the response... you get the idea.)
Everything static. Things created with new, although I haven't found a proper way to avoid that without spamming the entire application with factories.
It's almost always about dependencies.
Most code depends on external systems such as databases, file systems, email clients, networks, etc. It's also common to have dependencies on major internal systems (e.g., the spell-checking module, or the recalc engine...).
If these dependencies are not easily substitutable, then the system becomes hard to test.
Classes that call statics and singletons are the worst offenders, but any class that doesn't accept its dependencies via constructor or properties will be hard to test.
There are some legitimate situations that are hard to test:
Concurrency
User Interface - this is why the trend is towards MVC/MVVM-style architectures that create ViewModels, which can be easily tested. The actual rendering is minimized; this is called the humble dialog or humble object pattern in the testing literature (a sketch follows below).
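Roughly, the humble object split looks like this (LoginViewModel and AuthService are invented names; the framework-facing view code is reduced to the commented wiring at the bottom):

```java
// All decision-making lives in a plain, easily tested class...
interface AuthService {
    boolean authenticate(String user, String password);
}

class LoginViewModel {
    private final AuthService auth;
    private String errorMessage = "";

    LoginViewModel(AuthService auth) {
        this.auth = auth;
    }

    void submit(String user, String password) {
        errorMessage = auth.authenticate(user, password) ? "" : "Invalid credentials";
    }

    String errorMessage() {
        return errorMessage;
    }
}

// ...while the actual view is "humble": no branching, nothing worth unit testing.
// loginButton.setOnAction(e -> {
//     viewModel.submit(userField.getText(), passwordField.getText());
//     errorLabel.setText(viewModel.errorMessage());
// });
```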

When is it appropriate to do interaction based testing as opposed to state based testing?

When I use EasyMock (or a similar mocking framework) to implement my unit tests, I'm forced to do interaction-based testing (as I don't get to assert on the state of my dependencies - or am I mistaken?).
On the other hand, if I use a hand-written stub (instead of using EasyMock) I can implement state-based testing.
I'm quite unclear if I want to go with interaction based testing or state based testing.
I'm biased and I want to use EasyMock, but I'm not sure if there are any side effects that I may have to face in the future.
Can anyone please throw some light on this?
Thanks in advance!
You have to divide your objects into domainy value objects (which hold state and should be immutable) and services. Services are the things other objects should ask to perform a particular task, but your code shouldn't be concerned about how this task is performed. To test the service in isolation without testing its peers, use a mock.
Value objects, which may contain domain functionality such as calculations, should never be mocked, because their responsibility is calculating and not delegating.
In a well designed system, services should always be injected and never returned from other services, so generally speaking, mocks shouldn't return mocks.
There is no reason you cannot do both. I find behavior-based or interaction-based testing using mocks saves a lot of boilerplate when all you want to do is test behavior. With hand-written stubs you end up with a lot of booleans indicating that a method was called that you have to then test for. That is redundant, brittle and quite a drag.
On the other hand, sometimes you do want to test state: for example, when the behavior of the object under test needs to change based on the state of the candidate for mocking or stubbing, and there is some complex interaction to work out.
In that case, mocking frameworks can get in the way, and a hand-written stub makes managing the state much easier for the purposes of the test.
So the bottom line is that they are not mutually exclusive - use what makes sense for a given test. As long as each test is small and tests only one thing (as much as is reasonable), you shouldn't find yourself in a situation where you started with a mock and suddenly find you have to do a bunch of work to get things back to a stub.
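To make the contrast concrete, here's the same behavior checked both ways (EasyMock for the interaction test, a hand-written stub for the state test; TransferService and AuditLog are made up for the example):

```java
import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

interface AuditLog {
    void record(String entry);
}

class TransferService {
    private final AuditLog log;

    TransferService(AuditLog log) {
        this.log = log;
    }

    void transfer(int amount) {
        log.record("transfer of " + amount);
    }
}

public class TransferServiceTest {

    // Interaction-based: EasyMock verifies that the collaborator was asked to do its job.
    @Test
    public void tellsTheAuditLogAboutTheTransfer() {
        AuditLog log = createMock(AuditLog.class);
        log.record("transfer of 100"); // record the expected interaction
        replay(log);

        new TransferService(log).transfer(100);

        verify(log);
    }

    // State-based: a hand-written stub captures state that the test asserts on afterwards.
    @Test
    public void recordsTheTransferredAmount() {
        class InMemoryAuditLog implements AuditLog {
            String lastEntry;
            public void record(String entry) { lastEntry = entry; }
        }
        InMemoryAuditLog log = new InMemoryAuditLog();

        new TransferService(log).transfer(100);

        assertEquals("transfer of 100", log.lastEntry);
    }
}
```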

N-tier architecture design separation of concerns

I realize there have already been a number of posts on n-tier design, and this could possibly be me overthinking things and going round in circles, but I have myself all confused now and would like to get some clarity from the community, please.
I am trying to separate a project I created, (and didn't design architecturally very well to start with), out into different layers (each in their own project):
UI
Business Objects
Logic / Business
DAL
The UI should only call the Logic layer to get its stuff
The Business Objects should not call or have references to anything else, just be a way of storing the data
The Logic / BUSINESS layer should hold all of the methods to get, create, update, delete (CRUD) objects in the system and would have references to both the BO and the DAL. It would apply the business logic to the operations then delegate the actual CRUD to the DAL.
The DAL would just do the CRUD operations on the DB. It would have a reference to the BO's as it would return them for the Gets etc.
My question is: should each Logic class only call its equivalent DAL class, and call other Logic classes for anything else? In other words, the CompanyLogic class should only call the CompanyDAL class, so if it wanted to get a Client object by ID it would call ClientLogic.GetClientByID(int) rather than ClientDAL.GetClientByID(int).
The reasons I thought it should stay within its own layer were that:
It would seem to loosen the coupling between projects
It keeps the logic where it belongs: if getting a Client object involved some validation logic, that would stay in the Logic layer (possibly not the best example, but I hope it gets the point across).
EDIT:
I am not sure if it is bad design on my part, but at the moment the BUSINESS layer has a number of classes, including ClientBULL and CompanyBULL, and the two classes call one another. I use an interface for each class and have a factory to build the objects to try to reduce coupling, but they can no longer exist without each other because each calls methods on the other. Is this a bad idea?
Well, here's my comments on your design:
Logic is a bad name for what essentially is a layer assigned to abstract persistence. I would probably call it "Repository" or "Persistence" or DAO (data access objects) instead of "Logic", which is ambiguous and could absolutely mean anything.
If you really want to decouple your business layer from your DAL, your Logic layer should only accept interfaces to the DAL, and not concrete DAL classes.
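For instance, something along these lines (sketched in Java rather than C#; the Client/ClientDal/ClientLogic names follow the question, the rest is made up):

```java
// The Logic class depends only on an abstraction of the DAL, so tests (or a
// different persistence technology) can supply their own implementation.
class Client {
    // fields omitted
}

interface ClientDal {
    Client getClientById(int id);
}

class ClientLogic {
    private final ClientDal clients;

    ClientLogic(ClientDal clients) {
        this.clients = clients;
    }

    Client getClientById(int id) {
        Client client = clients.getClientById(id);
        // business validation lives here, not in the DAL
        if (client == null) {
            throw new IllegalArgumentException("No client with id " + id);
        }
        return client;
    }
}
```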
There are two schools of thought as to where validation should reside. Some are completely fine with validation sitting at the UI layer; others would rather throw exceptions or pass messages from the business layer. Whichever way you go, just be consistent, don't duplicate validations in multiple places, and you'll be fine.
"Go ahead and try coding it" is probably the best piece of advice I could give you. It's all well and fine to think it through, but at some point you'll need to see it while you're coding it, and only then will the subtle quirks and pitfalls reveal themselves. Whatever prototypes you can come up with will definitely be valuable to the direction your development and design take.
Good luck!
Update
Re your edit: Within the same namespace or assembly, calls to concrete classes are definitely fine. I think it will be overly convoluted for you to need to put up interfaces for business logic -- I mean is there more than one set of rules you should follow?
I'm a believer in keeping things simple and following YAGNI. Don't make an interface until there are more than two classes that are going to implement (or already implement) it (the DAL is always an exception to this, though).