Faking a Method of the Object Under Test - unit-testing

Is there a reason why you shouldn't create a partial fake of an object, or fake just one method on the object that you are testing, for the sake of testing another method? This might save you from creating an entire new mock object, or it might help when the method you are faking has an external dependency which you can't reasonably remove and would like to keep out of all the other unit tests.

The objects you want to do this for are trying to do too many things. In particular, if you have an external dependency, you would normally create an object to isolate that dependency; the Façade pattern is one example of this. If your objects weren't designed with testability in mind, you may have to do some refactoring. Take a look at Michael Feathers' paper Working Effectively with Legacy Code (PDF). He also has a book by the same title that goes into much more detail.

It is a very bad idea to mock or fake one part of a class in order to test another part of it.
If you do this, you are not testing what the real code does under the conditions of the test, which leads to unreliable test results.
It also increases the maintenance burden of the faked part of the class. If the fake is in effect for the whole test program, it also makes other tests of the faked method harder.
You need to ask yourself why you need to fake out the part under test.
If it is because the method is accessing a file or database, then you should define an interface and pass an instance of that interface to the class constructor or method. This allows you to test different scenarios in the same test application (a minimal sketch of this follows at the end of this answer).
If it is because you are using singletons, you should rethink your design to make it more testable: removing singletons will remove implicit dependencies and maintenance nightmares.
If you are using static methods/free-standing functions to access data in a registry or settings file, you should really move that out of the function under test and pass the data as a parameter or provide a settings provider interface. This will make the code more flexible and robust.
If it is to break a dependency for the purpose of testing (e.g. faking out a vector method to test a method in a matrix class), then you should not be faking that -- you should treat the code under test as whatever the class defines through its public interface: methods, pre-conditions, post-conditions, invariants, documentation, parameters and exception specifications.
You can use knowledge of the implementation details to test special edge cases, but trigger those through the main API, not by faking an implementation detail.
For example, suppose you faked std::vector::at(), but the implementation later switched to use operator[] instead. Your test would either break or pass without exercising the behaviour you thought you had set up.
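As a minimal sketch of the interface-and-constructor approach mentioned above (LineSource, ReportGenerator and the fake below are invented names, not from the original question), the dependency is pulled behind an interface and the test hands in a hand-written fake instead of faking part of the class under test:

import java.util.Arrays;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The external dependency hidden behind an interface.
interface LineSource {
    List<String> readLines();
}

// The class under test depends only on the interface, which is passed to its constructor.
class ReportGenerator {
    private final LineSource source;
    ReportGenerator(LineSource source) { this.source = source; }
    int countNonEmptyLines() {
        int count = 0;
        for (String line : source.readLines()) {
            if (!line.trim().isEmpty()) count++;
        }
        return count;
    }
}

// In production you would pass an implementation that reads a real file;
// in the test, a tiny in-memory fake replaces the file system entirely.
public class ReportGeneratorTest {
    @Test
    public void countsOnlyNonEmptyLines() {
        LineSource fake = () -> Arrays.asList("first", "", "second");
        assertEquals(2, new ReportGenerator(fake).countNonEmptyLines());
    }
}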

If the method you want to fake is virtual (as in, not static and not final), then you can subclass your object in your test, override the method in the subclass, and exercise the subclass in the test. No mock-object libraries required.
(Ideally you should consider refactoring; this is not a great long-term solution. But it is a way to get legacy code under test so you can start the refactoring process more easily.)
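A minimal sketch of the subclass-and-override idea (PriceCalculator and fetchExchangeRate are invented for illustration): only the awkward method is overridden, so the arithmetic in priceInEuros is still the real code being exercised.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Class under test: fetchExchangeRate() is the awkward external call.
class PriceCalculator {
    public double priceInEuros(double dollars) {
        return dollars * fetchExchangeRate();   // the logic we actually want to test
    }
    // Not static, not final, so a test subclass can override it.
    protected double fetchExchangeRate() {
        throw new UnsupportedOperationException("calls an external rate service in production");
    }
}

public class PriceCalculatorTest {
    @Test
    public void convertsDollarsUsingTheCurrentRate() {
        // Subclass-and-override: replace only the dependency, keep the rest real.
        PriceCalculator calculator = new PriceCalculator() {
            @Override
            protected double fetchExchangeRate() { return 0.5; }
        };
        assertEquals(5.0, calculator.priceInEuros(10.0), 1e-9);
    }
}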

The Extract and Override technique described in Chapter 3 of Roy Osherove's The Art of Unit Testing does seem to be a way to fake part of the class under test (pp. 71-77). Osherove does not address the concerns raised in some of the other answers to this question.
In addition, Michael Feathers discusses this in Working Effectively with Legacy Code. He terms the resulting class a testing subclass (p. 227) and the technique Subclass and Override Method (p. 401). Now, granted, Feathers is not giving an exposition of pristine techniques that are recommended on new code. But he still gives it serious treatment as a potentially helpful technique.
I also asked my former computer professor about this. He is well-read and currently works full-time in the software industry, where he has advanced rapidly. He said that this technique definitely has a good application, and that there are several dozen classes in the codebase at his company that are under test in this way. He said that, like any technique, it can be overused.
I originally wrote the question when I was new to unit testing and knew next to nothing about dependency injection. Now, after some experience with both, I would add that the need to use this testing technique could be a smell: it may be a sign that you need to rework your approach to dependencies. If the method that needs to be faked is one that is inherited from a base class, it may mean that you need to take the adage "favor composition over inheritance" more seriously. You should inject your dependencies rather than inheriting them.

There are some really nice packages for facilitating this kind of stuff. For instance, from the Mockito docs:
// Requires: import static org.mockito.Mockito.mock;
//           import static org.mockito.Mockito.when;
//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
does some real magic that's hard to believe at first. When you call
String firstMember = (String) mockedList.get(0); // cast needed because the raw LinkedList returns Object
you'll get back "first", because of what you said in the "when" statement.

Related

TDD creating some "controller" classes - at what level of intention should its tests be written?

I've recently started practising TDD and unit testing, with my main primers being the excellent GOOSGBT (Growing Object-Oriented Software, Guided by Tests) and a perusal of TDD-tagged questions here on SO.
Occasionally, the process I use creates a "controller" class - generally, a class which is a facade over a fairly complex subsystem where, as the number of features implemented in the subsystem grows, responsibilities are continually driven out into helper classes until the original class has essentially no responsibilities beyond making correct calls to a small set of collaborator classes and shunting the returned information (if any) to its other collaborator classes.
Originally, the tests for the soon-to-be controller classes were written at the level of intention of end-users of the class: "If I make this call, what should be the observable effects that I, as an end-user of the class, actually care about?". But as more and more responsibilities and tests for edge-cases were driven out into helper classes (which are replaced by Test Doubles in the tests for the controller class), these tests began to seem really ... vague and non-specific: they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter. It's hard to explain what I mean, but reading the tests back left me with a kind of "So what? Why did you choose this particular happy-path test over any other? What is the significance? If someone breaks the code, will this test pinpoint the exact reason why the code is now broken?" feeling. As time went by, I was more and more strongly inclined to instead write the tests in terms of how the classes' collaborators were supposed to be used together: "the class should call this method on this collaborator, and pass the result to this other collaborator", which gave a much more focussed, descriptive and clearly-motivated set of tests (and generally, the resulting set of tests is small).
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
So my question is: do fellow TDD'ers find that a minority of classes do little but marshal their (small) set of collaborators around? Do you find keeping the tests for such classes written from an end-user of the classes' point of view to be imprecise and unsatisfactory, and if so, is it acceptable to write tests for such classes explicitly in terms of how they call and transfer data between their collaborators?
Hope it's reasonably clear what I'm driving at, here! :)
As a concrete example: one practice project I was working on was a TV listings downloader/viewer (if you've ever seen "Digiguide", you'll know the kind of thing I mean), and I was implementing a core part of the app - the part that actually updates the listings over the net and integrates the newly downloaded listings into the current set of known TV programs. The interface to this (surprisingly complex when all requirements are taken on board) functionality was a class called ListingsUpdater, which had a method called "updateListings".
Now, end-users of ListingsUpdater only really care about a few things: after updateListings has been called, is the database of TV listings now correct, and were the changes made to the database (adding TV programs, changing them if broadcast changes occurred etc) described to the provided change listeners? When the implementation was a very, very simple "fake it till you make it" type of deal, this worked fine: but as I progressively drove the implementation towards one that would work in the real world, the "real work" got driven further and further away from ListingsUpdater, until it mainly just marshalled a few collaborators: a ListingsRequestPreparer for assessing the current state of the listings and building HTTP requests for a ListingsDownloader, and a ListingsIntegrator which unpacked the newly downloaded listings and incorporated them (it too delegating to collaborators) into the listings database. Now, note that in order to fulfil the contract of ListingsUpdater from a user's point of view, I must, in the test, instruct its ListingsIntegrator Test Double to populate the (fake) database with the correct data(!), which seems bizarre. It seems much more sensible to drop the "from the end-user of ListingsUpdater's point of view" tests and instead add a test that says "when the ListingsDownloader has downloaded the new listings, ensure they are handed over to the ListingsIntegrator".
I'll repeat what I said in answer to another question:
I need to create either a mock, a stub or a dummy object [a test double] for each dependency
This is commonly stated. But I think it is wrong. If a Car is associated with an Engine object, why not use a real Engine object when unit testing your Car class?
But, someone will declare, if you do that you are not unit testing your code; your test depends on both the Car class and the Engine class: two units, so an integration test rather than a unit test. But do those people mock the String class too? Or HashSet<String>? Of course not. The line between unit and integration testing is not so clear.
More philosophically, you cannot create good mock objects [test doubles] in many cases. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it delegates, and how, is left by the contract as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure. And debugging that test failure will be easy.
And I'll add in response to
they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter
This is a more general testing problem, not specific to TDD or unit testing: how do you select a good set of test-cases, given that comprehensive testing is impossible? I rely on equivalence partitioning. When I start work on some code, I use equivalence partitioning to select the set of test-cases I want the code to pass, then work on each in turn in a TDD manner; but if passing one of the test-cases does not require a code change (because earlier work has created code that also satisfies that test case), I still add the test-case to my test-suite. My test suite therefore has better coverage of potential error paths.
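To make the equivalence-partitioning step concrete (the shipping-cost rule below is invented purely for illustration): pick one case per partition plus the boundaries, and keep every case in the suite even if it passed without a code change.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical code under test with three input partitions:
//   weight <= 0          -> rejected
//   1 <= weight <= 10    -> flat rate of 5
//   weight > 10          -> 5 plus 1 per extra kilogram
class Shipping {
    static int cost(int weightKg) {
        if (weightKg <= 0) throw new IllegalArgumentException("weight must be positive");
        return weightKg <= 10 ? 5 : 5 + (weightKg - 10);
    }
}

public class ShippingCostTest {
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNonPositiveWeight() {
        Shipping.cost(0);                         // invalid partition
    }

    @Test
    public void chargesFlatRateForLightParcels() {
        assertEquals(5, Shipping.cost(1));        // lower boundary
        assertEquals(5, Shipping.cost(10));       // upper boundary
    }

    @Test
    public void chargesPerKiloForHeavyParcels() {
        assertEquals(6, Shipping.cost(11));       // just past the boundary
        assertEquals(25, Shipping.cost(30));      // interior of the partition
    }
}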

Do I only have to mock out external dependencies in a unit test? What about internal dependencies?

Do I only have to mock out external dependencies in a unit test?
What if the method that I want to test has a dependency on another class within the same assembly? Do I have to mock out the dependency to make sure I am testing only one thing, and therefore writing a unit test instead of an integration test?
Is an integration test a test that tests dependencies in general, or do I have to distinguish between internal and external dependencies?
An example would be a method that has 2000 lines of code with 5 method invocations (all methods coming from the same assembly).
Generally a proper unit test is testing only that single piece of code. So a scenario like this is where you start to ask yourself about the coupling of these two classes. Does Class A internally depend on the implementation of Class B? Or does it just need to be supplied an instance of Type B (notice the difference between a class and a type)?
If the latter, then mock it because you're not testing Class B, just Class A.
If the former, then it sounds like creating the test has identified some coupling that can (perhaps even should) be refactored.
Edit: (in response to your comment) I guess a key thing to remember while doing this (and retro-fitting unit tests into a legacy system is really, really difficult) is to mentally separate the concepts of a class and a type.
The unit tests are not for Class A, they are for Type A. Class A is an implementation of Type A which will either pass or fail the tests. Class A may have an internal dependency on Type B and need it to be supplied, but Type A might not. Type A is a contract of functionality, which is further expressed by its unit tests.
Does Type A specify in its contract that implementations will require an instance of Type B? Or does Class A resolve an instance of it internally? Does Type A need to specify this, or is it possible that different implementations of Type A won't need an instance of Type B?
If Type A requires an instance of Type B, then it should expose this externally and you'd supply the mock in your tests. If Class A internally resolves an instance of Type B, then you'd likely want to be using an IoC container where you'd bootstrap it with the mock of Type B before running the tests.
Either way, Type B should be a mock and not an implementation. It's just a matter of breaking that coupling, which may or may not be difficult in a legacy system. (And, additionally, may or may not have a good ROI for the business.)
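A small sketch of the Type/Class distinction in practice (CustomerDirectory and GreetingService are invented; the mock/when calls are Mockito, as in the example earlier in this document): Type B is the CustomerDirectory contract, Class A is GreetingService, and the test supplies a mock of Type B rather than any concrete implementation.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// "Type B": a contract, not a concrete class.
interface CustomerDirectory {
    String nameOf(int customerId);
}

// "Class A": one implementation whose contract says it needs a CustomerDirectory.
class GreetingService {
    private final CustomerDirectory directory;
    GreetingService(CustomerDirectory directory) { this.directory = directory; }
    String greet(int customerId) {
        return "Hello, " + directory.nameOf(customerId) + "!";
    }
}

public class GreetingServiceTest {
    @Test
    public void greetsCustomersByName() {
        CustomerDirectory directory = mock(CustomerDirectory.class);   // mock of Type B
        when(directory.nameOf(7)).thenReturn("Ada");
        assertEquals("Hello, Ada!", new GreetingService(directory).greet(7));
    }
}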
Working with a code base like the one you're describing isn't easy: multiple problems are combined into something you don't know how to start changing. There are strong dependencies between classes as well as between problems, and maybe even no overall design.
In my experience, this takes a lot of effort and time, as well as skill in doing this kind of work. A very good resource for learning how to work with legacy code is Michael Feathers' book Working Effectively with Legacy Code.
In short, there are safe refactorings you can do without risking breaking things, which might help you get started. There are also other refactorings which require tests to protect how things work. Tests are essential when refactoring code. This doesn't, of course, come with a 100% guarantee that things won't break, because there may be so many hidden "features" and so much complexity that you cannot be aware of it all when you start. Depending on the code base, the amount of work you need to do varies greatly, but for large code bases there is usually a lot of work.
You'll need to understand what the code does, either by simply knowing it or by finding out what the current code does. In either case, you start by writing "larger" tests which are not really unit tests; they just protect the current code. They might cover larger parts, more like integration/functional tests. These are your guards when you start to refactor the code. When you have such tests in place and you feel comfortable with what the code does, you can start refactoring the parts the "larger" tests cover. For the smaller parts you change, you write proper unit tests. Iterating through various refactorings will at some point make the initial large tests unnecessary, because you now have a much better code base and unit tests (or you simply keep them as functional tests).
Now, coming back to your question.
I understand what you mean with your question, but I'd still like to change it slightly, because there are more important aspects than external versus internal. I believe a better question to ask is: which dependencies do I need to break to get a better design and to write unit tests?
The answer is that you should break all dependencies that you do not control, that are slow or non-deterministic, or that pull in too much state for a single unit test. These are certainly all the external ones (filesystem, printer, network etc.). Also note that multi-threading is not suitable for unit tests, because it is not deterministic. By internal dependencies I assume you mean classes with members, or functions calling other functions. The answer here is maybe: you need to decide whether you are in control and whether the design is good. Probably in your case you are not in control and the code is not good, so you need to refactor things to get them under control and into a better design. Michael Feathers' book is great here, but you need to work out how to apply the ideas to your own code base, of course.
One very good technique for breaking dependencies is dependency injection. In short, it changes the design so that you pass in the collaborators a class uses instead of letting the class instantiate them itself. Each dependency you pass in is typed as an interface (or abstract base class), so you can easily change what you pass in. For instance, using this you can supply a class with different member implementations in production and in a unit test. This is a great technique and also leads to good design if used wisely. A compact before/after sketch follows.
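The sketch below uses invented names (MailClient, SmtpClient, ReportMailer); the "before" shape appears only in the comment because it is exactly what makes the class hard to unit test.

// Before (hard to test) the class instantiated its collaborator itself:
//     private final SmtpClient client = new SmtpClient("mail.internal");
// After: the collaborator is passed in behind an interface.

interface MailClient {
    void deliver(String message);
}

class SmtpClient implements MailClient {                        // production implementation
    private final String host;
    SmtpClient(String host) { this.host = host; }
    public void deliver(String message) { /* real SMTP call */ }
}

class ReportMailer {
    private final MailClient client;
    ReportMailer(MailClient client) { this.client = client; }   // dependency injected
    void send(String report) { client.deliver("REPORT: " + report); }
}

// Production wiring:  new ReportMailer(new SmtpClient("mail.internal"));
// Unit-test wiring:   new ReportMailer(message -> deliveredMessages.add(message));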
Good luck and take your time! ;)
Generally speaking, a method with 2000 lines of code is just plain BAD. I usually start to look for reasons to make new classes -- not even methods, but classes -- when I have to use the Page Down key more than three or four times to browse through it (and collapsible regions don't count).
So, yes, you do need to get rid of dependencies from outside and inside of the assembly, and you need to think about the responsibility of the class. It sounds like this one has way too much weight on its shoulders, and it sounds like it is very close to impossible to write unit tests for. If you think about testability, you will automatically start to inject dependencies and downsize your classes, and BAM!!! There you have it: nice and pretty code!! :-)
Regards,
Morten

Is it acceptable to use a 'real' utility class instead of mocking in TDD?

I have a project I am trying to learn unit testing and TDD practices with. I'm finding that I'm getting into quite confusing cases where I am spending a long time setting up mocks for a utility class that's used practically everywhere.
From what I've read about unit testing, if I am testing MyClass, I should be mocking any other functionality (such as provided by UtilityClass). Is it acceptable (assuming that UtilityClass itself has a comprehensive set of tests) to just use the UtilityClass rather than setting up mocks for all the different test cases?
Edit: an example of one of the things I am doing a lot of setup for.
I am modelling a map, with different objects in different locations. One of the common methods on my utility class is GetDistanceBetween. I am testing methods that have effects on things depending on their individual properties, so, for example, a feature that selects all objects within 5 units of a point and with an age over 3 will need several tests (gets old objects in range, ignores old objects out of range, ignores young objects in range, works correctly with multiples of each case), and all of those tests need setup of the GetDistanceBetween method. Multiply that out by every method that uses GetDistanceBetween (almost every one) and the different results that the method should return in different circumstances, and it gets to be a lot of setup.
I can see as I develop this further, there may be more utility class calls, large numbers of objects and a lot of setup on those mock utility classes.
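To make the question concrete, the alternative I'm asking about would look something like this sketch (Java-flavoured, with invented types standing in for my real ones): since GetDistanceBetween is deterministic and already covered by its own tests, the test just builds real objects and lets the real distance calculation run, with no per-test stubbing.

import java.util.List;
import java.util.stream.Collectors;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Invented stand-ins for the real map model.
record Point(double x, double y) {}
record MapObject(Point position, int age) {}

// The deterministic, already-tested utility: used directly, not mocked.
final class GeometryUtils {
    static double getDistanceBetween(Point a, Point b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }
}

final class MapQueries {
    static List<MapObject> withinRangeAndOlderThan(List<MapObject> objects,
                                                   Point origin, double range, int minAge) {
        return objects.stream()
                .filter(o -> o.age() > minAge)
                .filter(o -> GeometryUtils.getDistanceBetween(origin, o.position()) <= range)
                .collect(Collectors.toList());
    }
}

public class MapQueriesTest {
    @Test
    public void selectsOnlyOldObjectsWithinRange() {
        MapObject oldNear   = new MapObject(new Point(1, 1), 7);
        MapObject oldFar    = new MapObject(new Point(50, 50), 7);
        MapObject youngNear = new MapObject(new Point(2, 2), 1);

        List<MapObject> hits = MapQueries.withinRangeAndOlderThan(
                List.of(oldNear, oldFar, youngNear), new Point(0, 0), 5.0, 3);

        // Real distances are computed; no GetDistanceBetween stubbing per test case.
        assertEquals(List.of(oldNear), hits);
    }
}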
The rule is not "mock everything" but "make tests simple". Mocking should be used if
You can't create an instance with reasonable effort (read: you need a single method call but to create the instance, you need a working database, a DB connection, and five other classes).
Creation of the additional classes is expensive.
The additional classes return unstable values (like the current time or primary keys from a database).
TDD isn't really about testing. Its main benefit is to help you design clean, easy-to-use code that other people can understand and change. If its main benefit were testing, then you would be able to write tests after your code, rather than before, with much the same effect.
If you can, I recommend you stop thinking of them as "unit tests". Instead, think of your tests as examples of how you can use your code, together with descriptions of its behaviour which show why your code is valuable.
As part of that behaviour, your class may want to use some collaborating classes. You can mock these out.
If your utility classes are a core part of your class's behaviour, and your class has no value or its behaviour makes no sense without them, then don't mock them out.
Aaron Digulla's answer is pretty good; I'd rephrase each of his criteria according to these principles as:
The behaviour of the collaborating class is complex and independent of the behaviour of the class you're interested in.
Creation of the collaborating class is not a valuable aspect of your class and does not need to be part of your class's responsibility.
The collaborating class provides context which changes the behaviour of your class, and therefore plays into the examples of how you can use it and what kind of behaviour you might expect.
Hope that makes sense! If you liked it, take a look at BDD which uses this kind of vocabulary far more than "test".
In theory you should try to mock all dependencies, but in reality it's never possible. E.g. you are not going to mock the basic classes from the standard library. In your case, if the utility class just contains some basic helper methods, I don't think I would bother to mock it.
If it's more complicated than that, or it connects to some external resources, you have to mock it. You could consider creating a dedicated mock builder class that creates a standard mock for you (with some standard stubs defined, etc.), so that you can avoid duplicating mocking code in all your test classes.
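The "dedicated mock builder" idea might look like the following sketch (PaymentGateway and its stubbed defaults are invented; Mockito is assumed, as in the earlier example):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// The external resource being mocked.
interface PaymentGateway {
    String defaultCurrency();
    boolean isAvailable();
}

// One place defines the "standard" mock, so tests don't repeat the same stubbing.
final class PaymentGatewayMockBuilder {
    private String currency = "EUR";
    private boolean available = true;

    static PaymentGatewayMockBuilder aGateway() { return new PaymentGatewayMockBuilder(); }

    PaymentGatewayMockBuilder withCurrency(String currency) {
        this.currency = currency;
        return this;
    }

    PaymentGatewayMockBuilder unavailable() {
        this.available = false;
        return this;
    }

    PaymentGateway build() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.defaultCurrency()).thenReturn(currency);
        when(gateway.isAvailable()).thenReturn(available);
        return gateway;
    }
}

// In a test: one readable line instead of repeated stubbing code.
//     PaymentGateway gateway = PaymentGatewayMockBuilder.aGateway().withCurrency("USD").build();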
No, it is not acceptable, because you are no longer testing the class in isolation, which is one of the most important aspects of a unit test. You are testing it together with its dependency on this utility, even if the utility has its own set of tests. To simplify the creation of mock objects you could use a mock framework. Here are some popular choices:
Rhino Mocks
Moq
NSubstitute
Of course if this utility class is private and can only be used within the scope of the class under test then you don't need to mock it.
Yes, it is acceptable. What's important is to have the UtilityClass thoroughly unit tested and to be able to differentiate if a test is failing because of the Class under test or because of the UtilityClass.
Testing a class in isolation means testing it in a controlled environment, in an environment where one controls how the objects behave.
Having to create too many objects in a test setup is a sign that the environment is getting too large and thus is not controlled enough. The time has come to resort to mock objects.
All the previous answers are very good and really match with my point of view about static utility classes and mocking.
There are two types of utility classes: your own classes that you write, and third-party utility classes.
As the purpose of a utility class is to provide a small set of helper methods, your utility classes, or any third-party utility classes you use, should be very well tested.
First case: the condition for using your own utility class (even a static one) without mocking is that you provide a set of valid unit tests for this class.
Second case: if you use a third-party utility library, you should have enough confidence in this library. Most of the time, such libraries are well tested and well maintained, and you can use them without mocking their methods.

In a few words, what can be said about the mocking process in TDD?

I'd like to refresh my memory to avoid confusion. In a few words, what can be said about the mocking process in TDD?
What's the GREAT idea behind MOCKING?
Are mocking frameworks meant to be used only to avoid accessing a DB during tests, or can they be used for something else?
For newcomers (like me), are all the frameworks equal, or do I need to choose one for this or that reason?
In addition to eliminating databases and other slow or ancillary concerns from the unit being tested, mocking allows you to start writing tests for a class without having to implement any collaborating classes.
As you design some piece of functionality, you'll realize that you need some other class or service, in order to stick to the single responsibility principle, but then you'll have to implement those to get the first one working, which in turn will demonstrate the need for still more classes.
If you can mock or stub those dependencies, then you can create the interfaces upon which that first class will rely, without actually having to implement anything outside of that class -- just return canned results from stubs of the interfaces.
This is an essential component to a test-first approach.
The GREAT idea: LIMIT THE SCOPE OF YOUR TESTS. By removing dependencies you remove the risk of test failures because of dependencies. That way you can focus on the correctness of the code that USES those dependencies.
Mocking DB's is very common but you can mock any dependency with an interface. In a recent project we mocked a web service, for example. You might even want to mock another business object just to make sure that you aren't relying on the correctness of the logic in that object.
I'd choose whichever one seems easiest to use. Moq is really nice.
I suggest you start here:
Mocks Aren't Stubs
It is probably the article that got me thinking the right way about mocks. Sure, the mocked object is usually heavyweight (otherwise it may not be worth mocking), but it doesn't have to be heavy in the sense that it has some strong reliance on an external system like a database. It can just be a complex piece that you need to isolate so that you are effectively testing only your class and not the dependency.

Why is it so bad to mock classes?

I recently discussed mocking with a colleague. He said that mocking classes is very bad and should not be done, except in a few cases.
He says that only interfaces should be mocked, otherwise it's an architecture fault.
I wonder why this statement (I fully trust him) is correct. I don't know why, and would like to be convinced.
Did I miss the point of mocking? (Yes, I read Martin Fowler's article.)
Mocking is used for protocol testing - it tests how you'll use an API, and how you'll react when the API reacts accordingly.
Ideally (in many cases at least), that API should be specified as an interface rather than a class - an interface defines a protocol, a class defines at least part of an implementation.
On a practical note, mocking frameworks tend to have limitations around mocking classes.
In my experience, mocking is somewhat overused - often you're not really interested in the exact interaction, you really want a stub... but mocking frameworks can be used to create stubs, and you fall into the trap of creating brittle tests by mocking instead of stubbing. It's a hard balance to get right though.
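A small illustration of that stub-versus-mock balance (Notifier and OrderProcessor are invented; Mockito syntax as in the earlier example): the when(...) line is stubbing, which merely supplies a canned answer, while the verify(...) line is the mock-style check of the interaction protocol.

import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

interface Notifier {
    boolean isEnabled();
    void send(String message);
}

class OrderProcessor {
    private final Notifier notifier;
    OrderProcessor(Notifier notifier) { this.notifier = notifier; }
    void complete(String orderId) {
        if (notifier.isEnabled()) {
            notifier.send("completed " + orderId);
        }
    }
}

public class OrderProcessorTest {
    @Test
    public void notifiesWhenNotificationsAreEnabled() {
        Notifier notifier = mock(Notifier.class);
        when(notifier.isEnabled()).thenReturn(true);   // stubbing: a canned answer

        new OrderProcessor(notifier).complete("A-1");

        verify(notifier).send("completed A-1");        // mocking: verify the interaction
    }
}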
IMHO, what your colleague means is that you should program to an interface, not an implementation. If you find yourself mocking classes too often, it's a sign you broke the previous principle when designing your architecture.
Mocking classes (in contrast to mocking interfaces) is bad because the mock still has a real class in the background that it inherits from, and it is possible that the real implementation is executed during the test.
When you mock (or stub or whatever) an interface, there is no risk of having code executed you actually wanted to mock.
Mocking classes also forces you to make everything that could possibly be mocked virtual, which is very intrusive and can lead to bad class design.
If you want to decouple classes, they should not know each other; this is the reason why it makes sense to mock (or stub, or whatever) one of them. So implementing against interfaces is recommended anyway, but that has been mentioned enough by others here.
I would suggest staying away from mocking frameworks as far as possible. At the same time, I would recommend using mock/fake objects for testing as much as possible. The trick here is that you should create built-in fake objects together with the real objects. I explain it in more detail in a blog post I wrote about it: http://www.yegor256.com/2014/09/23/built-in-fake-objects.html
Generally you'd want to mock an interface.
While it is possible to mock a regular class, it tends to influence your class design too much for testability. Concerns like accessibility, whether or not a method is virtual, etc. will all be determined by the ability to mock the class, rather than true OO concerns.
There is one faking library called TypeMock Isolator that allows you to get around these limitations (have cake, eat cake) but it's pretty expensive. Better to design for testability.
The answer, like most questions about practices, is "it depends".
Overuse of mocks can lead to tests that don't really test anything. It can also lead to tests which are virtual re-implementations of the code under test, tightly bound to a specific implementation.
On the other hand, judicious use of mocks and stubs can lead to unit tests which are neatly isolated and test one thing and one thing alone - which is a good thing.
It's all about moderation.
It makes sense to mock classes so tests can be written early in the development lifecycle.
There is a tendency to continue to use mock classes even when concrete implementations become available. There is also a tendency to develop against the mock classes (and stubs) that were necessary early in a project, when some parts of the system had not been built.
Once a piece of the system has been built it is necessary to test against it and continue to test against it (for regression). In this case starting with mocks is good but they should be discarded in favour of the implementation as soon as possible. I have seen projects struggle because different teams continue to develop against the behaviour of the mock rather than the implementation (once it is available).
By testing against mocks you are assuming that the mock is characteristic of the system. Often this involves guessing what the mocked component will do. If you have a specification of the system you are mocking then you don't have to guess, but often the 'as-built' system doesn't match the original specification due to practical considerations discovered during construction. Agile development projects assume this will always happen.
You then develop code that works with the mock. When it turns out that the mock does not truly represent the behaviour of the real as-built system (e.g. latency issues not seen in the mock, resource and efficiency issues not seen in the mock, concurrency issues, performance issues etc.), you then have a bunch of worthless mocking tests you must now maintain.
I consider the use of mocks to be valuable at the start of development, but these mocks should not contribute to project coverage. It is best if, later on, the mocks are removed and proper integration tests are created to replace them; otherwise your system will not be getting tested for the variety of conditions which your mock did not simulate (or simulated incorrectly relative to the real system).
So the question is not whether to use mocks; it is a matter of when to use them and when to remove them.
It depends how often you use mocks (or are forced to by bad design).
If instantiating the object becomes too hard (and it happens more than often), then it is a sign the code may need some serious refactoring or change in design (builder? factory?).
When you mock everything you end up with tests that know everything about your implementation (white box testing). Your tests no longer document how to use the system - they are basically a mirror of its implementation.
And then comes potential code refactoring..
From my experience it's one of the biggest issues related to overmocking. It becomes painful and takes time, lots of it.
Some developers become fearful of refactoring their code, knowing how long it will take.
There is also the question of purpose - if everything is mocked, are we really testing the production code?
Mocks of course tend to violate the DRY principle by duplicating code in two places: once in the production code and once in the tests.
Therefore, as I mentioned before, any change to the code has to be made in two places (and if the tests aren't written well, it can be in more than that).
Edit: since you have clarified that your colleague meant "mocking a class is bad, but mocking an interface is not", the answer below is outdated. You should refer to this answer.
I am talking about mock and stub as defined by Martin Fowler, and I assume that's what your colleague meant, too.
Mocking is bad because it can lead to overspecification of tests. Use a stub if possible and avoid mocks.
Here's the difference between a mock and a stub (from the above article):
We can then use state verification on the stub like this.
class OrderStateTester...
    public void testOrderSendsMailIfUnfilled() {
        Order order = new Order(TALISKER, 51);
        MailServiceStub mailer = new MailServiceStub();
        order.setMailer(mailer);
        order.fill(warehouse);   // warehouse is a real Warehouse field, set up in the test fixture
        assertEquals(1, mailer.numberSent());
    }
Of course this is a very simple test - only that a message has been sent. We've not tested it was sent to the right person, or with the right contents, but it will do to illustrate the point.
Using mocks this test would look quite different.
class OrderInteractionTester...
    public void testOrderSendsMailIfUnfilled() {
        Order order = new Order(TALISKER, 51);
        Mock warehouse = mock(Warehouse.class);
        Mock mailer = mock(MailService.class);
        order.setMailer((MailService) mailer.proxy());
        mailer.expects(once()).method("send");
        warehouse.expects(once()).method("hasInventory")
            .withAnyArguments()
            .will(returnValue(false));
        order.fill((Warehouse) warehouse.proxy());
    }
In order to use state verification on the stub, I need to make some extra methods on the stub to help with verification. As a result the stub implements MailService but adds extra test methods.