Mocks or real classes? [duplicate] - unit-testing

Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit tests. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?

I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
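To make that concrete, here is a minimal sketch of the problem (hypothetical classes A and B, Mockito-style mocking; not from the original answer):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class ATest {
    // Hypothetical collaborators: A delegates the price calculation to B.
    public static class B {
        public int price(int quantity) { return quantity * 10; }
    }
    public static class A {
        private final B b;
        public A(B b) { this.b = b; }
        public int total(int quantity) { return b.price(quantity); }
    }

    @org.junit.Test
    public void totalDelegatesToB() {
        B b = mock(B.class);
        when(b.price(3)).thenReturn(30);
        assertEquals(30, new A(b).total(3));
        // This verification codifies A's implementation: move the calculation
        // from B into A (external behaviour unchanged) and the test breaks.
        verify(b).price(3);
    }
}

An end-to-end test that only asserted on total(3) would survive that refactoring untouched.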

It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too, I think there is a balance with some mocks and some real objects you will have to find for yourself.

There is one really good reason to use stubs/mocks instead of real classes: to keep the class under test in a unit test (a pure unit test) isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests become sluggish. Slow tests make developers run unit tests less often, and if you're doing Test-Driven Development, time-hogging tests are a devastating waste of developers' time.
It is easier to track down problems if the test is isolated to the class under test. In a system test, by contrast, it is much more difficult to track down nasty bugs that are not directly visible in stack traces.
Tests are less fragile against changes made to external classes/interfaces because you're purely testing the class that is under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against the external behaviour of a class rather than its internal implementation, which is more useful when deciding on a design.
Now if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests; in practice they are mostly run before committing to your favourite code repository, but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
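To make the split concrete, here is a minimal sketch (all names invented): a hand-rolled stub keeps the unit test isolated and deterministic, while the integration test exercises the real implementation.

public interface Clock {
    long now();
}

public class SystemClock implements Clock {      // real class: integration tests
    public long now() { return System.currentTimeMillis(); }
}

public class FixedClock implements Clock {       // stub: unit tests
    private final long time;
    public FixedClock(long time) { this.time = time; }
    public long now() { return time; }
}

public class ReportService {
    private final Clock clock;
    public ReportService(Clock clock) { this.clock = clock; }
    public String stamp() { return "report@" + clock.now(); }
}

// Unit test: new ReportService(new FixedClock(42)).stamp() yields "report@42".
// Integration test: wire in SystemClock and exercise the real behaviour.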

Extracted and extended from an answer of mine to "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like use sockets, serial ports, get user input, retrieve bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.

I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
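For illustration, here is a hedged sketch (Mockito syntax, invented PricingService dependency) of the difference between a loose mock and the strict style that validates every call:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class StrictVersusLooseTest {
    public interface PricingService { int quote(String sku); }

    @org.junit.Test
    public void looseMockIgnoresUnexpectedCalls() {
        PricingService pricing = mock(PricingService.class);
        when(pricing.quote("ABC")).thenReturn(100); // stub only what's needed
        assertEquals(100, pricing.quote("ABC"));    // stands in for the SUT's call
        // No verification: extra interactions with the mock are tolerated.
    }

    @org.junit.Test
    public void strictStyleValidatesEveryCall() {
        PricingService pricing = mock(PricingService.class);
        when(pricing.quote("ABC")).thenReturn(100);
        assertEquals(100, pricing.quote("ABC"));
        verify(pricing).quote("ABC");      // each expected call is checked...
        verifyNoMoreInteractions(pricing); // ...and any other call is a failure
    }
}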
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?

Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).

If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry as mocks generated from mocking frameworks can be given precise expectations about how they will be used e.g. the number of methods called, the precise sequence and the parameters expected each time. Your real objects cannot do this for you so you risk losing depth in your tests.

I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
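In a static language like Java, the closest equivalent is the subclass-and-override trick: keep the real class but override just the problematic method. A minimal sketch (PaymentGateway is an invented example):

public class PaymentGateway {
    public String charge(long cents) {
        String token = authorize(cents);   // problematic: talks to the network
        return "charged:" + token;
    }
    protected String authorize(long cents) {
        throw new UnsupportedOperationException("real HTTP call goes here");
    }
}

// The test exercises the real charge() logic, overriding only the network call:
public class TestablePaymentGateway extends PaymentGateway {
    @Override protected String authorize(long cents) { return "fake-token"; }
}
// new TestablePaymentGateway().charge(500) yields "charged:fake-token"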

If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.

If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
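A minimal sketch of such a fake (UserDao and User are invented names):

import java.util.HashMap;
import java.util.Map;

public interface UserDao {
    User findById(int id);
    void save(User user);
}

// Fake used when the real database is unavailable: cooked data in a hash map.
public class InMemoryUserDao implements UserDao {
    private final Map<Integer, User> users = new HashMap<>();
    public User findById(int id) { return users.get(id); }
    public void save(User user) { users.put(user.getId(), user); }
}

public class User {
    private final int id;
    public User(int id) { this.id = id; }
    public int getId() { return id; }
}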

It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use interfaces and injection where they don't belong. Also, if the mock needs to be fairly complex, there is probably less reason to use it. (You can see a lot of good guidance in the answers here as to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.

Related

Unit Testing Classes VS Methods

When unit testing, is it better practice to test a class or individual methods?
Most of the examples I've seen test the class apart from other classes, mocking dependencies between classes. Another method I've played around with is mocking methods you're not testing (by overriding them), so that you're only testing the code in one method. Thus one bug breaks one test, since the methods are isolated from each other.
I was wondering if there is a standard method and if there are any big disadvantages to isolating each method for testing as opposed to isolating classes.
The phrase unit testing comes from hardware systems testing, and is more or less semantics-free when applied to software. It can get used for anything from isolation testing of a single routine to testing a complete system in headless mode with an in-memory database.
So don't trust anyone who argues that the definition implies there is only one way to do things independently of context; there are a variety of ways, some of which are sometimes more useful than others. And presumably every approach a smart person would argue for has at least some value somewhere.
The smallest unit of hardware is the atom, or perhaps some subatomic particle. Some people test software like they were scanning each atom to see if the laws of quantum mechanics still held. Others take a battleship and see if it floats.
Something in between is very likely better. Once you know something about the kind of thing you are producing beyond 'it is software', you can start to come up with a plan that is appropriate to what you are supposed to be doing.
The point of unit testing is to test a unit of code, i.e. a class.
This gives you confidence that that part of the code, on its own, is doing what is expected.
This is also the first part of the testing process. It helps to catch those pesky bugs as early as possible, and having a unit test to demonstrate a bug makes it easier to fix further down the line.
Unit testing by definition is testing the smallest piece of written code you can. "Units" are not classes; they are methods.
Every public method should have at least 1 unit test, that tests that method specifically.
If you follow the rule above, you will eventually get to where class interactions are being covered. As long as you write 1 test per method, you will cover class interaction as well.
There is probably no one standard answer. Unit tests are for the developer (or they should be), do what is most helpful to you.
One downside of testing individual methods is you may not test the actual behavior of the object. If the mocking of some methods is not accurate that may go undetected. Also mocks are a lot of work, and they tend to make the tests very fragile, because they make the tests care a lot about what specific method calls take place.
In my own code I try whenever possible to separate infrastructure-type dependencies from business logic so that I can write tests of the business logic classes entirely without mocks. If you have a nasty legacy code base it probably makes more sense to test individual methods and mock any collaborating methods of the object, in order to insulate the parts from each other.
Theoretically objects are supposed to be cohesive so it would make sense to test them as a whole. In practice a lot of things are not particularly object-oriented. In some cases it is easier to mock collaborator methods than it is to mock injected dependencies that get called by the collaborators.
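In Java, Mockito's spy() is one way to do that kind of partial mocking of a legacy object (LegacyProcessor is invented; only the collaborator method is stubbed, the logic under test stays real):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class LegacyProcessorTest {
    public static class LegacyProcessor {
        public int process() { return lookup() + 1; }    // logic under test
        public int lookup() { return queryDatabase(); }  // collaborator method
        int queryDatabase() { throw new IllegalStateException("no DB in tests"); }
    }

    @org.junit.Test
    public void processAddsOneToWhateverLookupReturns() {
        LegacyProcessor p = spy(new LegacyProcessor());
        doReturn(41).when(p).lookup();  // insulate process() from the database
        assertEquals(42, p.process());
    }
}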

Do isolation frameworks (Moq, RhinoMock, etc.) lead to test over-specification?

In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented.
So what do you think is the best approach to counter this?
1) Implement your stubs/mocks fully, this has the negative side-effect of potentially making your test less readable and also specifying more than necessary to make your test pass.
2) Favor manual, fully implemented fakes.
3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.
I do not think you should favor manual, fully implemented fakes - unless you prefer writing test plumbing instead of code.
Instead you have another option: test the functionality and not the implementation, avoid testing private methods (which can be refactored), and in general write less fragile tests. You'll find that using a mocking/isolation framework does not require you to over-specify the system, nor does it cause your tests to become more fragile.
In a nutshell: writing fragile tests can be done with or without fakes/mocks, and vice versa.
I tend to use mocks instead of stubbed/fake objects. I find them a lot less trouble, and they are way better at keeping test code under control because it isn't cluttered with all sorts of half-baked implementations. They also help to clarify what is being tested.
Another advantage is that I only have to address where the class under test needs something specific from the mock, so I don't have to code what's not important. As for verification, again I only have to verify the calls from the class under test to the mock that I care about and consider important aspects of the test.
I think, the problem is always the same, although it comes in different flavours: If you have tests that somehow cover the internals of a class, then you will break the tests that cover this internal code.
IMHO there are two ways to deal with that:
Your tests only cover the public contract of a class - a test strategy which is widely adopted for that exact reason: you don't have to change your tests as long as the public contract remains constant. Unfortunately, this is not what you will have when doing Test-Driven Development.
If your tests come from a TDD process, then they will regularly cover non-public code. This means that they will break if you change the code. The only way to keep things in sync here is to 'fix' the tests together with the code. This means more maintenance during development. There's no recipe to easily deal with that (other than throw away the test, of course...).
My personal 'way out' is think in terms of 'code elements' rather than just code. A code element consists of three parts: Documentation, test, code. So if you change one part of the element, you have to also adjust the other two - otherwise you leave a broken code element behind.

Interfaces and unit tests - always white-box testing?

I have finally got in my mind what worried me about Dependency Injection and similar techniques that should make unit tests easier. Let's take this example:
public interface IRepository { Item Find(); /* a lot of other methods here */ }

[Test]
public void Test()
{
    var repository = Mock<IRepository>();
    repository.Expect(x => x.Find());
    var service = new Service(repository);
    service.ProcessWithItem();
}
Now, what's wrong with the code above? It's that our test roughly peeks into the ProcessWithItem() implementation. What if it wants to do "from x in GetAll() where x..." - but no, our test knows what is going to happen there. And that's just a simple example. Imagine a few calls that our test is now tied to; when we want to change from GetAll() to a better GetAllFastWithoutStuff() inside the method... our tests break. Please change them. A lot of crappy work that happens so often without any real need.
And that's what often makes me stop writing tests. I just don't see how I can test without knowing the implementation details. And knowing them, the tests are now very fragile and a pain to maintain.
Sure, it's not about interfaces (or DI) only. POCOs (and POJOs, why not) also suffer from the same thing, but they're now tied to the data, not the interface. The principle is the same though - our final assertion is tightly coupled with our knowledge of what our SUT is going to do. "Yes you HAVE to provide this field, sir, and this better be of this value."
As a consequence, tests ARE going to fail - soon and often. This is pain. And the problem.
Are there any techniques to deal with this? AutoMockingContainer (which basically takes care of ALL methods and nested DI hierarchies) looks promising, but it has its own drawbacks. Anything else?
Dependency Injection, per se, would let you inject an implementation of IRepository that accepts whatever calls are made on it, checks that the invariants and preconditions are satisfied, and returns results satisfying the postconditions. When you choose to inject a mock object that has very specific expectations for what methods will be called, then yes, you're doing highly implementation-specific testing -- but Dependency Injection is totally innocent in the matter, since it never dictates WHAT you should inject; rather, your beef appears to be with Mocking -- in fact, specifically the somewhat-automated mocking approach that you have chosen to use, which is one based on very specific expectations.
Mocking with very specific expectations IS indeed useful for white-box testing only. Depending on the tools / frameworks / libraries you're using (and you're not even specifying the exact programming language in a tag, so I assume your question is totally open-ended) you may be able to specify the degrees of freedom allowed (these calls are allowed to come in any order, these arguments must only satisfy the following preconditions, etc.). However, I don't know of an automated tool to perform exactly what you need for opaque-box testing, which is the "generic, tolerant implementation of yonder interface with all the 'programming by contract' checks that are needed and no other".
What I tend to do over the life of a project is to build up a library of "not quite mocks" for the major interfaces needed. In some cases those may be somewhat obvious from the start, but in other cases they emerge incrementally as I'm considering some major refactoring, as follows (typical scenario)...:
The early stages of the refactoring break some aspect of the fragile strong-expectations mocking that I had cheaply put in place initially. I ponder whether to just tweak the expectations or go whole hog; if I decide it's not a one-off (i.e. the return in future refactorings and tests will justify the investment), then I hand-code a good "not quite mock" and stash it away in the project's specific bag of tricks. Such classes/packages are actually often reusable across projects: MockFilesystem, MockBigtable, MockDom, MockHttpClient, MockHttpServer, etc. go into a project-agnostic repository and get reused for testing all kinds of future projects (and may in fact be shared with other teams across the company, if several teams use filesystem interfaces, bigtable interfaces, DOMs, http client/server interfaces, etc. that are uniform across the teams).
I acknowledge that the use of the word "mock" may be slightly inappropriate here if you take "mock" to refer specifically to the precise-expectation style of "fake implementation for testing purposes" of interfaces. Maybe Stub, Shim, Fake, Test, or some other prefix yet might be preferable (I do tend to use Mock for historical reasons, except when I remember to specifically call it Fake or the like;-).
If I was using languages with clear and precise way to express in the language itself the various design-by-contract specs in an interface, I imagine I'd get automatic tool support for most of this faking/shimming/etc; however I mostly code in other languages so I have to do a bit more manual work here. But I think that's a separate issue.
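A sketch of what such a "not quite mock" might look like (the MockFilesystem name comes from the list above; the FileStore interface and the specific contract checks are my assumptions): it tolerates any call sequence and enforces the contract, rather than scripting exact expectations.

import java.util.HashMap;
import java.util.Map;

public interface FileStore {
    void write(String path, String content);
    String read(String path);
}

// Tolerant fake: no scripted expectations, just contract checks plus
// results that satisfy the postconditions.
public class MockFilesystem implements FileStore {
    private final Map<String, String> files = new HashMap<>();

    public void write(String path, String content) {
        if (path == null || path.isEmpty())
            throw new IllegalArgumentException("precondition: path required");
        files.put(path, content);
    }

    public String read(String path) {
        if (!files.containsKey(path))
            throw new IllegalStateException("precondition: file must exist");
        return files.get(path);  // postcondition: returns what was written
    }
}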
I read the excellent book JUnit Recipes (http://www.manning.com/rainsberger/).
I would like to provide some insight I gained from it.
I believe several pieces of advice from it could help you reduce the coupling between your tests and your implementation.
Edit: included in this coupling is a test asserting that the code under test calls some methods. Calling a method is never a functional need; it is an implementation concern. It relates to an interface other than the one being tested.
In many cases, the testing should be about the external behavior of an interface, and be completely black-box testing them.
The author gives the example that the test classes should be in a different package than the class to test. At first, I was sure this was wrong, because it makes it more difficult to test protected and package methods. But he argues that you should only test the external behavior of a system, that is the public methods. The non-public methods are implementation-details, and testing it results in coupling the test with the implementation. This was very insightful to me.
By the way, this book has so much excellent practical advice on how to design tests (say, JUnit tests) that I would buy it with my own money if it wasn't provided by the company! ;-)
Another excellent piece of advice from the book was to test at the functionality level, not the method level. For example, testing the add() method for a list requires trusted size() and get() methods, but they in turn require add(), so we have a loop and can't test safely. But testing the list's behavior globally (across all methods) when adding involves testing the three methods at the same time - not proving that each is correct in isolation, but checking that together they provide the expected behavior. Often, when you try to test one of your methods in isolation, you cannot write a sensible test without using other methods, so you end up testing the implementation instead; the consequence is coupling between test and implementation.
Only test functionalities, not methods.
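In JUnit terms, the list example might look like this sketch: one test exercising add(), size() and get() together as a single behaviour, instead of trying to prove each method in isolation.

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class ListBehaviourTest {
    @Test
    public void addingAnElementGrowsTheListAndMakesItRetrievable() {
        List<String> list = new ArrayList<String>();
        list.add("talisker");
        // add(), size() and get() are verified together, as one functionality.
        assertEquals(1, list.size());
        assertEquals("talisker", list.get(0));
    }
}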
Also, note that testing using external resources (the database being the most common, but many others exist) is much slower, requires some access (IP, licence, etc.) from the executing machine, requires a started container, may be sensitive to simultaneous access (a database can't reliably run multiple JUnit campaigns at the same time), and has many other drawbacks. If all your tests use external resources, then you are in trouble: you can't run all your tests all the time, from any machine, from many machines at once, etc. So I understood (still from the book):
Test each external resource (the database, for example) only once, in a dedicated test that is not a unit test but an integration test (although it can still use the same JUnit technology if appropriate).
Write enough dedicated tests to trust that the resource is working. Then other tests should never test it again - that is waste; they should trust it.
Note that the current Maven best-practices give similar advice (see free book "Better builds with Maven"). I believe this is not a coincidence:
The JUnits in the test directory of a project are real unit tests. They run every time you do something with your project (except just compile).
The integration and functional tests should be provided in a different project, an integration-test project. They only run in a much later (optional) phase, after you have deployed your whole application in the container.
As a consequence, tests ARE going to fail - soon and often. This is pain. And the problem.
Well yes, unit tests can depend on internal implementation details. And sure, such "white box" tests are more brittle than "black box" tests which only rely on the externally published contract.
But I don't agree that this has to cause regular test failures. Think about how you arrived at testing with mocks in the first place: you've used dependency injection to limit the responsibilities of the class, to decrease coupling to other code, and to enable testing the class in isolation.
Are there any techniques to deal with this?
A good unit test can only fail if you change the class under test, even if it depends on internal implementation details. And you can limit the responsibilities and coupling (to other classes) of your class, so that you will rarely have to change it.
In practice you'll have to be pragmatic; every now and then you'll write "unit tests" that are actually integration tests involving multiple classes or over-sized classes. Brittle tests depending on internal implementation details are more dangerous in that case. But for truly TDD-style classes, not so much.
Remember, when you're writing a test you're not testing your repository; you're testing your Service class - in this specific example, the ProcessWithItem method. You create your expectations for the repository object. By the way, you forgot to specify the expected return value for your x.Find method. That's the beauty of DI: you isolate everything from the code you are about to write (I assume you do TDD).
To be honest I cannot relate to the problem you describe.
Yeah, that's one of the big problems with unit testing. That, and refactoring. And design changes that are a regular occurrence with Agile. And the inexperience of those creating the tests. And etc etc...
I think the only thing the average non-critical-systems developer can do is pick and choose your battles wisely. Early in development identify the truly critical paths and test those. Weigh the likelihood of that code changing before spending lots of time testing the rest of it.
If anybody figures it all out please let us know.

Why is it so bad to mock classes?

I recently discussed mocking with a colleague. He said that mocking classes is very bad and should not be done, except in a few cases.
He says that only interfaces should be mocked; otherwise it's an architecture fault.
I wonder why this statement (I fully trust him) is correct. I don't know, and would like to be convinced.
Did I miss the point of mocking? (Yes, I read Martin Fowler's article.)
Mocking is used for protocol testing - it tests how you'll use an API, and how you'll react when the API reacts accordingly.
Ideally (in many cases at least), that API should be specified as an interface rather than a class - an interface defines a protocol, a class defines at least part of an implementation.
On a practical note, mocking frameworks tend to have limitations around mocking classes.
In my experience, mocking is somewhat overused - often you're not really interested in the exact interaction; you really want a stub... but a mocking framework can be used to create stubs, and you fall into the trap of creating brittle tests by mocking instead of stubbing. It's a hard balance to get right, though.
IMHO, what your colleague means is that you should program to an interface, not an implementation. If you find yourself mocking classes too often, it's a sign you broke the previous principle when designing your architecture.
Mocking classes (in contrast to mocking interfaces) is bad because the mock still has a real class in the background that it inherits from, so it is possible that real implementation code is executed during the test.
When you mock (or stub or whatever) an interface, there is no risk of having code executed you actually wanted to mock.
Mocking classes also forces you to make everything that could possibly be mocked virtual, which is very intrusive and can lead to bad class design.
If you want to decouple classes, they should not know each other, this is the reason why it makes sense to mock (or stub or whatever) one of them. So implementing against interfaces is recommended anyway, but this is mentioned here by others enough.
I would suggest to stay away from mocking frameworks as far as possible. At the same time, I would recommend to use mock/fake objects for testing, as much as possible. The trick here is that you should create built-in fake objects together with real objects. I explain it more in detail in a blog post I wrote about it: http://www.yegor256.com/2014/09/23/built-in-fake-objects.html
Generally you'd want to mock an interface.
While it is possible to mock a regular class, it tends to influence your class design too much for testability. Concerns like accessibility, whether or not a method is virtual, etc. will all be determined by the ability to mock the class, rather than true OO concerns.
There is one faking library called TypeMock Isolator that allows you to get around these limitations (have cake, eat cake) but it's pretty expensive. Better to design for testability.
The answer, like most questions about practices, is "it depends".
Overuse of mocks can lead to tests that don't really test anything. It can also lead to tests which are virtual re-implementations of the code under test, tightly bound to a specific implementation.
On the other hand, judicious use of mocks and stubs can lead to unit tests which are neatly isolated and test one thing and one thing alone - which is a good thing.
It's all about moderation.
It makes sense to mock classes so tests can be written early in the development lifecycle.
There is a tendency to continue to use mock classes even when concrete implementations become available. There is also a tendency to keep developing against the mock classes (and stubs) that were necessary early in the project, when some parts of the system had not yet been built.
Once a piece of the system has been built it is necessary to test against it and continue to test against it (for regression). In this case starting with mocks is good but they should be discarded in favour of the implementation as soon as possible. I have seen projects struggle because different teams continue to develop against the behaviour of the mock rather than the implementation (once it is available).
By testing against mocks you are assuming that the mock is characteristic of the system. Often this involves guessing what the mocked component will do. If you have a specification of the system you are mocking then you don't have to guess, but often the 'as-built' system doesn't match the original specification due to practical considerations discovered during construction. Agile development projects assume this will always happen.
You then develop code that works with the mock. When it turns out that the mock does not truly represent the behaviour of the real as-built system (eg. latency issues not seen in the mock, resource and efficiency issues not seen in the mock, concurrency issues, performance issues etc) you then have a bunch of worthless mocking tests you must now maintain.
I consider the use of mocks to be valuable at the start of development but these mocks should not contribute to project coverage. It is best later if the mocks are removed and proper integration tests are created to replace them otherwise your system will not be getting tested for the variety of conditions which your mock did not simulate (or simulates incorrectly relative to the real system).
So the question is not whether to use mocks; it is a matter of when to use them and when to remove them.
It depends how often you use (or are forced by bad design to use) mocks.
If instantiating the object becomes too hard (and it happens more than often), then it is a sign the code may need some serious refactoring or change in design (builder? factory?).
When you mock everything you end up with tests that know everything about your implementation (white box testing). Your tests no longer document how to use the system - they are basically a mirror of its implementation.
And then comes potential code refactoring..
From my experience it's one of the biggest issues related to overmocking. It becomes painful and takes time, lots of it.
Some developers become fearful of refactoring their code, knowing how long it will take.
There is also the question of purpose - if everything is mocked, are we really testing the production code?
Mocks, of course, tend to violate the DRY principle by duplicating code in two places: once in the production code and once in the tests.
Therefore, as I mentioned before, any change to the code has to be made in two places (and if the tests aren't written well, in more than that...).
Edit: Since you have clarified that your colleague meant mock class is bad but mock interface is not, the answer below is outdated. You should refer to this answer.
I am talking about mock and stub as defined by Martin Fowler, and I assume that's what your colleague meant, too.
Mocking is bad because it can lead to overspecification of tests. Use stub if possible and avoid mock.
Here's the difference between a mock and a stub (from the above article):
We can then use state verification on the stub like this.
class OrderStateTester...
    public void testOrderSendsMailIfUnfilled() {
        Order order = new Order(TALISKER, 51);
        MailServiceStub mailer = new MailServiceStub();
        order.setMailer(mailer);
        order.fill(warehouse);
        assertEquals(1, mailer.numberSent());
    }
Of course this is a very simple test - it only checks that a message has been sent. We've not tested that it was sent to the right person, or with the right contents, but it will do to illustrate the point.
Using mocks, this test would look quite different.
class OrderInteractionTester...
    public void testOrderSendsMailIfUnfilled() {
        Order order = new Order(TALISKER, 51);
        Mock warehouse = mock(Warehouse.class);
        Mock mailer = mock(MailService.class);
        order.setMailer((MailService) mailer.proxy());
        mailer.expects(once()).method("send");
        warehouse.expects(once()).method("hasInventory")
            .withAnyArguments()
            .will(returnValue(false));
        order.fill((Warehouse) warehouse.proxy());
    }
In order to use state verification on the stub, I need to make some extra methods on the stub to help with verification. As a result the stub implements MailService but adds extra test methods.
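For reference, the stub in question is roughly this shape (reconstructed from the same Fowler article; Message is whatever type the mailer sends):

import java.util.ArrayList;
import java.util.List;

public interface MailService {
    void send(Message msg);
}

public class MailServiceStub implements MailService {
    private List<Message> messages = new ArrayList<Message>();
    public void send(Message msg) {
        messages.add(msg);          // record instead of really sending
    }
    public int numberSent() {       // extra test-only method for verification
        return messages.size();
    }
}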

How are Mocks meant to be used?

When I originally was introduced to Mocks I felt the primary purpose was to mock up objects that come from external sources of data. This way I did not have to maintain an automated unit testing test database, I could just fake it.
But now I am starting to think of it differently. I am wondering if Mocks are more effective used to completely isolate the tested method from anything outside of itself. The image that keeps coming to mind is the backdrop you use when painting. You want to keep the paint from getting all over everything. I am only testing that method, and I only want to know how it reacts to these faked up external factors?
It seems incredibly tedious to do it this way, but the advantage I see is that when a test fails it is because that method is broken, not something 16 layers down. But now I have to have 16 tests to get the same coverage, because each piece is tested in isolation. Plus, each test becomes more complicated and more deeply tied to the method it is testing.
It feels right to me but it also seems brutal so I kind of want to know what others think.
I recommend you take a look at Martin Fowler's article Mocks Aren't Stubs for a more authoritative treatment of Mocks than I can give you.
The purpose of mocks is to unit test your code in isolation of dependencies so you can truly test a piece of code at the "unit" level. The code under test is the real deal, and every other piece of code it relies on (via parameters or dependency injection, etc) is a "Mock" (an empty implementation that always returns expected values when one of its methods is called.)
Mocks may seem tedious at first, but they make Unit Testing far easier and more robust once you get the hang of using them. Most languages have Mock libraries which make mocking relatively trivial. If you are using Java, I'll recommend my personal favorite: EasyMock.
Let me finish with this thought: you need integration tests too, but having a good volume of unit tests helps you find out which component contains a bug, when one exists.
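For what it's worth, the basic EasyMock rhythm is record, replay, verify; a minimal sketch (the Mailer interface is invented, and the send() call in replay mode stands in for the code under test):

import static org.easymock.EasyMock.*;
import org.junit.Test;

public class MailerMockTest {
    public interface Mailer { void send(String message); }

    @Test
    public void unfilledOrderSendsMail() {
        Mailer mailer = createMock(Mailer.class);
        mailer.send("unfilled");   // record phase: declare the expected call
        replay(mailer);            // switch the mock into replay mode
        mailer.send("unfilled");   // normally the code under test does this
        verify(mailer);            // fails if the expectation wasn't met
    }
}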
Don't go down the dark path Master Luke. :) Don't mock everything. You could but you shouldn't... here's why.
If you continue to test each method in isolation, you have surprises and work cut out for you when you bring them all together à la the BIG BANG. We build objects so that they can work together to solve a bigger problem; by themselves they are insignificant. You need to know that all the collaborators are working as expected.
Mocks make tests brittle by introducing duplication - yes, I know that sounds alarming. For every mock expectation you set up, there are n places where your method signature exists: the actual code and your mock expectations (in multiple tests). Changing the actual code is easier... updating all the mock expectations is tedious.
Your test is now privy to insider implementation information, so your test depends on how you chose to implement the solution... bad. Tests should be an independent spec that can be met by multiple solutions. I should have the freedom to just press delete on a block of code and reimplement it without having to rewrite the test suite, because the requirements stay the same.
To close, I'll say: "If it quacks like a duck, walks like a duck, then it probably is a duck." If it feels wrong... it probably is. Use mocks to abstract out problem children like IO operations, databases, third-party components and the like. Like salt, some of it is necessary... too much and :x
This is the holy war of state-based vs interaction-based testing. Googling will give you deeper insight.
Clarification: I'm hitting some resistance w.r.t. integration tests here :) So to clarify my stand..
Mocks do not figure in the 'Acceptance tests'/Integration realm. You'll only find them in the Unit Testing world.. and that is my focus here.
Acceptance tests are different and are very much needed - not belittling them. But Unit tests and Acceptance tests are different and should be kept different.
All collaborators within a component or package do not need to be isolated from each other. Like micro-optimization, that is overkill. They exist to solve a problem together... cohesion.
Yes, I agree. I see mocking as sometimes painful, but often necessary, for your tests to truly become unit tests, i.e. only the smallest unit that you can make your test concerned with is under test. This allows you to eliminate any other factors that could potentially affect the outcome of the test. You do end up with a lot more small tests, but it becomes so much easier to work out where a problem is with your code.
My philosophy is that you should write testable code to fit the tests,
not write tests to fit the code.
As for complexity, my opinion is that tests should be simple to write, simply because you write more tests if they are.
I might agree that it could be a good idea if the classes you're mocking don't have a test suite, because if they did have a proper test suite, you would know where the problem is without isolation.
Most of the time I've had use for mock objects is when the code I'm writing tests for is so tightly coupled (read: bad design) that I have to write mock objects when the classes it depends on are not available. Sure, there are valid uses for mock objects, but if your code requires their usage, I would take another look at the design.
Yes, that is the downside of testing with mocks. There is a lot of work you need to put in, and it feels brutal. But that is the essence of unit testing. How can you test something in isolation if you don't mock external resources?
On the other hand, you're mocking away slow functionality (such as databases and I/O operations). If the tests run faster, that will keep programmers happy. There is nothing much more painful than waiting for really slow tests, taking more than 10 seconds to finish, while you're trying to implement one feature.
If every developer in your project spent time writing unit tests, then those 16 layers (of indirection) wouldn't be that much of a problem. Hopefully you'd have that test coverage from the beginning, right? :)
Also, don't forget to write functional/integration tests between objects in collaboration. Otherwise you might miss something. These tests won't need to be run often, but they are still important.
On one scale, yes, mocks are meant to be used to simulate external data sources such as a database or a web service. On a more finely grained scale however if you're designing loosely coupled code then you can draw lines throughout your code almost arbitrarily as to what might at any point be an 'outside system'. Take a project I'm working on currently:
When someone attempts to check in, the CheckInUi sends a CheckInInfo object to a CheckInMediator object, which validates it using a CheckInValidator; if it is OK, it fills a domain object named Transaction with the CheckInInfo using a CheckInInfoAdapter, then passes the Transaction to an instance of ITransactionDao.SaveTransaction() for persistence.
I am right now writing some automated integration tests, and obviously the CheckInUi and ITransactionDao are windows onto external systems, so they're the ones which should be mocked. However, who's to say that at some point CheckInValidator won't be making a call to a web service? That is why, when you write unit tests, you assume that everything other than the specific functionality of your class is an external system. Therefore, in my unit test of CheckInMediator I mock out all the objects that it talks to, as sketched below.
EDIT: Gishu is technically correct - not everything needs to be mocked. I don't, for example, mock CheckInInfo, since it is simply a container for data. However, anything that you could ever see as an external service (and that is almost anything that transforms data or has side-effects) should be mocked.
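A sketch of that mediator unit test (class names from the example above; the method names, simplified signatures and Mockito usage are my assumptions):

import static org.mockito.Mockito.*;

public class CheckInMediatorTest {
    // Minimal stand-ins for the types named above; real signatures may differ.
    public interface CheckInValidator { boolean isValid(CheckInInfo info); }
    public interface ITransactionDao { void saveTransaction(Transaction t); }
    public static class CheckInInfo { }
    public static class Transaction { }

    public static class CheckInMediator {
        private final CheckInValidator validator;
        private final ITransactionDao dao;
        public CheckInMediator(CheckInValidator v, ITransactionDao d) {
            this.validator = v;
            this.dao = d;
        }
        public void checkIn(CheckInInfo info) {
            if (validator.isValid(info)) {
                dao.saveTransaction(new Transaction()); // adapter step elided
            }
        }
    }

    @org.junit.Test
    public void validCheckInIsPersisted() {
        CheckInValidator validator = mock(CheckInValidator.class);
        ITransactionDao dao = mock(ITransactionDao.class);
        CheckInInfo info = new CheckInInfo();           // data holder, not mocked
        when(validator.isValid(info)).thenReturn(true);

        new CheckInMediator(validator, dao).checkIn(info);

        verify(dao).saveTransaction(any(Transaction.class));
    }
}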
An analogy that I like is to think of a properly loosely coupled design as a field with people standing around it playing a game of catch. When someone is passed the ball, he might throw a completely different ball to the next person; he might even throw multiple balls in succession to different people, or throw a ball and wait to receive it back before throwing it to yet another person. It is a strange game.
Now as their coach and manager, you of course want to check how your team works as a whole so you have team practice (integration tests) but you also have each player practice on his own against backstops and ball-pitching machines (unit tests with mocks). The only piece that this picture is missing is mock expectations and so we have our balls smeared with black tar so they stain the backstop when they hit it. Each backstop has a 'target area' that the person is aiming for and if at the end of a practice run there is no black mark within the target area you know that something is wrong and the person needs his technique tuned.
Really take the time to learn it properly, the day I understood Mocks was a huge a-ha moment. Combine it with an inversion of control container and I'm never going back.
On a side note, one of our IT people just came in and gave me a free laptop!
As someone said before, if you mock everything to isolate at a finer grain than the class you are testing, you give up enforcing cohesion in the code that is under test.
Keep in mind that mocking has a fundamental advantage: behavior verification. This is something that stubs don't provide, and it is the other reason that makes tests more brittle (but it can improve code coverage).
Mocks were invented in part to answer the question: How would you unit test objects if they had no getters or setters?
These days, recommended practice is to mock roles not objects. Use Mocks as a design tool to talk about collaboration and separation of responsibilities, and not as "smart stubs".
Mock objects are 1) often used as a means to isolate the code under test, BUT 2) as keithb already pointed out, are important to "focus on the relationships between collaborating objects". This article gives some insights and history related to the subject: Responsibility Driven Design with Mock Objects.