The most experienced developer on my current team has set a few hard rules, based on best practices, that we should follow. Among them is "You never, ever mock domain objects". I asked him why we couldn't, but he never had time to give me a proper answer. Now he's away for a week, my distrust of that rule has peaked, and here's my situation.
My domain object has an Update method with several parameters, one of them being an interface for a calculator. It updates a few fields, runs the calculator, and assigns its results to some of its other fields.
The proper unit test for the Update method itself is reasonably long.
Now I have a piece of code that does a few things and then calls Update on such a domain object.
I would normally mock the object and just check that the Update method is being called with the proper arguments. But now I have to test that it's being called properly, by checking the object's fields and mocking the calculator the same way I did when unit testing the Update method itself. And I would have to do that everywhere the Update method gets called.
How fun will it be when the Update method changes a bit and every one of those tests suddenly breaks and needs refactoring... I feel like this rule is just shooting myself in the foot.
So I need to know: why "you never, ever mock domain objects"?
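To make the situation concrete, here is a rough C# sketch of the kind of domain object and calling code described above; every name (Order, IPriceCalculator, OrderService) is invented for illustration.

// Hypothetical shapes, only to illustrate the situation described in the question.
public interface IPriceCalculator
{
    decimal Calculate(decimal quantity, decimal unitPrice);
}

public class Order // the domain object
{
    public decimal Quantity { get; private set; }
    public decimal UnitPrice { get; private set; }
    public decimal Total { get; private set; }

    public void Update(decimal quantity, decimal unitPrice, IPriceCalculator calculator)
    {
        // updates a few fields...
        Quantity = quantity;
        UnitPrice = unitPrice;
        // ...then runs the calculator and assigns its result to another field
        Total = calculator.Calculate(quantity, unitPrice);
    }
}

public class OrderService // "a piece of code that does a few things and then calls Update"
{
    private readonly IPriceCalculator _calculator;

    public OrderService(IPriceCalculator calculator) => _calculator = calculator;

    public void Process(Order order, decimal quantity, decimal unitPrice)
    {
        // ...a few things happen here...
        order.Update(quantity, unitPrice, _calculator);
    }
}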
You never, ever mock domain objects
This is more of a heuristic than a best practice, and I can imagine that it makes sense in some projects.
This heuristic can help you write better tests. Good tests are easy to write, easy to read, and they break as soon as something goes wrong. If you mock your domain model, you will probably also have to mock its behavior, and if that behavior is complex, mocking it will be time consuming. The test will also be more fragile (mistakes can happen when predicting the domain model's output).
The alternative is to do sociable unit testing. That means the unit being tested is not a single class in isolation but a class together with some of its dependencies. In other words, you just use a concrete object of your domain model in the test.
I would normally mock the object and just check that the Update method is being called with the proper arguments. But now I have to test that it's being called properly, by checking the object's fields and mocking the calculator the same way I did when unit testing the Update method itself.
Checking that the Update method is being called doesn't necessarily mean the test succeeds; it is probably not what you want to check to see whether your service works correctly. If you pass a concrete class of your model (including the calculators), you won't have to do any useless checks or repeat what has been done in other tests.
One of the differences between this approach and testing classes in isolation is that when there is a bug in your domain model, many other tests will fail. In my opinion that is totally fine, since the bug has to be fixed anyway.
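To illustrate, here is a minimal sociable test sketch (NUnit), reusing the hypothetical Order/OrderService/IPriceCalculator shapes sketched under the question; StandardPriceCalculator stands in for a real calculator implementation. The assertion is on state, not on whether Update was called.

using NUnit.Framework;

// A real calculator used directly in the test - no test double for the domain model.
public class StandardPriceCalculator : IPriceCalculator
{
    public decimal Calculate(decimal quantity, decimal unitPrice) => quantity * unitPrice;
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Process_updates_the_order_total()
    {
        var order = new Order();                                        // concrete domain object
        var service = new OrderService(new StandardPriceCalculator());  // concrete calculator

        service.Process(order, quantity: 3, unitPrice: 10m);

        Assert.That(order.Total, Is.EqualTo(30m));                      // state-based check
    }
}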
You never, ever mock domain objects
There might be different reasons to avoid mocking your domain objects:
Mocking libraries, just like ORMs, can affect the design of your domain objects. In C#, they usually require interfaces or virtual methods.
A preference for testing against real instances. Very often people use mocking libraries because they are an easy way to create a fake or a stub, but just as often you can simply build an instance of your domain object and use it in your test. Most likely, the fact that the Update method was called can easily be checked via a state change.
I would normally mock the object and just check that the Update method is being called with the proper arguments. But now I have to test that it's being called properly, by checking the object's fields and mocking the calculator the same way I did when unit testing the Update method itself.
How fun will it be when the Update method changes a bit and every one of those tests suddenly breaks and needs refactoring...
It was already mentioned in the comments that the Update method probably has too many responsibilities, and that's why it is used in lots of use cases.
There are different tastes when it comes to defining the system under test (SUT). If you tend to test classes, you will often find yourself using mocking libraries. If you tend to test scenarios or behaviors, most likely you will only care about state changes.
You could also explore a different design for your domain object: try to split the Update method into explicit, domain-specific methods so that each client triggers a different behavior. With this approach, the "reasonably long" test for the Update method can even become obsolete, since its behavior will be covered by the other tests.
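A rough sketch of what that split might look like, with invented names; the point is only that each method is small, named after a domain action, and testable on its own.

public interface IPriceCalculator
{
    decimal Calculate(decimal quantity, decimal unitPrice);
}

public enum OrderStatus { Pending, Shipped }

public class Order
{
    public decimal Total { get; private set; }
    public OrderStatus Status { get; private set; }

    // Before: one general-purpose Update(quantity, unitPrice, calculator, newStatus, ...)
    // that every use case funnelled through.

    // After: explicit, domain-specific methods, each triggering one behaviour.
    public void Reprice(decimal quantity, decimal unitPrice, IPriceCalculator calculator)
        => Total = calculator.Calculate(quantity, unitPrice);

    public void MarkAsShipped()
        => Status = OrderStatus.Shipped;
}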
While Update() is probably not as tailored a method as you would expect from a rich domain model - which may have an impact on test fragility - I agree that you should take "never do X" advice with a grain of salt.
If you understand all the implications and know that the integration test version will be less maintainable, definitely use mocks. You, and not blanket statements or cargo cults, are the ultimate judge to what's better for your system in your context.
I'm currently broadening my unit testing by utilising mock objects (NSubstitute in this particular case). However, I'm wondering what the current wisdom is when creating mock objects. For instance, I'm working with an object that contains various routines to grab and process data - no biggie here, but it will be utilised in a fair number of tests.
Should I create a shared function that returns the mock object with all the appropriate methods and behaviours mocked for most of the testing project, and call that object in my unit tests? Or should I mock the object in every unit test, only mocking the behaviour I need for that test (although there will be times when I'll be mocking the same behaviour on more than one occasion)?
Thoughts or advice is gratefully received...
I'm not sure if there is an agreed "current wisdom" on this, but here's my 2 cents.
First, as @codebox pointed out, re-creating your mocks for each unit test is a good idea, as you want your unit tests to run independently of each other. Doing otherwise can result in tests that pass when run together but fail when run in isolation (or vice versa). Creating the mocks required for tests is commonly done in the test setup ([SetUp] in NUnit, the constructor in XUnit), so each test gets a newly created mock.
In terms of configuring these mocks, it depends on the situation and how you test. My preference is to configure them in each test with the minimum amount of configuration necessary. This is a good way of communicating exactly what that test requires of its dependencies. There is nothing wrong with some duplication in these cases.
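As a minimal sketch of that approach, using NUnit and NSubstitute (the framework mentioned in the question); IDataProcessor, ReportBuilder and Report are invented names. The substitute is recreated in SetUp so no state leaks between tests, and each test configures only what it actually needs.

using System.Linq;
using NSubstitute;
using NUnit.Framework;

public interface IDataProcessor { int[] Load(string source); }

public record Report(int Total);

public class ReportBuilder
{
    private readonly IDataProcessor _processor;
    public ReportBuilder(IDataProcessor processor) => _processor = processor;
    public Report Build(string source) => new Report(_processor.Load(source).Sum());
}

[TestFixture]
public class ReportBuilderTests
{
    private IDataProcessor _processor;
    private ReportBuilder _builder;

    [SetUp]
    public void SetUp()
    {
        _processor = Substitute.For<IDataProcessor>(); // a fresh mock for every test
        _builder = new ReportBuilder(_processor);
    }

    [Test]
    public void Uses_the_processed_data_in_the_report()
    {
        _processor.Load("sales").Returns(new[] { 1, 2, 3 }); // only what this test needs

        var report = _builder.Build("sales");

        Assert.That(report.Total, Is.EqualTo(6));
    }

    [Test]
    public void Asks_the_processor_for_the_requested_source()
    {
        _builder.Build("sales");

        _processor.Received(1).Load("sales"); // behaviour check; nothing else configured
    }
}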
If a number of tests require the same configuration, I would consider using a scenario-based test fixture (link disclaimer: shameless self-promotion). A scenario could be something like When_the_service_is_unavailable, and the setup for that scenario could configure the mocked service to throw an exception or return an error code. Each test then makes assertions based on that common configuration/scenario (e.g. should display error message, should send email to admin etc).
Another option if you have lots of duplicated bits of configuration is to use a Test Data Builder. This gives you reusable ways of configuring a number of different aspects of your mock or any other test data.
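A minimal Test Data Builder sketch; the Customer type and its defaults are invented. Each test states only the aspects it cares about and gets sensible defaults for everything else, which removes duplicated set-up without forcing a shared fixture.

public record Customer(string Name, string Country, bool IsPremium);

public class CustomerBuilder
{
    // Sensible defaults; tests override only what matters to them.
    private string _name = "Jane Doe";
    private string _country = "SE";
    private bool _isPremium = false;

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder From(string country) { _country = country; return this; }
    public CustomerBuilder Premium() { _isPremium = true; return this; }

    public Customer Build() => new Customer(_name, _country, _isPremium);
}

// Usage in a test:
//   var customer = new CustomerBuilder().From("NO").Premium().Build();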
Finally, if you're finding a large amount of configuration is required it might be worth considering changing the interface of the test dependency to be less "chatty". By looking for a valid abstraction that reduces the number of calls required by the class under test you'll have less to configure in your tests, and have a nice encapsulation of the responsibilities on which that class depends.
It is worth experimenting with a few different approaches and seeing what works for you. Any removal of duplication needs to be balanced with keeping each test case independent, simple, maintainable and reliable. If you find you have a large number of tests fail for small changes, or that you can't figure out the configuration an individual tests needs, or if tests fail depending on the order in which they are run, then you'll want to refine your approach.
I would create new mocks for each test - if you re-use them you may get unexpected behaviour where the state of the mock from earlier tests affects the outcome of later tests.
It's hard to provide a general answer without looking at a specific case.
I'd stick with the same approach as I do everywhere else: first look at the tests as independent beings, then look for similarities and extract the common part out.
Your goal here is to follow DRY, so that your tests are maintainable in case the requirements change.
So...
If it's obvious that every test in a group is going to use the same mock behaviour, provide it in your common set-up
If each of them is significantly different, as in: the content of the mock constitutes a significant part of what you're testing and the test/mock relationship looks like 1:1, then it's reasonable to keep them close to the tests
If the mocks differ between tests, but only to some degree, you still want to avoid redundancy. A common SetUp won't help you, but you may want to introduce a utility like PrepareMock(args...) that covers the different cases (sketched below). This will keep your actual test methods free of repetitive set-up, but still let you introduce any degree of difference between them.
The tests look nice when you extract all similarities upwards (to a SetUp or helper methods) so that the only thing that remains in test methods is what's different between them.
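As a rough illustration of the PrepareMock(args...) idea from the list above, here is a sketch using NSubstitute; IPricingService and its defaults are invented. Each test calls the helper and overrides only the arguments that matter for its case.

using NSubstitute;

public interface IPricingService
{
    decimal GetPrice(string sku);
    bool IsAvailable(string sku);
}

public static class MockFactory
{
    // One place to encode the common shape of the mock.
    public static IPricingService PreparePricingService(
        decimal defaultPrice = 10m,
        bool isAvailable = true)
    {
        var service = Substitute.For<IPricingService>();
        service.GetPrice(Arg.Any<string>()).Returns(defaultPrice);
        service.IsAvailable(Arg.Any<string>()).Returns(isAvailable);
        return service;
    }
}

// In a test:
//   var pricing = MockFactory.PreparePricingService(defaultPrice: 99m);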
I've recently started practising TDD and unit testing, with my main primers being the excellent GOOSGBT (Growing Object-Oriented Software, Guided by Tests) and a perusal of TDD-tagged questions here on SO.
Occasionally, the process I use creates a "controller" class - generally, a class which is a facade over a fairly complex subsystem where, as the number of features implemented in the subsystem grows, responsibilities are continually driven out into helper classes until the original class has essentially no responsibilities beyond making correct calls to a small set of collaborator classes and shunting the returned information (if any) to its other collaborator classes.
Originally, the tests for the soon-to-be controller classes were written at the level of intention of end-users of the class: "If I make this call, what should be the observable effects that I, as an end-user of the class, actually care about?". But as more and more responsibilities and tests for edge-cases were driven out into helper classes (which are replaced by Test Doubles in the tests for the controller class), these tests began to seem really ... vague and non-specific: they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter. It's hard to explain what I mean, but reading the tests back left me with a kind of "So what? Why did you choose this particular happy-path test over any other? What is the significance? If someone breaks the code, will this test pinpoint the exact reason why the code is now broken?" As time went by, I was more and more strongly inclined to instead write the tests in terms of how the classes' collaborators were supposed to be used together: "the class should call this method on this collaborator, and pass the result to this other collaborator" which gave a much more focussed, descriptive and clearly-motivated set of tests (and generally, the resulting set of tests is small).
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
So my question is: do fellow TDD'ers find that a minority of classes do little but marshal their (small) set of collaborators around? Do you find that keeping the tests for such classes written from an end-user of the class's point of view is imprecise and unsatisfactory, and if so, is it acceptable to write tests for such classes explicitly in terms of how they call and transfer data between their collaborators?
Hope it's reasonably clear what I'm driving at, here! :)
As a concrete example: one practice project I was working on was a TV listings downloader/viewer (if you've ever seen "Digiguide", you'll know the kind of thing I mean), and I was implementing a core part of the app - the part that actually updates the listings over the net and integrates the newly downloaded listings into the current set of known TV programs. The interface to this (surprisingly complex when all requirements are taken on board) functionality was a class called ListingsUpdater, which had a method called "updateListings".
Now, end-users of ListingsUpdater only really care about a few things: after updateListings has been called, is the database of TV listings now correct, and were the changes made to the database (adding TV programs, changing them if broadcast changes occurred, etc.) described to the provided change listeners? When the implementation was a very, very simple "fake it till you make it" type of deal, this worked fine: but as I progressively drove the implementation towards one that would work in the real world, the "real work" got driven further and further away from ListingsUpdater, until it mainly just marshalled a few collaborators: a ListingsRequestPreparer for assessing the current state of the listings and building HTTP requests for a ListingsDownloader, and a ListingsIntegrator which unpacked the newly downloaded listings and incorporated them (it too delegating to collaborators) into the listings database. Now, note that in order to fulfil the contract of ListingsUpdater from a user's point of view, I must, in the test, instruct its ListingsIntegrator Test Double to populate the (fake) database with the correct data(!), which seems bizarre. It seems much more sensible to drop the "from the end-user of ListingsUpdater's point of view" tests and instead add a test that says "when the ListingsDownloader has downloaded the new listings, ensure they are handed over to the ListingsIntegrator".
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
I'll repeat what I said in answer to another question:
I need to create either a mock, a stub, or a dummy object [a test double] for each dependency
This is commonly stated. But I think it is wrong. If a Car is associated with an Engine object, why not use a real Engine object when unit testing your Car class?
But, someone will declare, if you do that you are not unit testing your code; your test depends on both the Car class and the Engine class: two units, so an integration test rather than a unit test. But do those people mock the String class too? Or HashSet<String>? Of course not. The line between unit and integration testing is not so clear.
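The Car/Engine point, sketched as code with hypothetical classes: the test for Car simply uses a real Engine, the same way it would use a real String or HashSet.

using NUnit.Framework;

public class Engine
{
    public bool IsRunning { get; private set; }
    public void Start() => IsRunning = true;
}

public class Car
{
    private readonly Engine _engine;
    public Car(Engine engine) => _engine = engine;

    public void Drive()
    {
        if (!_engine.IsRunning)
            _engine.Start();
    }
}

[TestFixture]
public class CarTests
{
    [Test]
    public void Drive_starts_the_engine()
    {
        var engine = new Engine();   // a real collaborator, not a test double
        var car = new Car(engine);

        car.Drive();

        Assert.That(engine.IsRunning, Is.True);
    }
}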
More philosophically, you can not create good mock objects [test doubles] in many cases. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it does delegate, and how, is left by the contract as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure. And debugging that test failure will be easy.
And I'll add in response to
they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter
This is a more general testing problem, not specific to TDD or unit testing: how do you select a good set of test-cases, given that comprehensive testing is impossible? I rely on equivalence partitioning. When I start work on some code, I use equivalence partitioning to select the set of test-cases I want the code to pass, then work on each in turn in a TDD manner, but if passing one of the test-cases does not require a code change (because earlier work has created code that also satisfies that test case) I still add the test-case to my test-suite. My test suite therefore has better coverage of potential error paths.
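A small sketch of equivalence partitioning in practice (NUnit); ParseQuantity and its rules are invented. The idea is to pick one representative case per partition, plus the boundaries, rather than many cases from the same partition.

using System;
using NUnit.Framework;

public static class Quantities
{
    public static int ParseQuantity(string input)
        => int.TryParse(input, out var n) && n > 0
            ? n
            : throw new ArgumentException($"Not a positive integer: '{input}'");
}

[TestFixture]
public class ParseQuantityTests
{
    [TestCase("1")]   // boundary: smallest valid value
    [TestCase("42")]  // typical valid value
    public void Accepts_positive_integers(string input)
        => Assert.That(Quantities.ParseQuantity(input), Is.EqualTo(int.Parse(input)));

    [TestCase("0")]   // boundary: zero
    [TestCase("-3")]  // negative numbers
    [TestCase("abc")] // non-numeric text
    [TestCase("")]    // empty input
    public void Rejects_inputs_outside_the_valid_partition(string input)
        => Assert.Throws<ArgumentException>(() => Quantities.ParseQuantity(input));
}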
I have a .NET application with a web front-end and a WCF Windows service back-end. The application is fairly simple - it takes some user input and sends it to the service. The service takes the input (an Excel spreadsheet), extracts the data items, and checks the SQL database to make sure the items don't already exist; if they don't, we make a real-time request to a third-party data vendor, retrieve the results, and insert them into the database. It does some logging along the way.
I have a Job class with a single public ctor and public Run() method. The ctor takes all the params, and the Run() method does all of the above logic. Each logical piece of functionality is split into a separate class - IParser does file parsing, IConnection does the interaction with the data vendor, IDataAccess does the data access, etc. The Job class has private instances of these interfaces, and uses DI to construct the actual implementations by default, but allows the class user to inject any interface.
In the real code, I use the default ctor. In my unit tests for the Run() method, I use all mock objects, created via NMock 2.0. This Run() method is essentially the 'top level' function of this application.
Now here's my issue/question: the unit tests for this Run() method are crazy. I have three mock objects I'm sending into the ctor, and each mock object has expectations set on it. At the end I verify. There are a few different flows the Run method can take, each flow having its own test - it could find everything is already in the database and not make a request to the vendor... or an exception could be thrown and the job status could be set to 'failed'... or we could have the case where we didn't have the data and needed to make the vendor request (so all those function calls would need to be made).
Now - before you yell at me and say 'your Run() method is too complicated!' - this Run method is a mere 50 lines of code! (It does make calls to some private functions, but the entire class is only 160 lines.) All the 'real' logic is done in the interfaces that are declared on this class. However, the biggest unit test on this function is 80 lines of code, with 13 calls to Expect.BLAH().
This makes re-factoring a huge pain. If I want to change this Run() method around, I have to go edit my three unit tests and add/remove/update Expect() calls. When I need to refactor, I end up spending more time creating my mock calls than I did actually writing the new code. And doing real TDD on this function makes it even more difficult if not impossible. It's making me think that it's not even worth unit testing this top level function at all, since really this class isn't doing much logic, it's just passing around data to its composite objects (which are all fully unit tested and don't require mocking).
So - should I even bother testing this high level function? And what am I gaining by doing this? Or am I completely misusing mock/stub objects here? Perhaps I should scrap the unit tests on this class, and instead just make an automated integration test, which uses the real implementations of the objects and Asserts() against SQL Queries to make sure the right end-state data exists? What am I missing here?
EDIT: Here's the code - the first function is the actual Run() method - then my five tests which test all five possible code paths. I changed it some for NDA reasons but the general concept is still there. Anything you see wrong with how I'm testing this function, any suggestions on what to change to make it better? Thanks.
I guess my advice echoes most of what is posted here.
It sounds as if your Run method needs to be broken down more. If its design is forcing you into tests that are more complicated than it is, something is wrong. Remember this is TDD we're talking about, so your tests should dictate the design of your routine. If that means testing private functions, so be it. No technological philosophy or methodology should be so rigid that you can't do what feels right.
Additionally, I agree with some of the other posters that your tests should be broken down into smaller segments. Ask yourself this: if you were writing this app for the first time and your Run function didn't yet exist, what would your tests look like? That answer is probably not what you have currently (otherwise you wouldn't be asking the question). :)
The one benefit you do have is that there isn't a lot of code in the class, so refactoring it shouldn't be very painful.
EDIT
Just saw you posted the code and had some thoughts (no particular order).
Way too much code (IMO) inside your SyncLock block. The general rule is to keep the code inside a SyncLock to a minimum. Does it ALL have to be locked?
Start breaking code out into functions that can be tested independently. Example: the For loop that removes IDs from the List(String) if they exist in the DB. Some might argue that the m_dao.BeginJob call should be in some sort of GetID function that can be tested.
Can any of the m_dao procedures be turned into functions that can be tested on their own? I would assume that the m_dao class has its own tests somewhere, but looking at the code it appears that might not be the case. They should, along with the functionality in the m_Parser class. That will relieve some of the burden of the Run tests.
If this were my code, my goal would be to get it to a place where all the individual procedure calls inside Run are tested on their own and the Run tests just test the final outcome. Given input A, B, C: expect outcome X. Given input E, F, G: expect Y. The detail of how Run gets to X or Y is already tested in the other procedures' tests.
These were just my initial thoughts. I'm sure there are a bunch of different approaches one could take.
Two thoughts: first you should have an integration test anyway to make sure everything hangs together. Second, it sounds to me like you're missing intermediate objects. In my world, 50 lines is a long method. It's hard to say anything more precise without seeing the code.
The first thing I would try would be refactoring your unit tests to share the set-up code between tests, by extracting a method that sets up the mocks and expectations. Parameterize that method so your expectations are configurable. You may need one or more of these set-up methods, depending on how similar the set-up is from test to test.
So - should I even bother testing this high level function?
Yes. If there are different code paths, you should.
And what am I gaining by doing this? Or am I completely misusing mock/stub objects here?
As J.B. pointed out (nice seeing you at AgileIndia2010!), Fowler's article is recommended reading. As a gross simplification: use stubs when you don't care about the values returned by the collaborators. If the return values from collaborator.call_method() change the behavior (or you need non-trivial checks on arguments, or computation for return values), you need mocks.
Suggested refactorings:
Try moving the creation and injection of mocks into a common Setup method. Most unit testing frameworks support this; it will be called before each test.
Your LogMessage calls are beacons - calling out once again for intention-revealing methods, e.g. SubmitBARRequest(). This will shorten your production code.
Try to move each Expect.Blah1(..) call into an intention-revealing method as well.
This will shorten your test code and make it immensely more readable and easier to modify, e.g.
Replace all instances of
Expect.Once.On(mockDao) _
    .Method("BeginJob") _
    .With(New Object() {submittedBy, clientID, runDate, "Sent For Baring"}) _
    .Will([Return].Value(0))
with
ExpectBeginJobOnDAO_AndReturnZero() ' you can name it better
On whether to test such a function: you said in a comment,
" the tests read just like the actual
function, and since im using mocks,
its only asserting the functions are
called and sent params (i can check
this by eyeballing the 50 line
function)"
IMHO eyeballing the function isn't enough - haven't you heard "I can't believe I missed that!"? You have a fair number of scenarios that could go wrong in that Run method; covering that logic is a good idea.
On tests being brittle: try having some shared methods in the test class that you can use for the common scenarios. If you are concerned about a later change breaking all the tests, put the pieces that concern you in specific methods that can be changed if needed.
On tests being too long / hard to know what's in there: don't test a single scenario with every single assertion related to it. Break it up: test things like "it should log x messages when y happens" (one test), "it should save to the db when y happens" (another, separate test), "it should send a request to a third party when z happens" (yet another test), etc.
On doing integration/system tests instead of these unit tests: you can see from your current situation that there are plenty of scenarios and little variations involved in that part of your system - and that's with the shield of replacing yet more logic with those mocks and the ease of simulating different conditions. Doing the same with the whole thing will add a whole new level of complexity, something that is surely unmanageable if you want to cover a wide set of scenarios.
IMHO you should minimize the combinations that you leave to your system tests; exercising some main scenarios should already tell you that a lot of the system is working correctly - they should mostly be about everything being hooked up correctly.
The above said, I do recommend adding focused integration tests for all the integration code you have that might not currently be covered by your tests, since by definition unit tests don't get there. These exercise specifically the integration code with all the variations you expect from it - the corresponding tests are much simpler than trying to reach those behaviors in the context of the whole system, and they tell you very quickly if any assumption in those pieces is causing trouble.
If you think unit tests are too hard here, do this instead: add post-conditions to the Run method. A post-condition is an assertion you make about the state of the code at a given point. For example, at the end of that method, you may want some variable to hold a particular value, or one value out of a set of possible choices.
Afterwards, you can derive the pre-conditions for the method. These are basically the data type of each parameter and the limits and constraints on each of those parameters (and on any other variable initialized at the beginning of the method).
In this way, you can be sure both the input and output are what is desired.
That probably still won't be enough so you will have to look at the code of the method line by line and look for large sections that you want to make assertions about. If you have an If statement, you should check for some conditions before and after it.
You won't need any mock objects if you know how to check if the arguments to the object are valid and you know what range of outputs are desired.
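A minimal sketch of that idea using plain Debug.Assert checks; the Job shape, the status values and the parameter are invented.

using System.Diagnostics;

public class Job
{
    public string Status { get; private set; } = "Pending";

    public void Run(string inputPath)
    {
        // Pre-conditions: constraints on the parameter and on the starting state.
        Debug.Assert(!string.IsNullOrWhiteSpace(inputPath), "inputPath must be provided");
        Debug.Assert(Status == "Pending", "Run() must start from the Pending state");

        // ... the actual work would go here ...
        Status = "Completed";

        // Post-condition: the method must leave the object in one of the expected states.
        Debug.Assert(Status == "Completed" || Status == "Failed",
                     "Run() must end in the Completed or Failed state");
    }
}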
Your tests are too complicated.
You should test aspects of your class rather than writing a unit test for each member of your class. A single unit test should not have to cover the entire functionality of a member.
I'm going to guess that each test for Run() sets expectations on every method it calls on the mocks, even if that test doesn't focus on checking every such method invocation. I strongly recommend you Google "mocks aren't stubs" and read Fowler's article.
Also, 50 lines of code is pretty complex. How many codepaths through the method? 20+? You might benefit from a higher level of abstraction. I'd need to see code to judge more certainly.
Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit testing. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too; I think there is a balance between some mocks and some real objects that you will have to find for yourself.
There is one really good reason to use stubs/mocks instead of real classes: to keep the class under test in your (pure) unit test isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests become sluggish. Slow tests make developers run unit tests less often, and if you're doing Test-Driven Development, time-hogging tests are collectively a devastating waste of developers' time.
It is easier to track down problems when the test is isolated to the class under test. In contrast, with a system test it is much more difficult to track down nasty bugs that are not immediately visible in stack traces or whatnot.
Tests are less fragile to changes in external classes/interfaces, because you're purely testing the class under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against the external behaviour of a class rather than its internal implementation, which is more useful when deciding on a design.
Now, if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests - in practice they are mostly run before committing to your code repository - but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from an answer of mine to "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets or serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than the effort to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry, as mocks generated by mocking frameworks can be given precise expectations about how they will be used, e.g. the number of methods called, the precise sequence, and the parameters expected each time. Your real objects cannot do this for you, so you risk losing depth in your tests.
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
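A sketch of such a hand-written fake with cooked, in-memory data; IProductRepository and Product are invented names. Any class written against the interface can be tested with this fake even when the real database is unavailable.

using System.Collections.Generic;

public record Product(int Id, string Name);

public interface IProductRepository
{
    Product FindById(int id);
    void Save(Product product);
}

public class InMemoryProductRepository : IProductRepository
{
    // Cooked data held in a plain dictionary instead of a database.
    private readonly Dictionary<int, Product> _data = new Dictionary<int, Product>
    {
        [1] = new Product(1, "Widget"),
        [2] = new Product(2, "Gadget"),
    };

    public Product FindById(int id) => _data.TryGetValue(id, out var p) ? p : null;

    public void Save(Product product) => _data[product.Id] = product;
}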
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use Interfaces and injection where it's wrong. Also if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.
A while ago I read the Mocks Aren't Stubs article by Martin Fowler, and I must admit I'm a bit scared of external dependencies with regard to the added complexity, so I would like to ask:
What is the best method to use when unit testing?
Is it better to always use a mock framework to automatically mock the dependencies of the method being tested, or would you prefer to use simpler mechanisms like for instance test stubs?
As the mantra goes 'Go with the simplest thing that can possibly work.'
If fake classes can get the job done, go with them.
If you need an interface with multiple methods to be mocked, go with a mock framework.
Avoid always using mocks, because they make tests brittle. Your tests now have intricate knowledge of the methods called by the implementation; if the mocked interface or your implementation changes... your tests break. This is bad because you'll spend additional time getting your tests to run instead of just getting your SUT to run. Tests should not be inappropriately intimate with the implementation.
So use your best judgment. I prefer mocks when they'll save me from writing and updating a fake class with n >> 3 methods.
Update Epilogue/Deliberation:
(Thanks to Toran Billups for the example of a mockist test; see below.)
Hi Doug, well I think we've crossed into another holy war - classic TDDers vs. mockist TDDers. I think I belong to the former.
Say I am on test #101, Test_ExportProductList, and I find I need to add a new param to IProductService.GetProducts(). I do that to get this test green and use a refactoring tool to update all other references. Now I find that all the mockist tests calling this member blow up, and I have to go back and update all of those tests - a waste of time. Why did ShouldPopulateProductsListOnViewLoadWhenPostBackIsFalse fail? Was it because the code is broken? No - the tests are broken. I favor "one test failure = one place to fix"; frequent mocking goes against that. Would stubs be better? If I had a fake_class.GetProducts()... sure, one place to change instead of shotgun surgery over multiple Expect calls. In the end it's a matter of style: if you had a common utility method like MockHelper.SetupExpectForGetProducts(), that would also suffice, but you'll see that this is uncommon.
If you place a white strip over the test name, the test is hard to read: the plumbing code for the mock framework hides the actual test being performed.
It also requires you to learn this particular flavor of mocking framework.
I generally prefer to use mocks because of expectations. When you call a method on a stub that returns a value, it typically just gives you back the value. But when you call a method on a mock, not only does it return a value, it also enforces the expectation you set up that the method would be called in the first place. In other words, if you set up an expectation and then don't call that method, an exception gets thrown. When you set an expectation, you are essentially saying "if this method doesn't get called, something went wrong." The opposite is also true: if you call a method on a mock and did not set an expectation, it will throw an exception, essentially saying "hey, what are you doing calling this method when you didn't expect it?"
Sometimes you don't want expectations on every method you're calling, so some mocking frameworks will allow "partial" mocks that are like mock/stub hybrids, in that only the expectations you set are enforced, and every other method call is treated more like a stub in that it just returns a value.
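A sketch of the difference, using Moq for illustration; the interfaces and classes are invented. The first test uses the double purely as a stub that feeds in a canned value, while the second treats it as a mock whose expectation is enforced (strict behaviour makes unexpected calls throw, and Verify fails if the call never happened).

using Moq;
using NUnit.Framework;

public interface IRateService { decimal GetRate(string currency); }

public class PriceConverter
{
    private readonly IRateService _rates;
    public PriceConverter(IRateService rates) => _rates = rates;
    public decimal ToEur(decimal amount) => amount * _rates.GetRate("EUR");
}

[TestFixture]
public class PriceConverterTests
{
    [Test]
    public void Stub_only_supplies_a_canned_value()
    {
        var rates = new Mock<IRateService>();               // loose by default
        rates.Setup(r => r.GetRate("EUR")).Returns(1.10m);

        var result = new PriceConverter(rates.Object).ToEur(100m);

        Assert.That(result, Is.EqualTo(110m));              // state-style assertion only
    }

    [Test]
    public void Mock_enforces_that_the_call_actually_happens()
    {
        var rates = new Mock<IRateService>(MockBehavior.Strict); // unexpected calls throw
        rates.Setup(r => r.GetRate("EUR")).Returns(1.10m);

        new PriceConverter(rates.Object).ToEur(100m);

        rates.Verify(r => r.GetRate("EUR"), Times.Once());  // fails if it was never called
    }
}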
One valid place to use stubs I can think of off the top, though, is when you are introducing testing into legacy code. Sometimes it's just easier to make a stub by subclassing the class you are testing than refactoring everything to make mocking easy or even possible.
And to this...
Avoid always using mocks, because they make tests brittle. Your tests now have intricate knowledge of the methods called by the implementation; if the mocked interface changes... your tests break. So use your best judgment...
...I say if my interface changes, my tests had better break. Because the whole point of unit tests is that they accurately test my code as it exists right now.
It's best to use a combination, and you'll have to use your own judgement. Here are the guidelines I use:
If making a call to external code is part of your code's expected (outward-facing) behavior, this should be tested. Use a mock.
If the call is really an implementation detail which the outside world doesn't care about, prefer stubs. However:
If you're worried that later implementations of the tested code might accidentally go around your stubs, and you want to notice if that happens, use a mock. You're coupling your test to your code, but it's for the sake of noticing that your stub is no longer sufficient and your test needs re-working.
The second kind of mock is a sort of necessary evil. Really what's going on here is that whether you use a stub or a mock, in some cases you have to couple to your code more than you'd like. When that happens, it's better to use a mock than a stub only because you'll know when that coupling breaks and your code is no longer written the way your test thought it would be. It's probably best to leave a comment in your test when you do this so that whoever breaks it knows that their code isn't wrong, the test is.
And again, this is a code smell and a last resort. If you find you need to do this often, try rethinking the way you write your tests.
It just depends on what type of testing you are doing. If you are doing behavior-based testing, you might want a dynamic mock so you can verify that some interaction with your dependency occurs. But if you are doing state-based testing, you might want a stub so you can verify values, etc.
For example, in the test below you'll notice that I stub out the view so I can verify that a property value is set (state-based testing). I then create a dynamic mock of the service class so I can make sure a specific method gets called during the test (interaction/behavior-based testing).
<TestMethod()> _
Public Sub Should_Populate_Products_List_OnViewLoad_When_PostBack_Is_False()
    mMockery = New MockRepository()
    mView = DirectCast(mMockery.Stub(Of IProductView)(), IProductView)
    mProductService = DirectCast(mMockery.DynamicMock(Of IProductService)(), IProductService)
    mPresenter = New ProductPresenter(mView, mProductService)
    Dim ProductList As New List(Of Product)()
    ProductList.Add(New Product())

    Using mMockery.Record()
        SetupResult.For(mView.PageIsPostBack).Return(False)
        Expect.Call(mProductService.GetProducts()).Return(ProductList).Repeat.Once()
    End Using

    Using mMockery.Playback()
        mPresenter.OnViewLoad()
    End Using

    'Verify that we hit the service dependency during the method when postback is false
    Assert.AreEqual(1, mView.Products.Count)
    mMockery.VerifyAll()
End Sub
Never mind state-based vs. interaction-based testing. Think about the roles and relationships. If an object collaborates with a neighbour to get its job done, then that relationship (as expressed in an interface) is a candidate for testing using mocks. If an object is a simple value object with a bit of behaviour, then test it directly. I can't see the point of writing mocks (or even stubs) by hand; that's how we all started, and we refactored away from it.
For a longer discussion, consider taking a look at http://www.mockobjects.com/book
Read Luke Kanies' discussion of exactly this question in his blog post. He references a post from Jay Fields which even suggests that using [an equivalent to Ruby's/Mocha's] stub_everything is preferable, to make the tests more robust. To quote Fields' final words: "Mocha makes it as easy to define a mock as it is to define a stub, but that doesn't mean you should always prefer mocks. In fact, I generally prefer stubs and use mocks when necessary."