To mock or not to mock? [closed] - unit-testing

As far as I know from eXtreme Programming and unit testing, tests must be written by another developer before the tested method is developed (or by the same developer, but the test must be written before the method's implementation).
Ok, seems good: we just need to check that a method behaves correctly when we give it some parameters.
But the difference between theory and practice is that in theory there isn't one, but in practice there is...
The first time I tried to write tests, I found it difficult in some cases because of the relations between objects. I discovered the practice of mocking and found it very useful, but some concepts make me doubt.
First, mocking implicitly says: "You know how the method runs, because you must know what other objects it needs...". Well, in theory it's my friend Bob who writes the test, and all he knows is that the method must return true when I give it the string "john"... It's me who codes the method, using a DAO to access a database instead of an in-memory hashtable...
How will my poor friend Bob write his test? He would have to anticipate my work...
Ok, that's not quite pure theory, but no matter. Yet if I look at the documentation of a lot of mock frameworks, they let me check how many times a method is called and in what order! Ouch...
But if my friend Bob must test the method like that to ensure the dependencies are used correctly, the method must be written before the test, mustn't it?
Hmm... Help my friend Bob...
When should we stop using mock mechanisms (order verification and so on)?
When are mock mechanisms useful?
Theory, practice and mocks: what is the best balance?

What you seem to be missing from your description is the concept of separating contract from implementation. In C# and Java, we have interfaces. In C++, a class composed only of pure virtual functions can fill this role. These aren't really necessary, but helpful in establishing the logical separation. So instead of the confusion you seem to be experiencing, practice should go more like: Bob writes the unit tests for one particular class/unit of functionality. In doing so, he defines one or more interfaces (contracts) for other classes/units that will be needed to support this one. Instead of needing to write those right now, he fills them in with mock objects to provide for the indirect input and output required by his test and the system under test. Thus the output of a set of unit tests is not just the tests to drive development of a single unit, but that plus the contracts required to be implemented by other code to support the unit currently under development.
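To make that workflow concrete, here is a minimal sketch (the IUserDao/UserService names and the NUnit/Moq usage are my own assumptions, not anything from the question): Bob writes the test, defines the contract his unit needs, and fills it with a mock; the database-backed implementation of IUserDao can be written later without the test changing.
using Moq;
using NUnit.Framework;

// Contract Bob defines while writing the test; the real, database-backed DAO comes later.
public interface IUserDao
{
    bool Exists(string username);
}

// The unit being specified (shown here only so the sketch compiles).
public class UserService
{
    private readonly IUserDao _dao;
    public UserService(IUserDao dao) { _dao = dao; }
    public bool IsKnownUser(string name) { return _dao.Exists(name); }
}

[TestFixture]
public class UserServiceTests
{
    [Test]
    public void IsKnownUser_returns_true_for_john()
    {
        // Bob only cares about the contract, not about the eventual DAO implementation.
        var dao = new Mock<IUserDao>();
        dao.Setup(d => d.Exists("john")).Returns(true);

        var service = new UserService(dao.Object);

        Assert.IsTrue(service.IsKnownUser("john"));
    }
}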

I'm not sure if I understand your question.
Use mocks to verify collaboration between objects. For example, suppose you have a Login() method that takes a username and password. Now suppose you want this method to log failed login attempts. In your unit test you would create a mock Logger object and set an expectation on it that it will be called. Then you would dependency-inject it into your login class and call your Login method with a bad username and password to trigger a log message.
The other tool you have in your unit testing tool bag is stubs. Use stubs when you're not testing collaborations but simply need to fake dependencies in order to get your class under test to run.
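A minimal sketch of both ideas, assuming NUnit and Moq and hypothetical ILogger/IUserStore/LoginService types: the user store is a stub that merely lets the class run, while the logger is a mock whose collaboration is verified.
using Moq;
using NUnit.Framework;

public interface ILogger { void LogFailedLogin(string username); }
public interface IUserStore { bool IsValid(string username, string password); }

public class LoginService
{
    private readonly IUserStore _users;
    private readonly ILogger _logger;
    public LoginService(IUserStore users, ILogger logger) { _users = users; _logger = logger; }

    public bool Login(string username, string password)
    {
        if (_users.IsValid(username, password)) return true;
        _logger.LogFailedLogin(username);
        return false;
    }
}

[TestFixture]
public class LoginServiceTests
{
    [Test]
    public void Failed_login_is_logged()
    {
        // Stub: fakes the dependency so the class under test can run.
        var users = new Mock<IUserStore>();
        users.Setup(u => u.IsValid("bob", "wrong")).Returns(false);

        // Mock: the collaboration with it is what we verify.
        var logger = new Mock<ILogger>();

        var service = new LoginService(users.Object, logger.Object);
        service.Login("bob", "wrong");

        logger.Verify(l => l.LogFailedLogin("bob"), Times.Once());
    }
}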
Roy Osherove, the author of The Art of Unit Testing, has a good video on mocks: TDD - Understanding Mock Objects
Also I recommend going to his website http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos".

When you are writing a unit test you are testing the outcome and/or behavior of the class under test against an expected outcome and/or behavior.
Expectations can change over the time that you develop the class - new requirements can come in that change how the class should behave or what the outcome of calling a particular method is. It is never set in stone and unit tests and the class under tests evolve together.
Initially you might start out with just a few basic tests on a very granular level, which then evolve into more and more tests, some of which might be very particular to the actual implementation of your class under test (at least as far as the observable behavior of that class is concerned).
To some degree you can write many of your tests against a raw stub of your class under test that produces the expected behavior but mostly has no implementation yet. Then you can refactor/develop the class "for real".
In my opinion, though, it is a pipe dream to write all your tests at the beginning and then fully develop the class - in my experience both the tests and the class under test evolve together. Both can be written by the same developer too.
Then again I am certainly not a TDD purist, just trying to get the most out of unit tests in a pragmatic way.

I'm not sure exactly what the problem is, so I may not answer the question accurately, but I'll give it a try.
Suppose you are writing system A, where A needs to get data (let's say a String for simplicity) from a provider B, then reverse that String and send it to another system C.
B and C are provided to you, and they are actually interfaces; the real-life implementations might be BImpl and CImpl.
For the purposes of your work, you know that you need to call readData() on system B and sendData(String) on system C. Your friend Bob should know that as well: you shouldn't send the data before you get it, and if you get "abcd" you should send "dcba".
It looks like both you and Bob should know this. He writes the tests, and you write the code... where is the problem in that?
Of course real life is more complicated, but you should still be able to model it with simple interactions that you unit test.
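For instance, a sketch of that scenario, assuming NUnit and Moq and hypothetical interface names: Bob's test only states that whatever B provides gets reversed and handed to C, without caring how you eventually implement it.
using System.Linq;
using Moq;
using NUnit.Framework;

public interface IProviderB { string ReadData(); }
public interface IConsumerC { void SendData(string data); }

public class SystemA
{
    private readonly IProviderB _b;
    private readonly IConsumerC _c;
    public SystemA(IProviderB b, IConsumerC c) { _b = b; _c = c; }

    public void Process()
    {
        string data = _b.ReadData();
        string reversed = new string(data.Reverse().ToArray());
        _c.SendData(reversed);
    }
}

[TestFixture]
public class SystemATests
{
    [Test]
    public void Reverses_what_it_reads_and_sends_it_on()
    {
        var b = new Mock<IProviderB>();
        b.Setup(p => p.ReadData()).Returns("abcd");
        var c = new Mock<IConsumerC>();

        new SystemA(b.Object, c.Object).Process();

        c.Verify(x => x.SendData("dcba"), Times.Once());
    }
}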

Related

Is creating "testable code" always consistent with following the best OOP design principles?

This is perhaps too general/subjective a question for StackOverflow, but I've been dying to ask it somewhere.
Maybe it's because I'm new to the software engineering world, but it seems like the buzzwords I've been hearing for the past couple of years are things like
"testable code"
"test coverage"
"pure functions"
"every code path in your entire application covered by a test that is a pure in-memory test -- doesn't open database connections or anything. Then we'll know that the code we deploy is guaranteed to work" (yea, right lol)
and sometimes I find this hard to reconcile with the way I want to design my application.
For example, one thing that happens often is I have a complex algorithm inside one or more private methods
private static void DoFancyAlgorithm(string first, string second)
{
// ...
}
and I want or need it to have test coverage.
Well, since you're not supposed to directly test private methods, I have 3 options:
make the method accessible for test (maybe InternalsVisibleTo in C# or friend in C++)
move the logic to a separate class whose "single responsibility" is dealing with this logic, even though from an OOP perspective I believe the logic should be confined to the class it is currently inside, and test that separate class
leave the code as-is and spend 100+ hours figuring out how to set up the scenario where I indirectly test the logic of the method. Sometimes this means creating ridiculous mock objects to inject into the class.
So my question is:
Is creating "testable code" always consistent with the best OOP
practice or is there sometimes a tradeoff?
Creating testable code of course has consequences for the application design.
So you may have to make some trade-offs in the design, but they are generally limited.
Besides, unit testing of a component's API focuses on the input and output of the tested functions.
So I have difficulty understanding how you could end up with such a bad smell:
leave the code as-is and spend 100+ hours figuring out how to set up the scenario where I indirectly test the logic of the method. Sometimes this means creating ridiculous mock objects to inject into the class.
In most cases, if the setup/understanding of the unit test scenarios consumes so much time, it very probably means that:
the component API is not clear,
and/or you are mocking too many things, and you should wonder whether you should favor integration tests (without mocks, or limiting them) instead of unit tests (with mocks).
In the case of legacy code that is running in production, refactoring the code first in order to write new unit test cases is NOT the way to go.
Instead, it is better to first write unit test cases for the existing code and check them in. With that safety net in place (from then on you have to keep all the unit test cases passing at every step), you can refactor your code (and test cases) in small steps. The goal of these refactoring steps should be to make the code follow the best OOP design principles.
The time spent writing new unit test cases for an existing codebase is the biggest disadvantage of working with legacy code.

TDD creating some "controller" classes - at what level of intention should its tests be written?

I've recently started practising TDD and unit testing, with my main primers being the excellent GOOSGBT and a perusal of TDD-tagged questions here on SO.
Occasionally, the process I use creates a "controller" class - generally, a class which is a facade over a fairly complex subsystem where, as the number of features implemented in the subsystem grows, responsibilities are continually driven out into helper classes until the original class has essentially no responsibilities beyond making correct calls to a small set of collaborator classes and shunting the returned information (if any) to its other collaborator classes.
Originally, the tests for the soon-to-be controller classes were written at the level of intention of end-users of the class: "If I make this call, what should be the observable effects that I, as an end-user of the class, actually care about?". But as more and more responsibilities and tests for edge-cases were driven out into helper classes (which are replaced by Test Doubles in the tests for the controller class), these tests began to seem really ... vague and non-specific: they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter. It's hard to explain what I mean, but reading the tests back left me with a kind of "So what? Why did you choose this particular happy-path test over any other? What is the significance? If someone breaks the code, will this test pinpoint the exact reason why the code is now broken?" As time went by, I was more and more strongly inclined to instead write the tests in terms of how the classes' collaborators were supposed to be used together: "the class should call this method on this collaborator, and pass the result to this other collaborator" which gave a much more focussed, descriptive and clearly-motivated set of tests (and generally, the resulting set of tests is small).
This obviously has its flaws: the tests are now strongly coupled to the specifics of the implementation of the controller class, rather than the much more flexible "what would an end-user of this class see that he would care about?". But really, the tests are already quite coupled to it by virtue of the fact that they must configure all of the Test Double collaborators to behave in the exact way required by the implementation to give the correct results from an end-user of the classes' point of view.
So my question is: do fellow TDD'ers find that a minority of classes do little but marshall their (small) set of collaborators around? Do you find keeping the tests for such classes written from the point of view of an end-user of the class to be imprecise and unsatisfactory, and if so, is it acceptable to write tests for such classes explicitly in terms of how they call and transfer data between their collaborators?
Hope it's reasonably clear what I'm driving at, here! :)
As a concrete example: one practise project I was working on was a TV listings downloader/ viewer (if you've ever seen "Digiguide", you'll know the kind of thing I mean), and I was implementing a core part of the app - the part that actually updates the listings over the net and integrates the newly downloaded listings into the current set of known TV programs. The interface to this (surprisingly complex when all requirements are taken on board) functionality was a class called ListingsUpdater, which had a method called "updateListings".
Now, end-users of ListingsUpdater only really care about a few things: after listingsUpdate has been called, is the database of TV listings now correct, and were the changes made to the database (adding TV programs, changing them if broadcast changes occurred etc) described to the provided change listeners? When the implementation was a very, very simple "fake it till you make it" type of deal, this worked fine: but as I progressively drove the implementation towards one that would work in the real-world, the "real work" got driven further and further away from ListingsUpdater, until it mainly just marshalled a few collaborators: a ListingsRequestPreparer for assessing the current state of the listings and building HTTP requests for a ListingsDownloader, and a ListingsIntegrator which unpacked the newly downloaded listings and incorporated them (it too delegating to collaborators) into the listings database. Now, note that in order to fulfil the contract of ListingsUpdater from a user's point of view, I must, in the test, instruct its ListingsIntegrator Test Double to populate the (fake) database with the correct data(!), which seems bizarre. It seems much more sensible to drop the "from the end-user of ListingsUpdater's point of view" tests and instead add a test that says "when the ListingsDownloader has downloaded the new listings ensure they are handed over to the ListingsIntegrator".
I'll repeat what I said in answer to another question:
I need to create either a mock, a stub or a dummy object [a test double] for each dependency
This is commonly stated. But I think it is wrong. If a Car is associated with an Engine object, why not use a real Engine object when unit testing your Car class?
But, someone will declare, if you do that you are not unit testing your code; your test depends on both the Car class and the Engine class: two units, so an integration test rather than a unit test. But do those people mock the String class too? Or HashSet<String>? Of course not. The line between unit and integration testing is not so clear.
More philosophically, you cannot create good mock objects [test doubles] in many cases. The reason is that, for most methods, the manner in which an object delegates to associated objects is undefined. Whether it does delegate, and how, is left by the contract as an implementation detail. The only requirement is that, on delegating, the method satisfies the preconditions of its delegate. In such a situation, only a fully functional (non-mock) delegate will do. If the real object checks its preconditions, failure to satisfy a precondition on delegating will cause a test failure. And debugging that test failure will be easy.
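To make the Car/Engine point concrete, here is a minimal sketch (the members of Car and Engine are assumptions of mine, not from the answer): the unit test for Car uses a real Engine, asserts only on the observable outcome, and leaves the delegation as an implementation detail.
using NUnit.Framework;

// Hypothetical classes, just to make the Car/Engine analogy concrete.
public class Engine
{
    public bool Running { get; private set; }
    public void Start() { Running = true; }
}

public class Car
{
    private readonly Engine _engine;
    public Car(Engine engine) { _engine = engine; }

    public bool Drive()
    {
        _engine.Start();          // how Car delegates is an implementation detail
        return _engine.Running;   // the observable outcome is what the test asserts on
    }
}

[TestFixture]
public class CarTests
{
    [Test]
    public void Drive_starts_the_engine_and_reports_success()
    {
        // Real Engine, no test double: cheap, deterministic, no extra dependencies.
        var car = new Car(new Engine());
        Assert.IsTrue(car.Drive());
    }
}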
And I'll add in response to
they were invariably "happy-path" tests that didn't really seem to get to the heart of the matter
This is a more general testing problem, not specific to TDD or unit testing: how do you select a good set of test cases, given that comprehensive testing is impossible? I rely on equivalence partitioning. When I start work on some code, I use equivalence partitioning to select the set of test cases I want the code to pass, then work on each in turn in a TDD manner; but if passing one of the test cases does not require a code change (because earlier work has created code that also satisfies that test case), I still add the test case to my test suite. My test suite therefore has better coverage of potential error paths.
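For example, a sketch of equivalence partitioning applied to a hypothetical validation routine (the AgeValidator and its 0-130 rule are assumptions of mine): one representative test case per partition, plus the boundaries.
using NUnit.Framework;

public static class AgeValidator
{
    // Hypothetical rule: valid ages are 0..130 inclusive.
    public static bool IsValidAge(int age) => age >= 0 && age <= 130;
}

[TestFixture]
public class AgeValidatorTests
{
    // One representative per equivalence class, plus the boundary values.
    [TestCase(-1, false)]   // below the valid range
    [TestCase(0, true)]     // lower boundary
    [TestCase(42, true)]    // typical valid value
    [TestCase(130, true)]   // upper boundary
    [TestCase(131, false)]  // above the valid range
    public void Age_partitions(int age, bool expected)
    {
        Assert.AreEqual(expected, AgeValidator.IsValidAge(age));
    }
}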

Unit Testing : what to test / what not to test?

A few days ago I started getting interested in Unit Testing and TDD in C# and VS2010. I've read blog posts, watched YouTube tutorials, and plenty more stuff that explains why TDD and Unit Testing are so good for your code, and how to do it.
But the biggest problem I find is that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations and problems with references and dependencies, but, for example, should I create a unit test for a string formatting that's supposed to be user input? Or is that just wasting my time when I can simply check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
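As a minimal illustration of that rule (the PriceCalculator example and the NUnit usage are assumptions of mine, not from the answer): the test is written first and fails, and only then is the production code added that makes it pass.
using NUnit.Framework;

// Red: written first; it fails (or doesn't even compile) until the code below exists.
[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Orders_of_100_or_more_get_a_ten_percent_discount()
    {
        Assert.AreEqual(90m, PriceCalculator.FinalPrice(100m));
    }
}

// Green: just enough production code to make the failing test pass.
public static class PriceCalculator
{
    public static decimal FinalPrice(decimal total)
    {
        return total >= 100m ? total * 0.9m : total;
    }
}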
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample, you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();

public void include(String item) {
    inclusions.add(item);
}

public void exclude(String item) {
    inclusions.add(item); // copy-paste error: should be exclusions.add(item)
}
On the other hand, testing the include() and exclude() methods alone is overkill because they do not represent any use-cases by themselves. However, they are probably part of some business use-case, which you should test instead.
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment, and testing generated getters/setters is overkill too. But it is the easiest code that often breaks, all too often due to copy&paste errors or typos (especially in dynamic languages).
See also:
Mutation testing
Your first few TDD projects are going to probably result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code and write unit tests instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it, all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test and easy code. And of course, you should test the refactored easy code.
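For instance, a sketch of that refactoring (the class names are hypothetical): the deterministic logic is pulled into an easily tested class, and the console-bound shell is left in the "don't want to test" bucket.
using System;
using NUnit.Framework;

// Easy to test: deterministic, no IO.
public static class ReportFormatter
{
    public static string FormatLine(string name, int quantity)
    {
        return name + " x" + quantity;
    }
}

// Hard / not worth unit testing: talks to the console directly,
// so it is kept as a thin shell around the tested logic.
public static class ReportPrinter
{
    public static void Print(string name, int quantity)
    {
        Console.WriteLine(ReportFormatter.FormatLine(name, quantity));
    }
}

[TestFixture]
public class ReportFormatterTests
{
    [Test]
    public void Formats_name_and_quantity()
    {
        Assert.AreEqual("Coffee x3", ReportFormatter.FormatLine("Coffee", 3));
    }
}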
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted/re-factored at will with no impact to the behavior of the system. If you were to test those, you'd create a hard dependency from your unit test to that code, which would prevent you from doing refactoring on it. That's why you should not test anything else but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string formatting that's supposed to be user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as IO) should be unit tested. This is probably the only case where I'd say non-public things that receive IO should also be unit tested, because I consider IO to be a public entry point. Anything that's an entry point for an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
write test (it must fail) RED
write code to satisfy test GREEN
refactor your code
So the question "What I have to test?" has a response like: "You have to write a test that correspond to a feature or a particular requirements".
In this way you get must code coverage and also a better code design (remember that TDD stands also for Test Driven "Design").
Generally speaking you have to test ALL public method/interfaces.
should I create a unit test for a string formatting that's supposed to be user input? Or is that just wasting my time when I can simply check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
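A small sketch of that distinction (the UsernameValidator name and its rules are assumptions of mine): the tests below exercise the validation code itself; they say nothing about whatever a real user will eventually type.
using NUnit.Framework;

public static class UsernameValidator
{
    // Hypothetical rule: a username is 3-20 characters and not blank.
    public static bool IsValid(string input)
    {
        return !string.IsNullOrWhiteSpace(input)
               && input.Length >= 3
               && input.Length <= 20;
    }
}

[TestFixture]
public class UsernameValidatorTests
{
    // These cases test the validation logic; they do not (and cannot) validate real user input.
    [TestCase(null, false)]
    [TestCase("", false)]
    [TestCase("ab", false)]
    [TestCase("john", true)]
    public void Validates_usernames(string input, bool expected)
    {
        Assert.AreEqual(expected, UsernameValidator.IsValid(input));
    }
}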

Unit test documentation [closed]

I would like to know, from those who document their unit tests, how they document them. I understand that most TDD followers claim "the code speaks" and thus test documentation is not very important because code should be self-descriptive. Fair enough, but I would like to know how to document unit tests, not whether to document them at all.
My experience as a developer tells me that understanding old code (this includes unit tests) is difficult.
So what is important in a test documentation? When is the test method name not descriptive enough so that documentation is justified?
As requested by Thorsten79, I'll elaborate on my comments as an answer. My original comment was:
"The code speaks" is unfortunately
completely wrong, because a
non-developer cannot read the code,
while he can at least partially read
and understand generated
documentation, and this way he can
know what the tests test. This is
especially important in cases where
the customer fully understands the
domain and just can't read code, and
gets even more important when the unit
tests also test hardware, like in the
embedded world, because then you test
things that can be seen.
When you're doing unit tests, you have to know whether you're writing them just for you (or for your co-workers), or if you're also writing them for other people. Many times, you should be writing code for your readers, rather than for your convenience.
In mixed hardware/software development like in my company, the customers know what they want. If their field device has to do a reset when receiving a certain bus command, there must be a unit test that sends that command and checks whether the device was reset. We're doing this here right now with NUnit as the unit test framework, and some custom software and hardware that makes sending and receiving commands (and even pressing buttons) possible. It's great, because the only alternative would be to do all that manually.
The customer absolutely wants to know which tests are there, and he even wants to run the tests himself. If the tests are not properly documented, he doesn't know what a test does and can't check whether all the tests he thinks he'll need are there; and when running a test, he doesn't know what it will do, because he can't read the code. He knows the bus system we use better than our developers do, but he just can't read code. If a test fails, he does not know why and cannot even say what he thinks the test should do. That's not a good thing.
Having documented the unit tests properly, we have
code documentation for the developers
test documentation for the customer, which can be used to prove that the device does what it should do, i.e. what the customer ordered
the ability to generate the documentation in any format, which can even be passed to other involved parties, like the manufacturer
Properly in this context means: Write clear language that can be understood by non-developers. You can stay technical, but don't write things only you can understand. The latter is of course also important for any other comments and any code.
Independent of our exact situation, I think that's what I would want in unit tests all the time, even if they're pure software. A customer can ignore a unit test he doesn't care about, like basic function tests. But having the docs there never hurts.
As I've written in a comment to another answer: In addition, the generated documentation is also a good starting point if you (or your boss, or co-worker, or the testing department) wants to examine which tests are there and what they do, because you can browse it without digging through the code.
In the test code itself:
With method-level comments explaining what the test is testing/covering.
At the class level, a comment indicating the actual class being tested (which could actually be inferred from the test class name, so that's less important than the comments at the method level).
With test coverage reports:
Such as Cobertura. That's also documentation, since it indicates what your tests are covering and what they're not.
Comment complex tests or scenarios if required but favour readable tests in the first place.
On the other hand, I try and make my tests speak for themselves. In other words:
[Test]
public void person_should_say_hello() {
    // Arrange.
    var person = new Person();

    // Act.
    string result = person.SayHello();

    // Assert.
    Assert.AreEqual("Hello", result, "Person did not say hello");
}
If I were to look at this test, I'd see that it uses Person (though the PersonTest.cs file name would be a clue ;)) and that if anything breaks it will be in the SayHello method. The assert message is useful as well, not only when reading tests but also when tests are run, since it's easier to see the failures in GUIs.
Following the AAA style of Arrange, Act and Assert makes the test essentially document itself. If this test was more complex, you could add comments above the test function explaining what's going on. As always, you should ensure these are kept up to date.
As a side note, using underscore notation for test names makes them much more readable; compare this to:
public void PersonShouldSayHello()
which, for long method names, can make reading the test more difficult. Though this point is often subjective.
When I come back to an old test and don't understand it right away:
I refactor if possible
or write that comment that would have made me understand it right away
When you are writing your test cases, it is the same as when you are writing your code: everything is crystal clear to you. That makes it difficult to envision what you should write to make the code clearer.
Note that this does not mean I never write any comments. There are still plenty of situations when I just know that I am going to have a hard time figuring out what a particular piece of code does.
I usually start with point 1 in these situations...
Improving unit tests as executable specifications is the point of Behaviour-Driven Development: BDD is an evolution of TDD where unit tests use a Ubiquitous Language (a language based on the business domain and shared by the developers and the stakeholders) and expressive names (testCannotCreateDuplicateEntry) to describe what the code is supposed to do. Some BDD frameworks push the idea very far and show, for example, executable specifications written in almost natural language.
I would advise against any detailed documentation separate from the code. Why? Because whenever you need it, it will most likely be very outdated. The best place for detailed documentation is the code itself (including comments). By the way, anything you need to say about a specific unit test is very detailed documentation.
A few pointers on how to achieve well self-documented tests:
Follow a standard way to write all tests, like AAA pattern. Use a blank line to separate each part. That makes it much easier for the reader to identify the important bits.
You should include, in every test name: what is being tested, the situation under test and the expected behavior. For example: test__getAccountBalance__NullAccount__raisesNullArgumentException()
Extract out common logic into set up/teardown or helper methods with descriptive names.
Whenever possible use samples from real data for input values. This is much more informative than blank objects or made up JSON.
Use variables with descriptive names.
Think about your future you/teammate, if you remembered nothing about this, would you like any additional information when the test fails? Write that down as comments.
And to complement what other answers have said:
It's great if your customer/Product Owner/boss has a very good idea as to what should be tested and is eager to help, but unit tests are not the best place to do it. You should use acceptance tests for this.
Unit tests should cover specific units of code (methods/functions within classes/modules); if you cover more ground, they will quickly turn into integration tests, which are fine and needed too, but if you do not separate them explicitly, people will just get them confused and you will lose some of the benefits of unit testing. For example, when a unit test fails you should get instant bug detection (especially if you follow the naming convention above). When an integration test fails, you know there is a problem, and you know some of its effects, but you might need to debug, sometimes for a long time, to find out what it is.
You can use unit testing frameworks for integration tests if you want, but you should know you are not doing unit testing, and you should keep them in separate files/directories.
There are good acceptance/behavior testing frameworks (FitNesse, Robot, Selenium, Cucumber, etc.) that can help business/domain people not just read, but also write the tests themselves. Sure, they will need help from coders to get them to work (especially when starting out), but they will be able to do it, and they do not need to know anything about your modules, classes or functions.

Mocks or real classes? [duplicate]

Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit tests. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too, I think there is a balance with some mocks and some real objects you will have to find for yourself.
There is one really good reason why you want to use stubs/mocks instead of real classes: to keep the class under test in your (pure) unit test isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests will become sluggish. Slow tests make developers not run unit tests as often. If you're doing Test-Driven Development, time-hogging tests are a devastating waste of developers' time.
It will be easier to track down problems if the test is isolated to the class under test. In contrast to a system test it will be much more difficult to track down nasty bugs that are not apparently visible in stack traces or what not.
Tests are less fragile to changes in external classes/interfaces because you're purely testing the class that is under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against external behaviour of a class rather than the internal implementation which is more useful when deciding code design.
Now if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests; in practice they are mostly run before committing to your favorite code repository, but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from an answer of mine to the question "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets or serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than the effort to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry, as mocks generated from mocking frameworks can be given precise expectations about how they will be used, e.g. the number of methods called, the precise sequence and the parameters expected each time. Your real objects cannot do this for you, so you risk losing depth in your tests.
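For example, here is roughly what those precise expectations look like with a mocking framework (assuming NUnit and Moq; the IMessageChannel interface is hypothetical): the mock can pin down call counts and even the exact call sequence, which a real object cannot do for you.
using Moq;
using NUnit.Framework;

public interface IMessageChannel
{
    void Open();
    void Send(string message);
    void Close();
}

[TestFixture]
public class MessageChannelExpectationTests
{
    [Test]
    public void Can_pin_down_call_counts_and_ordering()
    {
        // Strict behavior plus a MockSequence makes the expected order enforceable.
        var channel = new Mock<IMessageChannel>(MockBehavior.Strict);

        var sequence = new MockSequence();
        channel.InSequence(sequence).Setup(c => c.Open());
        channel.InSequence(sequence).Setup(c => c.Send("hello"));
        channel.InSequence(sequence).Setup(c => c.Close());

        // Exercised directly here only to illustrate the expectations;
        // normally the class under test would make these calls.
        channel.Object.Open();
        channel.Object.Send("hello");
        channel.Object.Close();

        // The call count can be verified explicitly as well.
        channel.Verify(c => c.Send(It.IsAny<string>()), Times.Exactly(1));
    }
}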
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
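For illustration, a minimal sketch of such a fake, with hypothetical names: the interface is what production code depends on, and the in-memory implementation serves cooked data from a dictionary.
using System.Collections.Generic;

public interface ICustomerRepository
{
    string GetName(int customerId);
}

// The production implementation would talk to the database; this fake
// serves canned data from memory so tests can run even with the DB down.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, string> _customers = new Dictionary<int, string>
    {
        { 1, "Alice" },
        { 2, "Bob" }
    };

    public string GetName(int customerId)
    {
        string name;
        return _customers.TryGetValue(customerId, out name) ? name : null;
    }
}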
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use Interfaces and injection where it's wrong. Also if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.