In my last project, we had Unit Testing with almost 100% code coverage, and as a result we almost didn't have any bugs.
However, since Unit Testing must be White Box (you have to mock inner functions to get the result you want, so your tests need to know about the inner structure of your code), any time we changed the implementation of a function, we had to change the tests as well.
Note that we didn't change the logic of the functions, just the implementation.
It was very time-consuming and it felt as if we were working the wrong way.
Since we used all proper OOP guidelines (specifically Encapsulation), every time we changed the implementation we didn't have to change the rest of our code, but had to change the unit tests.
It felt as if we were serving the tests, instead of them serving us.
To prevent this, some of us argued that unit tests should be Black Box Testing.
That would be possible if we created one big mock of our entire Domain, with a stub for every function in every class in one place, and used it in every unit test.
Of course, if a specific test needs a specific inner function to be called (like making sure we write to the DB), we can override the relevant stub.
So, every time we change the implementation of a function (like adding or replacing a call to a helper function), we would only need to change our main big mock. Even if we do need to change some unit tests, it will still be much less than before.
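To illustrate the idea (not our actual code), here is a rough Java sketch of what such a shared, centrally stubbed domain could look like, assuming a Mockito-style mocking library; all the class and method names (DomainStubs, PricingService, CheckoutService, and so on) are invented for the example:

    import static org.mockito.Mockito.*;
    import org.junit.Test;

    // Hypothetical domain interfaces, named only for illustration.
    interface PricingService { double priceFor(String item); }
    interface OrderRepository { void save(String order); }

    // One central place holding default stubs for the whole domain.
    // When an implementation detail changes, only these defaults change,
    // not every individual unit test.
    class DomainStubs {
        final PricingService pricing = mock(PricingService.class);
        final OrderRepository orders = mock(OrderRepository.class);

        DomainStubs() {
            when(pricing.priceFor(anyString())).thenReturn(0.0);
        }
    }

    // Hypothetical class under test, receiving its collaborators from outside.
    class CheckoutService {
        private final PricingService pricing;
        private final OrderRepository orders;
        CheckoutService(PricingService pricing, OrderRepository orders) {
            this.pricing = pricing;
            this.orders = orders;
        }
        void checkout(String item) {
            pricing.priceFor(item);
            orders.save(item);
        }
    }

    public class CheckoutTest {
        @Test
        public void savesTheOrderWhenCheckingOut() {
            DomainStubs stubs = new DomainStubs();
            CheckoutService checkout = new CheckoutService(stubs.pricing, stubs.orders);

            checkout.checkout("book");

            // This test cares specifically about the DB write, so it verifies that
            // one interaction; everything else keeps the shared defaults.
            verify(stubs.orders).save("book");
        }
    }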
Others argue that unit tests must be White Box, since not only do you want to make sure your app writes to the DB in a specific place, you also want to make sure your app does not write to the DB anywhere else unless you specifically expect it to. While this is a valid point, I don't think it is worth the time of writing White Box tests instead of Black Box tests.
So in conclusion, two questions:
What do you think about the concept of Black Box Unit Testing?
What do you think about the way we want to implement that concept? Do you have better ideas?
You need different types of tests.
Unit tests, which should be white-box tests, as you did.
Integration tests (or system tests), which test the ability to use the actual implementations of your system and its communication with external layers (external systems, the database, etc.). These should be black-box style, but each one aimed at a specific feature (CRUD tests, for example).
Acceptance tests, which should be completely black-box and are driven by functional requirements (as your users would phrase them). End-to-end as much as possible, and without knowing the internals of your chosen implementations. The textbook definition of black-box tests.
And remember: code coverage is meaningless in most cases. You need high line coverage (or method coverage, whatever your counting method is), but that's usually not sufficient. The concept you need to think about is functional coverage: making sure all your requirements and logical paths are covered.
and as a result we almost didn’t have any bugs
If you were really able to achieve this, then I don't think you should change anything.
Black box testing might sound appealing on paper, but the truth is you almost always need to know parts of the inner workings of a tested class. The "provide input, verify output" approach works in reality only for simple cases. Most of the time your tests need at least some knowledge of the tested method - how it interacts with external collaborators, what methods it calls, in what order, and so forth.
The whole idea behind mocking and SOLID design is to avoid the situation where a change in a dependency's implementation causes other classes' tests to change or fail. Conversely, if you change the implementation details of a tested method, the implementation details of its tests should change too. That's nothing too uncommon.
Overall, if you were really able to achieve almost no bugs, then I would stick to that approach.
tl;dr version:
Black Box unit testing is exactly how unit testing should be done. Proper TDD practice does exactly this.
Full version.
There is absolutely no need to test the private methods of your objects. Skipping them will have no impact on code coverage, either.
When you TDD a class, you write tests that check the behavior of that class. Behavior is expressed through the public methods of that class. You should never bother with how those methods are really implemented. The Google people described that a lot better than I will ever be able to: http://googletesting.blogspot.ru/2013/08/testing-on-toilet-test-behavior-not.html
If you make the usual mistake and statically depend on other entity classes or, worse, on classes from a different layer of the application, it's inevitable that you will find yourself in a situation where you need to check a lot of things in your test and prepare a lot of stuff for it. The Dependency Injection principle and the Law of Demeter exist to solve this.
I think you should continue writing unit tests - just make them less fragile.
Unit tests should be low level but should test the result and not how things are done. When an implementation change causes a lot of test changes, it means that instead of testing requirements you're actually testing the implementation.
There are several rules of thumb - such as "don't test private methods" and "use mock objects".
Mocking/simulating the entire domain usually results in the opposite of what you're trying to accomplish - when the code's behavior changes you need to update the tests to make sure your "simulated objects" behave the same, and that becomes really hard really fast as the complexity of the project increases.
I suggest that you continue writing unit tests - just learn how to make them more robust and less fragile.
"as a result we almost didn’t have any bugs" -- so keep it that way.
The sole cause of frustration is the necessity to maintain unit tests, which actually is not such a bad thing (the alternative is much worse). Just make them more maintainable. "The Art of Unit Testing" by Roy Osherove gave me a good start in this direction.
So
1) Not an option. (The idea itself contradicts principles of TDD, for instance.)
2) You'll have much more maintenance trouble with such an approach. The unit testing philosophy is to carve the SUT out of the rest of the system and test it using stubs as inputs and mocks as outputs (signals?), simulating real-life situations (or maybe I just don't get the "one big mock of our entire Domain" idea).
For detailed information about black-, white- and grey-box testing and decision tables, refer to the following article, which explains everything.
Testing Web-based applications: The state of the art and future trends (PDF)
A few days ago I started getting interested in Unit Testing and TDD in C# and VS2010. I've read blog posts, watched YouTube tutorials, and plenty more stuff that explains why TDD and Unit Testing are so good for your code, and how to do them.
But the biggest problem I find is, that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations and problems with references and dependencies, but, for example, should I create a unit test for a string formatting that's supposed to be user input? Or is it just wasting my time when I can just check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said, if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or a variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample; you would be surprised how hard it is to spot copy-paste errors and how common they are:
    private Set<String> inclusions = new HashSet<String>();
    private Set<String> exclusions = new HashSet<String>();

    public void include(String item) {
        inclusions.add(item);
    }

    public void exclude(String item) {
        inclusions.add(item);
    }
On the other hand, testing the include() and exclude() methods alone is overkill because they do not represent any use-cases by themselves. However, they are probably part of some business use-case, which is what you should test instead.
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment. Testing generated getters/setters is also overkill. But it is this easy code that often breaks - all too often due to copy-paste errors or typos (especially in dynamic languages).
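For what it's worth, even a small behavioral test would catch the copy-paste error in the snippet above (exclude() adds to inclusions instead of exclusions). A JUnit 4 sketch; the isIncluded/isExcluded query methods are assumptions added for the example and are not part of the original code:

    import static org.junit.Assert.*;
    import org.junit.Test;
    import java.util.HashSet;
    import java.util.Set;

    // The snippet above, wrapped in a class plus two assumed query methods.
    class Filter {
        private Set<String> inclusions = new HashSet<String>();
        private Set<String> exclusions = new HashSet<String>();

        public void include(String item) { inclusions.add(item); }
        public void exclude(String item) { inclusions.add(item); } // the copy-paste bug, left in on purpose

        public boolean isIncluded(String item) { return inclusions.contains(item); }
        public boolean isExcluded(String item) { return exclusions.contains(item); }
    }

    public class FilterTest {
        @Test
        public void excludedItemsAreNotTreatedAsIncluded() {
            Filter filter = new Filter();
            filter.exclude("foo");

            // These assertions fail against the buggy exclude() above,
            // which is exactly how a behavioral test exposes the error.
            assertTrue(filter.isExcluded("foo"));
            assertFalse(filter.isIncluded("foo"));
        }
    }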
See also:
Mutation testing
Your first few TDD projects are probably going to result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code, and I unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you'll find that most of the critical code is in the models, and those should be fairly testable. (That's one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I have to rewrite it to make it more testable. If a piece of code is easy to get wrong or easy to break with future updates, it gets a test. I seldom test controllers and views, as it hasn't proven worth the trouble for me.
The way I see it all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test, and easy code. And of course, you should test the refactored easy code.
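A hedged sketch of that kind of split, with invented names: the logic worth testing is pulled into a small deterministic class, while the thin GUI/database wrapper is deliberately left untested.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical before: a method mixing database access, arithmetic and a
    // message box, which makes it hard to test directly. After the split, the
    // arithmetic lives here and is trivial to test.
    class TotalCalculator {
        double total(double[] amounts, double taxRate) {
            double sum = 0;
            for (double amount : amounts) {
                sum += amount * (1 + taxRate);
            }
            return sum;
        }
    }

    public class TotalCalculatorTest {
        @Test
        public void addsTaxToEachAmount() {
            double total = new TotalCalculator().total(new double[] { 100, 100 }, 0.10);
            assertEquals(220.0, total, 1e-9);
        }
    }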
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted/re-factored at will with no impact on the behavior of the system. If you were to test them, you'd create a hard dependency from your unit test to that code, which would prevent you from refactoring it. That's why you should not test anything but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string formatting that's supposed to be user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as performing IO) should be unit tested. This is probably the only case where I'd say non-public things that perform IO should also be unit tested. That's because I consider IO to be a public entry point. Anything that's an entry point for an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
write test (it must fail) RED
write code to satisfy test GREEN
refactor your code
So the question "What do I have to test?" has an answer like: "You have to write a test that corresponds to a feature or a particular requirement".
In this way you get maximum code coverage and also a better code design (remember that TDD also stands for Test Driven "Design").
Generally speaking, you have to test ALL public methods/interfaces.
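A minimal, made-up illustration of that red-green-refactor workflow in Java with JUnit (the production class exists only because the test demanded it):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // RED: this test is written first and fails because Greeter does not exist yet.
    public class GreeterTest {
        @Test
        public void greetsByName() {
            assertEquals("Hello, Ada", new Greeter().greet("Ada"));
        }
    }

    // GREEN: just enough production code to make the test pass.
    class Greeter {
        String greet(String name) {
            return "Hello, " + name;
        }
    }

    // REFACTOR: clean up names and duplication while the test stays green.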
should I create a unit test for a string formatting that's supposed to be user input? Or is it just wasting my time when I can just check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
I'm fairly green to unit testing and TDD, so please bear with me as I ask what some may consider newbie questions, or if this has been debated before. If this turns out to be considered a "bad question" (too subjective and open for debate), I will happily close it. However, I've searched for a couple of days and am not getting a definitive answer, and I need a better understanding of this, so I know no better way to get more info than to post here.
I've started reading an older book on unit testing (because a colleague had it on hand), and its opening chapter talks about why to unit test. One of the points it makes is that in the long run, your code is much more reliable and cleaner, and less prone to bugs. It also points out that effective unit testing will make tracking and fixing bugs much easier. So it seems to focus quite a bit on the overall prevention/reduction of bugs in your code.
On the other hand, I also found an article about writing great unit tests, and it states that the goal of unit testing is to make your design more robust, and conversely, finding bugs is the goal of manual testing, not unit testing.
So being the newbie to TDD that I am, I'm a little confused as to the state of mind with which I should go into TDD and building my unit tests. I'll admit that part of the reason I'm taking this on now with my recently started project is because I'm tired of my changes breaking previously existing code. And admittedly, the linked article above does at least point this out as an advantage of TDD. But my hope is that going back in and adding unit tests to my existing code (and then continuing TDD from this point forward) will help prevent these bugs in the first place.
Are this book and this article really saying the same thing in different tones, or is there some subjectivity on this subject, and what I'm seeing is just two people having somewhat different views on how to approach TDD?
Thanks in advance.
Unit tests and automated tests generally are for both better design and verified code.
A unit test should test some execution path in some very small unit. This unit is usually a public method or an internal method exposed on your object. The method itself can still use many other protected or private methods from the same object instance. You can have a single method and several unit tests for it, each testing a different execution path. (By execution path I mean something controlled by if, switch, etc.) Writing unit tests this way will validate that your code really does what you expect. This can be especially important in corner cases where you expect an exception to be thrown in some rare scenarios. You can also test how the method behaves if you pass different parameters - for example null instead of an object instance, a negative value for an integer used for indexing, etc. That is especially useful for a public API.
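A small, hypothetical JUnit 4 example of that style - one test per execution path, including the exceptional corner case:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Invented unit with a few execution paths (guard clause, if/else).
    class Discount {
        int apply(int price, int percent) {
            if (price < 0) {
                throw new IllegalArgumentException("negative price");
            }
            if (percent <= 0) {
                return price;                       // path 1: no discount
            }
            return price - (price * percent) / 100; // path 2: discounted
        }
    }

    public class DiscountTest {
        @Test
        public void returnsPriceUnchangedWhenPercentIsZero() {
            assertEquals(100, new Discount().apply(100, 0));
        }

        @Test
        public void subtractsThePercentageWhenPercentIsPositive() {
            assertEquals(80, new Discount().apply(100, 20));
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNegativePrices() {
            new Discount().apply(-1, 10);
        }
    }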
Now suppose that your tested method also uses instances of other classes. How to deal with it? Should you still test your single method and believe that class works? What if the class is not implemented yet? What if the class has some complex logic inside? Should you test these execution paths as well on your current method? There are two approaches to deal with this:
For some cases you will simply let the real class instance be tested together with your method. This is, for example, very common in the case of logging (it is not bad to have logs available for the test as well).
For other scenarios you would like to take these dependencies away from your method, but how do you do it? The solution is dependency injection and implementing against an abstraction instead of an implementation. What does that mean? It means that your method / class will not create instances of these dependencies; instead it will get them either through method parameters, the class constructor or class properties. It also means that you will not expect a concrete implementation but either an abstract base class or an interface. This allows you to pass a fake, dummy or mock implementation to your tested object. These special types of implementations don't do any real processing; they take some data and return an expected result. This allows you to test your method without its dependencies and leads to a much better and more extensible design.
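Here is a rough sketch of that idea in Java (all names invented): the class under test depends on an interface, receives it through the constructor, and the test passes in a tiny fake instead of the real implementation.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // The abstraction the production code is written against.
    interface ExchangeRateProvider {
        double rateFor(String currency);
    }

    // The class under test never creates its dependency; it is injected.
    class PriceConverter {
        private final ExchangeRateProvider rates;
        PriceConverter(ExchangeRateProvider rates) {
            this.rates = rates;
        }
        double toLocal(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    public class PriceConverterTest {
        @Test
        public void convertsUsingTheProvidedRate() {
            // A tiny fake: no real processing, just a canned answer.
            ExchangeRateProvider fakeRates = currency -> 2.0;

            assertEquals(20.0, new PriceConverter(fakeRates).toLocal(10.0, "EUR"), 1e-9);
        }
    }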
What is the disadvantage? Once you start using fakes / mocks you are testing a single method / class, but you don't have a test which grabs all the real implementations and puts them together to check whether the whole system really works: you can have thousands of unit tests validating that each of your methods works, but that doesn't mean they will work together. This is a scenario for more complex tests - integration or end-to-end tests.
Unit tests should usually be very easy to write - if they are not, it means that your design is probably complicated and you should think about refactoring. They should also be very fast to execute, so you can run them very often. Other kinds of tests can be more complex and very slow, and they should run mostly on the build server.
How does this fit into the software development process? The worst part of the development process is stabilization and bug fixing, because this part is very hard to estimate. To be able to estimate how much time a bug fix takes, you must know what causes the bug, but that investigation cannot be estimated. You can have a bug which takes one hour to fix, but you may spend two weeks debugging your application and searching for it. With good code coverage you will most probably find such a bug early during development.
Automated testing doesn't prove that the software contains no bugs. It only says that you did your best to find and solve them during development, and because of that your stabilization phase can be much less painful and much shorter. It also doesn't prove that your software does what it should - that is more about the application logic itself, which must be checked by separate tests going through each use case / user story - acceptance tests (they can also be automated).
How does this fit with TDD? TDD takes it to the extreme, because in TDD you write your tests first, to drive your quality, code coverage and design.
It's a false choice. "Find/minimize bugs" OR improve design.
TDD, in particular (and as opposed to "just" unit testing) is all about giving you better design.
And when your design is better, what are the consequences?
Your code is easier to read
Your code is easier to understand
Your code is easier to test
Your code is easier to reuse
Your code is easier to debug
Your code has fewer bugs in the first place
With well-designed code, you spend less time finding and fixing bugs, and more time adding features and polish. So TDD gives you a savings on bugs and bug-hunting, by giving you better design. These things are not separate; they are dependent and interrelated.
There can be many different reasons why you might want to test your code. Personally, I test for a number of reasons:
I usually design an API using a combination of the normal design patterns (top-down) and test-driven development (TDD; bottom-up) to ensure that I have a sound API both from a best-practices point of view and from an actual-usage point of view. The focus of the tests is both on the major use-cases for the API and on the completeness and behavior of the API - so they are primarily "black box" tests. The development sequence is often:
main API based on design patterns and "gut feeling"
TDD tests for the major use-cases according to the high-level specification for the API - primarily in order to make sure the API is "natural" and easy to use
fleshed out API and behavior
all the needed test cases to ensure the completeness and correct behavior
Whenever I fix an error in my code, I try to write a test to make sure it stays fixed. Somehow the error got into my original design and passed my original testing of the code, so it is probably not all that trivial. I have noticed that many of these tests are "white box" tests.
In order to be able to make any sort of major re-factoring of the code, you need an extensive set of API tests to make sure the behavior of the code stays the same after the re-factoring. For any non-trivial API, I want the test suite to be in place and working for a long time before the re-factoring, to be sure that all the major use-cases are covered in a good way. As often as not, you are forced to throw away most of your "white box" tests, as they - by their very definition - make too many assumptions about the internals. I usually try to "translate" as many of these tests as possible, since the same non-trivial problems tend to survive re-factoring of the code.
In order to transfer any code between developers, I usually also want a good test suite with focus on the API and the major use-cases. So basically the tests from the initial TDD...
I think that answer to your question is: both.
You will improve design because there is one particular thing about TDD that is great: while you write tests you put yourself in the position of the client code that will be using the system under test - and this alone makes you think about certain design choices.
For example: UI. When you start writing the tests, you will see that those God-Forms are impossible to test, so you separate the logic behind the screens into a presenter/controller, and you get MVP/MVC/whatever.
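A rough MVP-style sketch of that separation (names are made up): the presenter holds the logic and can be unit tested with a hand-rolled stub view, no form involved.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // The view is reduced to an interface the presenter talks to.
    interface LoginView {
        String enteredUserName();
        void showError(String message);
    }

    // All decision logic lives in the presenter, which has no UI dependency.
    class LoginPresenter {
        private final LoginView view;
        LoginPresenter(LoginView view) {
            this.view = view;
        }
        void onLoginClicked() {
            if (view.enteredUserName().isEmpty()) {
                view.showError("User name is required");
            }
        }
    }

    public class LoginPresenterTest {
        @Test
        public void reportsAnErrorWhenTheUserNameIsEmpty() {
            final String[] lastError = new String[1];
            // A hand-rolled stub standing in for the real form.
            LoginView stubView = new LoginView() {
                public String enteredUserName() { return ""; }
                public void showError(String message) { lastError[0] = message; }
            };

            new LoginPresenter(stubView).onLoginClicked();

            assertEquals("User name is required", lastError[0]);
        }
    }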
Having the concept of unit testing a class and mocking its dependencies brings you to the Single Responsibility Principle. A similar point can be made about every one of the SOLID principles.
As for bugs, well, if you unit test every method of every class you write (except properties, very simple methods and such), you will catch most bugs from the start. Add the integration tests, and you cover almost all of them.
I'll take my stab at this using a remix of a previous answer I wrote. In short, I don't see this as a dichotomy between driving good design and minimizing bugs. I see it more as one (good design) leading to the other (minimizing bugs).
I tend towards saying TDD is a design process that happens to involve unit testing. It's a design process because within each Red-Green-Refactor iteration, you write the test first for code that doesn't exist. You're designing as you're going.
The first beauty of TDD is that the design of your code is guaranteed to be testable. Testable code tends to have loose coupling and high cohesion. Loose coupling and high cohesion are important because they make the code easy to change when requirements change. The second beauty of TDD is that after you're done implementing your system, you happen to have a huge regression suite to catch any bugs and changes in assumptions. Thus, TDD makes your code easy to change because of the design it creates and it makes your code safe to change because of the test harness it creates.
Trying to retrospectively add unit tests can be quite painful and expensive. If the code doesn't lend itself to unit testing, you may be better off looking at integration tests to test your code.
Don't mix Unit Testing with TDD.
Unit Testing is just the act of "testing" your code to ensure quality and maintainability.
TDD is a full-blown development methodology in which you first write your tests (based on requirements), and only then write the code needed (and just the code needed) to make those tests pass. This means that you only write code to fix a broken test.
Once you've done that, you write another test, and the code needed to make it pass. Along the way, you may be forced to refactor the code to allow a new test to run without breaking another. This way, the "design" arises from the tests.
The purpose of this methodology is of course to reduce bugs and improve design, but its main goal is to improve productivity, because you write exactly the code you need. And you don't write documentation: the tests are the documentation. If a requirement changes, you change the tests and then the code. If new requirements appear, just add new tests.
In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented.
So what do you think is the best approach to counter this?
1) Implement your stubs/mocks fully; this has the negative side effect of potentially making your test less readable and also of specifying more than necessary to make your test pass.
2) Favor manual, fully implemented fakes.
3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.
I do not think you should favor manual testing - unless you prefer to test instead of code.
Instead you have another option: if you test the functionality and not the implementation, avoid testing private methods (which can be refactored), and in general write less fragile tests, you'll see that using a mocking/isolation framework does not require you to over-specify the system, nor does it cause your tests to become more fragile.
In a nutshell - fragile tests can be written with or without fakes/mocks, and vice versa.
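To illustrate the difference with a made-up example (Mockito-style syntax): the first test pins down exactly how the collaborator is used and breaks on harmless refactorings; the second stubs only what it needs and asserts on the outcome.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    interface TaxService { double rateFor(String region); }

    class InvoiceCalculator {
        private final TaxService taxes;
        InvoiceCalculator(TaxService taxes) { this.taxes = taxes; }
        double total(double net, String region) { return net * (1 + taxes.rateFor(region)); }
    }

    public class InvoiceCalculatorTest {

        // Over-specified: verifies the exact call count and argument, so an internal
        // refactoring (caching the rate, looking it up twice, ...) breaks the test
        // even though the result is still correct.
        @Test
        public void overSpecifiedVersion() {
            TaxService taxes = mock(TaxService.class);
            when(taxes.rateFor("EU")).thenReturn(0.2);

            assertEquals(120.0, new InvoiceCalculator(taxes).total(100.0, "EU"), 1e-9);

            verify(taxes, times(1)).rateFor("EU"); // implementation detail
            verifyNoMoreInteractions(taxes);       // implementation detail
        }

        // Just enough: stub what the test needs, assert only on the behavior.
        @Test
        public void behaviorOnlyVersion() {
            TaxService taxes = mock(TaxService.class);
            when(taxes.rateFor(anyString())).thenReturn(0.2);

            assertEquals(120.0, new InvoiceCalculator(taxes).total(100.0, "EU"), 1e-9);
        }
    }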
I tend to use mocks instead of stubbed/fake objects. I find them a lot less trouble and they are way better at keeping test code under control because it's not cluttered with all sorts of half baked implementations. They also help to clarify what is being tested.
Another advantage is that I only have to address where the class under test needs something specific from the mock, so I don't have to code where it's not important. As for verification, again I only have to verify the calls from the class under test to the mock that I care about and consider important aspects of the test.
I think the problem is always the same, although it comes in different flavours: if you have tests that somehow cover the internals of a class, then you will break the tests that cover that internal code.
IMHO there are two ways to deal with that:
Your tests only cover the public contract of a class - a test strategy which is widely adopted for exactly that reason: you don't have to change your tests as long as the public contract remains constant. Unfortunately, this is not what you will have when doing test-driven development.
If your tests come from a TDD process, then they will regularly cover non-public code. This means that they will break if you change the code. The only way to keep things in sync here is to 'fix' the tests together with the code. This means more maintenance during development. There's no recipe to easily deal with that (other than throw away the test, of course...).
My personal 'way out' is to think in terms of 'code elements' rather than just code. A code element consists of three parts: documentation, test, code. So if you change one part of the element, you have to adjust the other two as well - otherwise you leave a broken code element behind.
Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit tests. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my red-green-refactor cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example, if you have an object that just has properties and accessors, you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too; I think there is a balance of some mocks and some real objects that you will have to find for yourself.
There is one really good reason to use stubs/mocks instead of real classes: to keep the class under test in your unit test (a pure unit test) isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests will become sluggish. Slow tests make developers not run unit tests as often. If you're doing Test Driven Development, time-hogging tests are a devastating waste of developers' time.
It will be easier to track down problems if the test is isolated to the class under test. With a system test, by contrast, it is much more difficult to track down nasty bugs that are not readily visible in stack traces or the like.
Tests are less fragile with respect to changes made in external classes/interfaces, because you're purely testing the class that is under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against the external behaviour of a class rather than the internal implementation, which is more useful when deciding on code design.
Now if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for the purpose of validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests; in practice they are mostly run before committing to your favorite code repository, but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from an answer of mine to "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets, serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than the effort to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry, as mocks generated from mocking frameworks can be given precise expectations about how they will be used, e.g. the number of methods called, the precise sequence and the parameters expected each time. Your real objects cannot do this for you, so you risk losing depth in your tests.
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world one (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes didn't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
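A short sketch of that kind of fake, using invented names: an in-memory implementation of a data access interface, backed by a HashMap of cooked data.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;
    import java.util.HashMap;
    import java.util.Map;

    // The data access abstraction the production code is written against.
    interface CustomerDao {
        String findNameById(int id);
    }

    // Fake implementation with cooked data in memory; no database needed.
    class InMemoryCustomerDao implements CustomerDao {
        private final Map<Integer, String> data = new HashMap<Integer, String>();
        InMemoryCustomerDao() {
            data.put(1, "Ada Lovelace");
            data.put(2, "Alan Turing");
        }
        public String findNameById(int id) {
            return data.get(id);
        }
    }

    public class GreetingServiceTest {
        // Hypothetical class under test that normally talks to the real database.
        static class GreetingService {
            private final CustomerDao dao;
            GreetingService(CustomerDao dao) { this.dao = dao; }
            String greetingFor(int id) { return "Hello, " + dao.findNameById(id); }
        }

        @Test
        public void greetsTheCustomerByName() {
            GreetingService service = new GreetingService(new InMemoryCustomerDao());
            assertEquals("Hello, Ada Lovelace", service.greetingFor(1));
        }
    }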
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use Interfaces and injection where it's wrong. Also if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.
When I originally was introduced to Mocks I felt the primary purpose was to mock up objects that come from external sources of data. This way I did not have to maintain an automated unit testing test database, I could just fake it.
But now I am starting to think of it differently. I am wondering if mocks are more effectively used to completely isolate the tested method from anything outside of itself. The image that keeps coming to mind is the backdrop you use when painting: you want to keep the paint from getting all over everything. I am only testing that method, and I only want to know how it reacts to these faked-up external factors.
It seems incredibly tedious to do it this way, but the advantage I am seeing is that when a test fails, it is because the method itself is broken, not something 16 layers down. But now I have to have 16 tests to get the same test coverage, because each piece is tested in isolation. Plus, each test becomes more complicated and more deeply tied to the method it is testing.
It feels right to me but it also seems brutal so I kind of want to know what others think.
I recommend you take a look at Martin Fowler's article Mocks Aren't Stubs for a more authoritative treatment of Mocks than I can give you.
The purpose of mocks is to unit test your code in isolation from its dependencies so you can truly test a piece of code at the "unit" level. The code under test is the real deal, and every other piece of code it relies on (via parameters or dependency injection, etc.) is a "mock" (an empty implementation that always returns expected values when one of its methods is called).
Mocks may seem tedious at first, but they make Unit Testing far easier and more robust once you get the hang of using them. Most languages have Mock libraries which make mocking relatively trivial. If you are using Java, I'll recommend my personal favorite: EasyMock.
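A small EasyMock-flavored sketch of that setup; the RateSource/Quote classes are invented for the example, so treat it as an illustration rather than project code.

    import static org.easymock.EasyMock.*;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical collaborator of the code under test.
    interface RateSource {
        double currentRate(String symbol);
    }

    // Hypothetical code under test; only this class is "the real deal" in the test.
    class Quote {
        private final RateSource source;
        Quote(RateSource source) { this.source = source; }
        double priceOf(String symbol, int quantity) {
            return source.currentRate(symbol) * quantity;
        }
    }

    public class QuoteTest {
        @Test
        public void multipliesTheRateByTheQuantity() {
            // The mock stands in for the dependency and returns a canned value.
            RateSource source = createMock(RateSource.class);
            expect(source.currentRate("ACME")).andReturn(3.0);
            replay(source);

            assertEquals(30.0, new Quote(source).priceOf("ACME", 10), 1e-9);

            verify(source); // the expected interaction actually happened
        }
    }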
Let me finish with this thought: you need integration tests too, but having a good volume of unit tests helps you find out which component contains a bug, when one exists.
Don't go down the dark path Master Luke. :) Don't mock everything. You could but you shouldn't... here's why.
If you continue to test each method in isolation, you have surprises and work cut out for you when you bring them all together à la the BIG BANG. We build objects so that they can work together to solve a bigger problem; by themselves they are insignificant. You need to know whether all the collaborators are working as expected.
Mocks make tests brittle by introducing duplication - yes, I know that sounds alarming. For every mock expectation you set up, there are n places where your method signature exists: the actual code and your mock expectations (in multiple tests). Changing the actual code is easier... updating all the mock expectations is tedious.
Your test is now privy to insider implementation information, so your test depends on how you chose to implement the solution... bad. Tests should be an independent spec that can be met by multiple solutions. I should have the freedom to just press delete on a block of code and reimplement it without having to rewrite the test suite, because the requirements still stay the same.
To close, I'll say "if it quacks like a duck, walks like a duck, then it probably is a duck" - if it feels wrong, it probably is. Use mocks to abstract out problem children like IO operations, databases, third-party components and the like. Like salt, some of it is necessary... too much and :x
This is the holy war of state-based vs. interaction-based testing. Googling will give you deeper insight.
Clarification: I'm hitting some resistance w.r.t. integration tests here :) So to clarify my stand..
Mocks do not figure in the 'Acceptance tests'/Integration realm. You'll only find them in the Unit Testing world.. and that is my focus here.
Acceptance tests are different and are very much needed - not belittling them. But Unit tests and Acceptance tests are different and should be kept different.
All collaborators within a component or package do not need to be isolated from each other. Like micro-optimization, that is overkill. They exist to solve a problem together... cohesion.
Yes, I agree. I see mocking as sometimes painful, but often necessary, for your tests to truly become unit tests, i.e. only the smallest unit that you can make your test concerned with is under test. This allows you to eliminate any other factors that could potentially affect the outcome of the test. You do end up with a lot more small tests, but it becomes so much easier to work out where a problem is with your code.
My philosophy is that you should write testable code to fit the tests,
not write tests to fit the code.
As for complexity, my opinion is that tests should be simple to write, simply because you write more tests if they are.
I might agree that this could be a good idea if the classes you're mocking don't have a test suite, because if they did have a proper test suite, you would know where the problem is without isolation.
Most of the times I've had a use for mock objects, the code I'm writing tests for was so tightly coupled (read: badly designed) that I had to write mock objects because the classes it depended on were not available. Sure, there are valid uses for mock objects, but if your code requires their usage, I would take another look at the design.
Yes, that is the downside of testing with mocks. There is a lot of work that you need to put in, so it feels brutal. But that is the essence of unit testing. How can you test something in isolation if you don't mock external resources?
On the other hand, you're mocking away slow functionality (such as databases and I/O operations). If the tests run faster, that will keep programmers happy. There is nothing much more painful than waiting for really slow tests that take more than 10 seconds to finish running while you're trying to implement one feature.
If every developer in your project spent time writing unit tests, then those 16 layers (of indirection) wouldn't be that much of a problem. Hopefully you should have that test coverage from the beginning, right? :)
Also, don't forget to write functional/integration tests between objects in collaboration; otherwise you might miss something. These tests won't need to be run often, but they are still important.
On one scale, yes, mocks are meant to be used to simulate external data sources such as a database or a web service. On a more finely grained scale, however, if you're designing loosely coupled code then you can draw lines throughout your code almost arbitrarily as to what might at any point be an 'outside system'. Take a project I'm working on currently:
When someone attempts to check in, the CheckInUi sends a CheckInInfo object to a CheckInMediator object which validates it using a CheckInValidator, then if it is ok, it fills a domain object named Transaction with CheckInInfo using CheckInInfoAdapter then passes the Transaction to an instance of ITransactionDao.SaveTransaction() for persistence.
I am right now writing some automated integration tests, and obviously the CheckInUi and ITransactionDao are windows onto external systems, so they're the ones which should be mocked. However, who's to say that at some point CheckInValidator won't be making a call to a web service? That is why when you write unit tests you assume that everything other than the specific functionality of your class is an external system. Therefore, in my unit test of CheckInMediator I mock out all the objects that it talks to.
EDIT: Gishu is technically correct - not everything needs to be mocked. I don't, for example, mock CheckInInfo, since it is simply a container for data. However, anything that you could ever see as an external service (and that is almost anything that transforms data or has side-effects) should be mocked.
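To make that concrete, here is a rough Mockito-style sketch of such a unit test for CheckInMediator. The interfaces and method signatures below are guesses based on the description above, not the project's real API.

    import static org.mockito.Mockito.*;
    import org.junit.Test;

    // Assumed shapes of the collaborators described above; illustrative only.
    class CheckInInfo { }
    class Transaction { }
    interface CheckInValidator { boolean isValid(CheckInInfo info); }
    interface CheckInInfoAdapter { Transaction toTransaction(CheckInInfo info); }
    interface ITransactionDao { void SaveTransaction(Transaction transaction); }

    // Assumed mediator wiring, mirroring the flow described in the text.
    class CheckInMediator {
        private final CheckInValidator validator;
        private final CheckInInfoAdapter adapter;
        private final ITransactionDao dao;
        CheckInMediator(CheckInValidator validator, CheckInInfoAdapter adapter, ITransactionDao dao) {
            this.validator = validator;
            this.adapter = adapter;
            this.dao = dao;
        }
        void checkIn(CheckInInfo info) {
            if (validator.isValid(info)) {
                dao.SaveTransaction(adapter.toTransaction(info));
            }
        }
    }

    public class CheckInMediatorTest {
        @Test
        public void persistsATransactionWhenTheCheckInIsValid() {
            // Everything the mediator talks to is mocked; only the mediator is real.
            CheckInValidator validator = mock(CheckInValidator.class);
            CheckInInfoAdapter adapter = mock(CheckInInfoAdapter.class);
            ITransactionDao dao = mock(ITransactionDao.class);

            CheckInInfo info = new CheckInInfo();
            Transaction transaction = new Transaction();
            when(validator.isValid(info)).thenReturn(true);
            when(adapter.toTransaction(info)).thenReturn(transaction);

            new CheckInMediator(validator, adapter, dao).checkIn(info);

            verify(dao).SaveTransaction(transaction);
        }
    }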
An analogy that I like is to think of a properly loosely coupled design as a field with people standing around it playing a game of catch. When someone is passed the ball, he might throw a completely different ball to the next person, he might even throw multiple balls in succession to different people, or throw a ball and wait to receive it back before throwing it to yet another person. It is a strange game.
Now as their coach and manager, you of course want to check how your team works as a whole so you have team practice (integration tests) but you also have each player practice on his own against backstops and ball-pitching machines (unit tests with mocks). The only piece that this picture is missing is mock expectations and so we have our balls smeared with black tar so they stain the backstop when they hit it. Each backstop has a 'target area' that the person is aiming for and if at the end of a practice run there is no black mark within the target area you know that something is wrong and the person needs his technique tuned.
Really take the time to learn it properly, the day I understood Mocks was a huge a-ha moment. Combine it with an inversion of control container and I'm never going back.
On a side note, one of our IT people just came in and gave me a free laptop!
As someone said before, if you mock everything to isolate at a finer granularity than the class you are testing, you give up enforcing cohesion in the code that is under test.
Keep in mind that mocking has a fundamental advantage: behavior verification. This is something that stubs don't provide, and it is also the other reason that makes the tests more brittle (but can improve code coverage).
Mocks were invented in part to answer the question: How would you unit test objects if they had no getters or setters?
These days, recommended practice is to mock roles not objects. Use Mocks as a design tool to talk about collaboration and separation of responsibilities, and not as "smart stubs".
Mock objects are 1) often used as a means to isolate the code under test, BUT 2) as keithb already pointed out, are important to "focus on the relationships between collaborating objects". This article gives some insights and history related to the subject: Responsibility Driven Design with Mock Objects.