Excluding code from test coverage - unit-testing

Wherever possible I use TDD:
I mock out my interfaces
I use IoC so my mocked objects can be injected
I ensure my tests run, coverage increases, and I am happy.
then...
I create derived classes that actually do stuff, like going to a database, or writing to a message queue etc.
This is where code coverage decreases - and I feel sad.
But then, I liberally spread [CoverageExclude] over these concrete classes and coverage goes up again.
But then instead of feeling sad, I feel dirty. I somehow feel like I'm cheating even though it's not possible to unit-test the concrete classes.
I'm interested in hearing how your projects are organised, i.e. how do you physically arrange code that can be tested against code that can't be tested.
I'm thinking that perhaps a nice solution would be to separate out untestable concrete types into their own assembly and then ban the use of [CoverageExclude] in the assemblies that do contain testable code. This'd also make it easier to create an NDepend rule to fail the build when this attribute is incorrectly found in the testable assemblies.
Edit: the essence of this question touches on the fact that you can test the things that USE your mocked interfaces but you can't (or shouldn't!) UNIT-test the objects that ARE the real implementations of those interfaces. Here's an example:
public void ApplyPatchAndReboot()
{
    _patcher.ApplyPatch();
    _rebooter.Reboot();
}
patcher and rebooter are injected in the constructor:
public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter)...
The unit test looks like:
public void should_reboot_the_system()
{
    ... new SystemUpdater(mockedPatcher, mockedRebooter);
    update.ApplyPatchAndReboot();
}
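(For concreteness, the elided setup might look roughly like this with Moq and NUnit; the mocking framework and attributes are an assumption, not something stated above:)

using Moq;
using NUnit.Framework;

public class SystemUpdaterTests
{
    [Test]
    public void should_reboot_the_system()
    {
        // hypothetical mock setup; only SystemUpdater, IApplyPatches and
        // IRebootTheSystem come from the question itself
        var patcher = new Mock<IApplyPatches>();
        var rebooter = new Mock<IRebootTheSystem>();
        var update = new SystemUpdater(patcher.Object, rebooter.Object);

        update.ApplyPatchAndReboot();

        rebooter.Verify(r => r.Reboot(), Times.Once());
    }
}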
This works fine - my UNIT-TEST coverage is 100%. I now write:
public class ReallyRebootTheSystemForReal : IRebootTheSystem
{
    // ... call some API to really (REALLY!) reboot
}
My UNIT-TEST coverage goes down and there's no way to UNIT-TEST the new class. Sure, I'll add a functional test and run it when I've got 20 minutes to spare(!).
So, I suppose my question boils down to the fact that it's nice to have near 100% UNIT-TEST coverage. Said another way, it's nice to be able to unit-test near 100% of the behaviour of the system. In the above example, the BEHAVIOUR of the patcher should reboot the machine. This we can verify for sure. The ReallyRebootTheSystemForReal type isn't strictly just behaviour - it has side effects, which means it can't be unit-tested. Since it can't be unit-tested, it affects the test-coverage percentage. So,
Does it matter that these things reduce the unit-test coverage percentage?
Should they be segregated into their own assemblies where people expect 0% UNIT-TEST coverage?
Should concrete types like this be so small (in Cyclomatic Complexity) that a unit test (or otherwise) is superfluous?

You are on the right track. Some of the concrete implementations you probably can test, such as Data Access Components. Automated testing against a relational database is most certainly possible, but should also be factored out into its own library (with a corresponding unit test library).
Since you are already using Dependency Injection, it should be a piece of cake for you to compose such a dependency back into your real application.
On the other hand, there will also be concrete dependencies that are essentially un-testable (or de-testable, as Fowler once joked). Such implementations should be kept as thin as possible. Often, it is possible to design the API that such a Dependency exposes in such a way that all the logic happens in the consumer, and the complexity of the real implementation is very low.
Implementing such concrete Dependencies is an explicit design decision, and when you make that decision, you simultaneously decide that such a library should not be unit tested, and thus code coverage should not be measured.
Such a library is called a Humble Object. It (and many other patterns) are described in the excellent xUnit Test Patterns.
As a rule of thumb I accept that code is untested if it has a Cyclomatic Complexity of 1. In that case, it's more or less purely declarative. Pragmatically, untestable components are in order as long as they have low Cyclomatic Complexity. How low 'low' is you must decide for yourself.
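To make this concrete, here is a minimal sketch of such a Humble Object for the reboot example from the question (the class name and the shutdown call are assumptions; IRebootTheSystem comes from the question):

using System.Diagnostics;

// The concrete implementation is a single pass-through call with a
// cyclomatic complexity of 1 - no decision logic worth unit-testing.
public class WindowsRebooter : IRebootTheSystem
{
    public void Reboot()
    {
        Process.Start("shutdown", "/r /t 0");   // hypothetical way to reboot the machine
    }
}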
In any case, [CoverageExclude] seems like a smell to me (I didn't even know it existed before I read your question).

I don't understand how your concrete classes are untestable. This smells horrible to me.
If you have a concrete class that is writing to a message queue, you should be able to pass it a mock queue and test all its methods just fine. If your class is going to a database, then you should be able to hand it a mock database to go to.
There can be situations that might lead to untestable code, I won't deny that - but that should be the exception, not the rule. All your concrete class worker objects? Something isn't right.

To expand on womp's answer: I suspect you are considering more to be "untestable" than really is the case. Untestable in the strict "one unit at a time" sense, without testing any of the dependencies simultaneously? Sure. But that should be easily achievable with slower and more infrequently run integration-style tests.
You mention accessing a database and writing messages to queues. As womp mentions, you can feed them mock databases and mock queues during unit testing, and test the actual concrete behaviour in integration tests. Personally I don't see anything wrong with testing concrete implementations directly as unit tests either, at least when they are not remote (or legacy). Sure, they run a bit slower, but hey, at least they get covered by automated tests.
Would you put a system into production that writes messages to queues without ever having tested that the messages actually end up in the physical/logical queue? I wouldn't.
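For example, such an integration-style test might look roughly like this (a sketch assuming MSMQ via System.Messaging and NUnit; the queue path and message are invented):

using System;
using System.Messaging;
using NUnit.Framework;

[TestFixture]
[Category("Integration")]
public class MessageQueueIntegrationTests
{
    private const string Path = @".\Private$\integration-tests";

    [Test]
    public void Message_sent_to_the_queue_can_be_read_back()
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
        {
            queue.Purge();                                   // start from a known state
            queue.Send("hello");

            var received = queue.Receive(TimeSpan.FromSeconds(5));
            received.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            Assert.AreEqual("hello", (string)received.Body);
        }
    }
}

It runs slower and needs a real queue, which is exactly why it belongs in an integration suite rather than in the unit tests.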

Related

Unit Testing Classes VS Methods

When unit testing, is it better practice to test a class or individual methods?
Most of the examples I've seen, test the class apart from other classes, mocking dependencies between classes. Another method I've played around w/ is mocking methods you're not testing (by overriding) so that you're only testing the code in one method. Thus 1 bug breaks 1 test since the methods are isolated from each other.
I was wondering if there is a standard method and if there are any big disadvantages to isolating each method for testing as opposed to isolating classes.
The phrase unit testing comes from hardware systems testing, and is more or less semantics-free when applied to software. It can get used for anything from isolation testing of a single routine to testing a complete system in headless mode with an in-memory database.
So don't trust anyone who argues that the definition implies there is only one way to do things independently of context; there are a variety of ways, some of which are sometimes more useful than others. And presumably every approach a smart person would argue for has at least some value somewhere.
The smallest unit of hardware is the atom, or perhaps some subatomic particle. Some people test software like they were scanning each atom to see if the laws of quantum mechanics still held. Others take a battleship and see if it floats.
Something in between is very likely better. Once you know something about the kind of thing you are producing beyond 'it is software', you can start to come up with a plan that is appropriate to what you are supposed to be doing.
The point of unit testing is to test a unit of code, i.e. a class.
This gives you confidence that that part of the code, on its own, is doing what is expected.
This is also the first part of the testing process. It helps to catch those pesky bugs as early as possible and having a unit test to demonstrate it makes it easier to fix that further down the line.
Unit testing by definition is testing the smallest piece of written code you can. "Units" are not classes they are methods.
Every public method should have at least one unit test that tests that method specifically.
If you follow the rule above, you will eventually get to where class interactions are being covered. As long as you write 1 test per method, you will cover class interaction as well.
There is probably no one standard answer. Unit tests are for the developer (or they should be), do what is most helpful to you.
One downside of testing individual methods is you may not test the actual behavior of the object. If the mocking of some methods is not accurate that may go undetected. Also mocks are a lot of work, and they tend to make the tests very fragile, because they make the tests care a lot about what specific method calls take place.
In my own code I try whenever possible to separate infrastructure-type dependencies from business logic so that I can write tests of the business logic classes entirely without mocks. If you have a nasty legacy code base it probably makes more sense to test individual methods and mock any collaborating methods of the object, in order to insulate the parts from each other.
Theoretically objects are supposed to be cohesive so it would make sense to test them as a whole. In practice a lot of things are not particularly object-oriented. In some cases it is easier to mock collaborator methods than it is to mock injected dependencies that get called by the collaborators.

Black Box Unit Testing

In my last project, we had Unit Testing with almost 100% cc, and as a result we almost didn’t have any bugs.
However, since Unit Testing must be White Box (you have to mock inner functions to get the result you want, so your tests need to know about the inner structure of your code) any time we changed the implementation of a function, we had to change the tests as well.
Note that we didn't change the logic of the functions, just the implementation.
It was very time-consuming and it felt as if we were working the wrong way.
Since we used all proper OOP guidelines (specifically Encapsulation), every time we changed the implementation we didn't have to change the rest of our code, but had to change the unit tests.
It felt as if we were serving the tests, instead of them serving us.
To prevent this, some of us argued that unit tests should be Black Box Testing.
That would be possible if we create one big mock of our entire Domain and create a stub for every function in every class in one place, and use it in every unit test.
Of course, if a specific test needs a specific inner function to be called (like making sure we write to the DB), we can override our stub.
So, every time we change the implementation of a function (like adding or replacing a call to a help function) we will only need to change our main big mock. Even if we do need to change some unit tests, it will still be much less than before.
Others argue that unit tests must be White Box, since not only do you want to make sure your app writes to the DB in a specific place, you want to make sure your app does not write to the DB anywhere else unless you specifically expect it to. While this is a valid point, I don't think it's worth the time of writing White Box tests instead of Black Box tests.
So in conclusion, two questions:
What do you think about the concept of Black Box Unit Testing?
What do you think about the way we want to implement that concept? Do you have better ideas?
You need different types of tests.
Unit tests, which should be white-box tests, as you did
Integration tests (or system tests), which exercise the actual implementations of your system and its communication with external layers (external systems, database, etc.). These should be black-box styled, but each one targets a specific feature (CRUD tests, for example)
Acceptance tests which should be completely black-box and are driven by functional requirements (as your users would phrase them). End-to-end as much as possible, and not knowing the internal of your chosen implementations. The textbook definition of black-box tests.
And remember, code coverage is meaningless in most cases. You need high line coverage (or method coverage, whatever your counting unit is), but that's usually not sufficient. The concept you need to think about is functional coverage: making sure all your requirements and logical paths are covered.
and as a result we almost didn’t have any bugs
If you were really able to achieve this, then I don't think you should change anything.
Black box testing might sound appealing on paper, but the truth is you almost always need to know parts of the inner workings of a tested class. The "provide input, verify output" approach in reality works only for simple cases. Most of the time your tests need to have at least some knowledge of the tested method - how it interacts with external collaborators, what methods it calls, in what order, and so forth.
The whole idea behind mocking and SOLID design is to avoid the situation where a change in a dependency's implementation causes test changes/failures in other classes. Conversely, if you change the implementation details of the tested method, the implementation details of its tests will have to change too. That's nothing too uncommon.
Overall, if you were really able to achieve almost no bugs, then I would stick to that approach.
tl;dr version:
Black Box unit testing is exactly how unit testing should be done. Proper TDD practice does exactly this.
Full version.
There is absolutely no need to test the private methods of your objects. That has no impact on code coverage either.
When you TDD a class, you write tests that check the behavior of that class. Behavior is expressed through the public methods of that class. You should never bother with how those methods are really implemented. Google people described that a lot better than I will ever be able to: http://googletesting.blogspot.ru/2013/08/testing-on-toilet-test-behavior-not.html
If you make the usual mistake and statically depend on other entity classes, or worse, on classes from a different layer of the application, you will inevitably find yourself in a situation where you need to check a lot of things in your test and prepare a lot of stuff for it. The Dependency Injection principle and the Law of Demeter exist to solve this.
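As a small illustration of testing behaviour rather than implementation (the Order class and its methods are invented purely for this sketch):

using NUnit.Framework;

public class Order
{
    public decimal Total { get; private set; }
    public Order(decimal total) { Total = total; }
    public void ApplyDiscount(decimal amount) { Total -= amount; }
}

public class OrderTests
{
    // The test asserts the observable result (the new total) and says nothing
    // about which internal methods were called to produce it.
    [Test]
    public void Discount_is_applied_to_order_total()
    {
        var order = new Order(100m);

        order.ApplyDiscount(10m);

        Assert.AreEqual(90m, order.Total);
    }
}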
I think you should continue writing unit tests - just make them less fragile.
Unit tests should be low level but should test the result and not how things are done. When implementation changes cause a lot of test changes, it means that instead of testing requirements you're actually testing the implementation.
There are several rules of thumb - such as "don't test private methods" and "use mock objects".
Mocking/simulating the entire domain usually results in the opposite of what you're trying to accomplish - when the code behavior changes you need to update the tests to make sure that your "simulated objects" behave the same - and this becomes really hard really fast as the complexity of the project increases.
I suggest that you continue writing unit tests - just learn how to make them more robust and less fragile.
"as a result we almost didn’t have any bugs" -- so keep it that way.
The sole cause of frustration is the necessity to maintain unit tests, which actually is not such a bad thing (the alternative is much worse). Just make them more maintainable. "The Art of Unit Testing" by Roy Osherove gave me a good start in this direction.
So:
1) Not an option. (The idea itself contradicts the principles of TDD, for instance.)
2) You'll have much more maintenance trouble with such an approach. The unit testing philosophy is to cut the SUT out of the rest of the system and test it using stubs as input and mocks as output (signals?), simulating real-life situations (or maybe I just don't get the "one big mock of our entire Domain" idea).
For detailed information about black, white and grey box and decision tables refer to the following article, which explains everything.
Testing Web-based applications: The state of the art and future trends (PDF)

Interfaces and unit tests - always white-box testing?

I have finally got in my mind what worried me about Dependency Injection and similar techniques that should make unit tests easier. Let's take this example:
public interface IRepository
{
    Item Find();
    // ... a lot of other methods here
}
[Test]
public void Test()
{
    var repository = Mock<IRepository>();
    repository.Expect(x => x.Find());
    var service = new Service(repository);
    service.ProcessWithItem();
}
Now, what's wrong with the code above? It's that our test roughly peeks into the ProcessWithItem() implementation. What if it wants to do "from x in GetAll() where x..." - but no, our test knows what is going to happen there. And that's just a simple example. Imagine a few calls that our test is now tied to, and when we want to change from GetAll() to a better GetAllFastWithoutStuff() inside the method... our test(s) are broken. Please change them. A lot of crappy work that happens so often without any real need.
And that's what often makes me stop writing tests. I just don't see how I can test without knowing implementation details. And knowing them, tests become very fragile and a pain to write.
Sure, it's not about interface (or DI) only. POCOs (and POJOs, why not) also suffer from the same thing, but they're now tied with the data, not with the interface. But the principle is the same - our final assertion is tightly coupled with our knowledge of what our SUT is going to do. "Yes you HAVE to provide this field, sir, and this better be of this value".
As a consequence, tests ARE going to fail - soon and often. This is pain. And the problem.
Are there any techniques to deal with this? AutoMockingContainer (which basically takes care of ALL methods and nested DI hierarchies) looks promising, but with its own drawbacks. Anything else?
Dependency Injection, per se, would let you inject an implementation of IRepository that accepts whatever calls are made on it, checks that the invariants and preconditions are satisfied, and returns results satisfying the postconditions. When you choose to inject a mock object that has very specific expectations for what methods will be called, then yes, you're doing highly implementation-specific testing -- but Dependency Injection is totally innocent in the matter, since it never dictates WHAT you should inject; rather, your beef appears to be with Mocking -- in fact, specifically the somewhat-automated mocking approach that you have chosen to use, which is one based on very specific expectations.
Mocking with very specific expectations IS indeed useful for white-box testing only. Depending on the tools / frameworks / libraries you're using (and you're not even specifying the exact programming language in a tag, so I assume your question is totally open ended) you may be able to specify the degrees of freedom allowed (these calls are allowed to come in any orders, these arguments must only satisfy the following preconditions, etc, etc). However, I don't know of an automated tool to perform exactly what you need for opaque-box testing, which is the "generic, tolerant implementation of yonder interface with all the ''programming by contract'' checks that are needed and no other".
What I tend to do over the life of a project is to build up a library of "not quite mocks" for the major interfaces needed. In some cases those may be somewhat obvious from the start, but in other cases they emerge incrementally as I'm considering some major refactoring, as follows (typical scenario)...:
The early stages of the refactoring break some aspect of the fragile, strong-expectations mocking that I had cheaply put in place initially. I ponder whether to just tweak the expectations or go whole hog. If I decide it's not a one-off (i.e. the return in future refactorings and tests will justify the investment), then I hand-code a good "not quite mock" and stash it away in the project's specific bag of tricks -- actually often reusable across projects; such classes/packages as MockFilesystem, MockBigtable, MockDom, MockHttpClient, MockHttpServer, etc., go into a project-agnostic repository and get reused for testing all kinds of future projects (and in fact may be shared with other teams across the company, if several teams are using filesystem interfaces, bigtable interfaces, DOMs, http client/server interfaces, etc., that are uniform across the teams).
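As a sketch of what such a "not quite mock" might look like for the question's IRepository (the Item type, the Add helper and the single-method interface shown here are assumptions; the real interface has many more members):

using System;
using System.Collections.Generic;

public class Item { }

public interface IRepository
{
    Item Find();
}

// A hand-coded, reusable in-memory fake: no call expectations, just simple
// precondition-style checks and a helper for seeding test data.
public class InMemoryRepository : IRepository
{
    private readonly List<Item> _items = new List<Item>();

    public void Add(Item item) => _items.Add(item);   // seeding helper for tests

    public Item Find()
    {
        if (_items.Count == 0)
            throw new InvalidOperationException("Seed the fake before calling Find().");
        return _items[0];
    }
}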
I acknowledge that the use of the word "mock" may be slightly inappropriate here if you take "mock" to refer specifically to the precise-expectation style of "fake implementation for testing purposes" of interfaces. Maybe Stub, Shim, Fake, Test, or some other prefix yet might be preferable (I do tend to use Mock for historical reasons, except when I remember to specifically call it Fake or the like;-).
If I were using languages with a clear and precise way to express the various design-by-contract specs of an interface in the language itself, I imagine I'd get automatic tool support for most of this faking/shimming/etc.; however, I mostly code in other languages, so I have to do a bit more manual work here. But I think that's a separate issue.
I read the excellent book http://www.manning.com/rainsberger/.
I would like to provide some insight I gained from it.
I believe several pieces of its advice could help you reduce the coupling between your tests and your implementation.
Edited: included in this coupling is a test asserting that the code under test calls certain methods. Calling a method is never a functional need; it is an implementation concern. It relates to an interface other than the one being tested.
In many cases, testing should be about the external behavior of an interface, treating it in a completely black-box way.
The author gives the example that the test classes should be in a different package than the class to test. At first, I was sure this was wrong, because it makes it more difficult to test protected and package methods. But he argues that you should only test the external behavior of a system, that is the public methods. The non-public methods are implementation-details, and testing it results in coupling the test with the implementation. This was very insightful to me.
By the way, this book has so much excellent practical advice on how to design tests (say, JUnit tests) that I would buy it with my own money if it wasn't provided by the company! ;-)
Another excellent piece of advice from the book was to test at the functionality level, not the method level. For example, testing the add() method of a list requires trusted size() and get() methods, but they in turn require add(), so we have a loop and can't test safely. But testing the list's behavior globally (across all methods) when adding involves testing the three methods at the same time: not proving that each is correct in isolation, but checking that together they provide the expected behavior. Often, when you try to test one of your methods in isolation, you cannot write a sensible test without using other methods, so you end up testing the implementation instead; the consequence is coupling between test and implementation.
Only test functionalities, not methods.
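A tiny sketch of what such a functionality-level test looks like, using the built-in List<T> purely for illustration (NUnit assumed): Add, Count and the indexer are exercised together and the combined behaviour is asserted, rather than trying to prove each member correct in isolation.

using System.Collections.Generic;
using NUnit.Framework;

public class ListBehaviourTests
{
    [Test]
    public void Adding_an_item_makes_it_retrievable()
    {
        var list = new List<string>();

        list.Add("first");

        Assert.AreEqual(1, list.Count);
        Assert.AreEqual("first", list[0]);
    }
}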
Also, note that testing using external resources (the database being the most common, but many others exist) is much slower, requires some access (IP, licence, etc.) from the executing machine, requires a started container, may be sensitive to simultaneous access (a database can't reliably run multiple JUnit campaigns at the same time), and has many other drawbacks. If all your tests use external resources, then you are in trouble: you can't run all your tests all the time, from any machine, from many machines at once, etc. So I understood (still from the book):
Test each external resource (the database, for example) only once, in a dedicated test that is not a unit test but an integration test (although it can still use the same JUnit technology if appropriate).
Write enough dedicated tests to trust that the resource is working. Then, other tests should never test it again; that is a waste, and they should simply trust it.
Note that the current Maven best-practices give similar advice (see free book "Better builds with Maven"). I believe this is not a coincidence:
The JUnits in the test directory of a project are real unit tests. They run every time you do something with your project (except just compile).
The integration and functional tests should be provided in a different project, an integration-test project. They only run in a much later (optional) phase, after you have deployed your whole application in the container.
As a consequence, tests ARE going to fail - soon and often. This is pain. And the problem.
Well yes, unit tests can depend on internal implementation details. And sure, such "white box" tests are more brittle than "black box" tests which only rely on the externally published contract.
But I don't agree that this has to cause regular test failures. Think about how you arrived at testing with mocks in the first place: you've used dependency injection to limit the responsibilities of the class, to decrease coupling to other code, and to enable testing the class in isolation.
Are there any techniques to deal with this?
A good unit test can only fail if you change the class under test, even if it depends on internal implementation details. And you can limit the responsibilities and coupling (to other classes) of your class, so that you will rarely have to change it.
In practice you'll have to be pragmatic; every now and then you'll write "unit tests" that are actually integration tests involving multiple classes or over-sized classes. Brittle tests depending on internal implementation details are more dangerous in that case. But for truly TDD-style classes, not so much.
Remember, when you're writing a test you're not testing your repository, you're testing your Service class - in this specific example, the ProcessWithItem method. You create your expectations for the repository object. By the way, you forgot to specify the expected return value for your x.Find method. That's the beauty of DI: you isolate everything from the code you are about to write (I assume you do TDD).
To be honest I cannot relate to the problem you describe.
Yeah, that's one of the big problems with unit testing. That, and refactoring. And design changes that are a regular occurrence with Agile. And the inexperience of those creating the tests. And etc etc...
I think the only thing the average non-critical-systems developer can do is pick and choose your battles wisely. Early in development identify the truly critical paths and test those. Weigh the likelihood of that code changing before spending lots of time testing the rest of it.
If anybody figures it all out please let us know.

Mocks or real classes? [duplicate]

Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit test. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can kind of convolute tests and hide the real problem.
Mocks can have downsides too; I think there is a balance between some mocks and some real objects that you will have to find for yourself.
There is one really good reason why you want to use stubs/mocks instead of real classes: to keep your unit test's (pure unit test's) class under test isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against a file system or a relational database, then the tests will become sluggish. Slow tests make developers run unit tests less often. If you're doing Test-Driven Development, then time-hogging tests are a devastating waste of developers' time.
It will be easier to track down problems if the test is isolated to the class under test. With a system test, in contrast, it is much more difficult to track down nasty bugs that are not readily visible in stack traces or the like.
Tests are less fragile with respect to changes made in external classes/interfaces, because you're purely testing the class under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against the external behaviour of a class rather than its internal implementation, which is more useful when deciding on the code design.
Now, if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests - in practice they are mostly run before committing to your favorite code repository - but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from an answer of mine to "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets, serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry, as mocks generated from mocking frameworks can be given precise expectations about how they will be used, e.g. the number of method calls, the precise sequence, and the parameters expected each time. Your real objects cannot do this for you, so you risk losing depth in your tests.
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
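A sketch of such a fake data access class (the interface name, its method and the data are invented for illustration):

using System.Collections.Generic;

public interface ICustomerDataAccess
{
    string GetCustomerName(int id);
}

// Cooked data lives in an in-memory dictionary, so tests keep running even
// when the real database is unavailable.
public class FakeCustomerDataAccess : ICustomerDataAccess
{
    private readonly Dictionary<int, string> _customers = new Dictionary<int, string>
    {
        { 1, "Ada Lovelace" },
        { 2, "Grace Hopper" },
    };

    public string GetCustomerName(int id) =>
        _customers.TryGetValue(id, out var name) ? name : null;
}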
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use Interfaces and injection where it's wrong. Also if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.

How do you unit test a class that is dependent on many other classes?

I have been hearing that with unit testing we can catch most of the bugs in the code, and I really believe that this is true. But my question is, in large projects where each class is dependent on many other classes, how do you go about unit testing the class? Stubbing out every other class doesn't make much sense because of both complexity and the time required to write the stubs. What is your opinion about it?
Part of the advantage of using tests, is that it forces you to minimize dependencies thereby creating testable code. By minimizing dependencies you will increase maintainability and reusability of your code, both highly desirable traits.
Since you are introducing tests into an existing code-base, you will no doubt encounter many hard-to-test situations which will require refactoring to test properly. This refactoring will increase the testability of your code, decreasing dependencies at the same time.
The reason why its hard to retrofit code with tests is why many advocate following Test-Driven-Development. If you write your tests first and then write the code to pass the tests, your code will by default be much more testable and decoupled.
Use a mocking framework to stub your classes for you. Mocking frameworks (I use Rhino Mocks for C#/.NET) make it pretty easy to stub out your dependencies. Used with dependency injection, your classes can be easily decoupled from other classes, and it doesn't take much work to make them so. Note that sometimes it is easier to "fake" something rather than mock it. I do end up faking a few classes, but usually these are pretty easy to write -- they just contain "good" or "bad" data and return it. No logic involved.
I'm not up to speed on the whole unit testing thing, but I would think that each unit should represent some sort of test. A test case should test some procedure. A procedure may not be confined to a single class.
For instance, if your application can be broken down into atomic components, then a test case could exist for each component and for each applicable chain of transitions between components.
In our project developers have the responsibility of writing and maintaining stubs right from the start.
Even though stubs cost time and money, unit testing provides some undeniable advantages (and you agreed to it too). It allows for automation of the testing process, reduces difficulties of discovering errors contained in more complex pieces of the application, and test coverage is often enhanced because attention is given to each unit.
In my experience, projects where we put unit testing at a low priority had to suffer at a later stage, where a single change either broke things or needed extra testing.
Unit testing increases the confidence in me as a software developer.
I prefer to unit test features, which may or may not correspond to individual classes. This granularity of unit testing seems to me to be a better trade-off between the work required for testing, the feedback for the customer (since features are what they pay for), and the long-term utility of the unit tests (they embody requirements and illustrate how-to cases).
As a consequence, mocking is rarely required, but some tests span multiple classes.
But my question is in large projects where each class is dependent on many other classes how do you go about unit testing the class?
First, you understand that "each class is dependent on many other classes" is a bad thing, right? And that it isn't a function of having a large project but rather of bad design? (This is one of the benefits of TDD, that it tends to discourage such highly coupled code.)
Stubbing out every other class doesn't make much sense because of both complexity and the time required to write the stubs.
Not to mention that it doesn't help the design problem. Worse, the investment in all the stubs can actually be a hindrance to refactoring, if only a psychological one.
A better approach (IMHO) is to start from the inside out on the classes, changing the design as you go. Typically I would approach this by doing what I call "internal dependency injection". This involves leaving the method signatures unchanged but extracting the data needed from the dependencies and then extracting functions that hold the actual logic. A trivial example might be:
public int foo(A a, B b) {
    C c = a.getC();
    D d = b.getD();
    Bar bar = calculateBar(c, d);
    return calculateFoo(bar, this.baz);
}

Bar calculateBar(C c, D d) {
    ...
}

int calculateFoo(Bar bar, Baz baz) {
    ...
}
Now I can write tests for calculateBar and calculateFoo, but I don't need to worry about setting up internal state (this.baz) and I don't need to worry about creating mock versions of A and B.
After creating tests like this against the existing interfaces, I then look for opportunities to push the changes out to the interface, like changing the foo method to take C and D rather than A and B.
Any of that make sense?
You're right: stubbing all the classes on which the class you're testing depends is not worth the effort. And you will have to maintain your mocks as well, if ever you change the interfaces.
If the classes used by the one you're testing have already been tested, then it's fine to have a test spanning several classes.
It is sometimes simpler to mock an object, when:
the object is or uses an external resource such as a database, network connection, or disk
the object is a GUI
the object is not [yet] available
the object's behavior is not deterministic
the object is expensive to set up
I feel you may need to refactor to reduce the complexity.
I have found that it can be very helpful to refactor classes (i.e. break them down) so each has a single responsibility (less complexity). Then you can unit test this responsibility. You can use mocking frameworks to easily mock and initialize the dependencies. I find FakeItEasy to be very good and easy to use, and good examples are easy to find.
Here are the principles that worked well for us, both for testability and for robust design, in a very big enterprise project (a combined sketch follows the list):
Make your dependencies explicit by taking them through class constructors.
Instead of taking concrete classes as dependencies, create interfaces that define the functionality you need and take those interfaces as dependencies in class constructors instead. This also allows you to apply the Dependency Injection pattern if needed.
Use one of the existing frameworks, like Moq, so you do not need to write a full test implementation of your interfaces; at run time you create your mock objects from inside your unit test.
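A minimal sketch tying the three points together (all names here are invented for illustration; Moq and NUnit are assumed):

using Moq;
using NUnit.Framework;

public interface IMessageSender
{
    void Send(string message);
}

public class OrderNotifier
{
    private readonly IMessageSender _sender;

    public OrderNotifier(IMessageSender sender)   // dependency made explicit in the constructor
    {
        _sender = sender;
    }

    public void NotifyShipped(int orderId) => _sender.Send($"Order {orderId} shipped");
}

public class OrderNotifierTests
{
    [Test]
    public void NotifyShipped_sends_a_message()
    {
        // the mock is created at run time inside the test; no hand-written
        // test implementation of IMessageSender is needed
        var sender = new Mock<IMessageSender>();
        var notifier = new OrderNotifier(sender.Object);

        notifier.NotifyShipped(42);

        sender.Verify(s => s.Send("Order 42 shipped"), Times.Once());
    }
}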