Is it still unit testing, when I write a WebInterface? - unit-testing

I have a more general question:
Assume I have a web application, for example one built on the Struts2 framework.
It quickly becomes complicated to write unit tests for its functions, as you have to mock every aspect of the framework:
the database and its connection, the session, an LDAP connection, or whatever else is needed that I did not write myself.
It would be much easier to write the tests so that they run in a web interface inside the base application, as all these things would then already exist.
The question:
Would you guys still consider this unit testing?

Some thoughts..
The question is very general. My suggestion is that you still want to write some sort of unit tests, for a number of reasons. First, you can run them as an automated test suite, so if something breaks you know quickly. Second, you get a better-designed system: your objects are loosely coupled. And you become more confident in the code you write.
If your framework makes the code harder to test:
a. Try abstracting away some dependencies, so that test doubles can be injected in place of the real instances (see the sketch below).
b. Use a testing framework that can break apart tightly coupled dependencies.
It is hard to give a comprehensive answer, but this is the general direction I would suggest.
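To make suggestion (a) concrete, here is a minimal sketch (all class and method names are invented for illustration; this is not Struts2 API): the LDAP lookup is hidden behind an interface, so a plain unit test can inject a fake and never touch the framework.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Illustrative names only - one way to abstract an LDAP dependency.
interface DirectoryService {
    boolean userExists(String login);
}

class RegistrationAction {
    private final DirectoryService directory;

    RegistrationAction(DirectoryService directory) {
        this.directory = directory; // injected, so tests never need a real LDAP server
    }

    String execute(String login) {
        return directory.userExists(login) ? "duplicate" : "success";
    }
}

public class RegistrationActionTest {
    @Test
    public void reportsDuplicateWhenLoginIsTaken() {
        // a hand-rolled fake stands in for the LDAP connection
        RegistrationAction action = new RegistrationAction(login -> "taken".equals(login));
        assertEquals("duplicate", action.execute("taken"));
        assertEquals("success", action.execute("fresh"));
    }
}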

You should first consider what you really want to test. A framework, by definition, will use the classes you provide to do some "magic". Do you want to test that already-tested "magic", or the business core of the app you programmed?
Also, you should consider where to stop testing. You probably don't want to test the connection to the database (considering what you wrote), so just mock it.
Keep in mind that you should test just one piece of functionality at a time; don't have, for example, the database connection and the LDAP lookup in the same test - that wouldn't be unit testing.
Also take a look at this tutorial: http://tutorials.jenkov.com/java-unit-testing/index.html

Related

Learning About Unit Testing Using When and Should and TDD

The tests at my new job are nothing like the tests I have encountered before.
When they're writing their unit tests (presumably before the code), they create a class starting with "When". The name describes the scenario under which the tests will run (the fixture). They'll create subclasses for each branch through the code. All of the tests within the class start with "should", and they test different aspects of the code after running. So, they will have a method for verifying that each mock (DOC) is called correctly and for checking the return value, if applicable. I am a little confused by this method because it means the exact same execution code is being run for each test, and this seems wasteful. I was wondering if there is a technique similar to this that they may have adapted. A link explaining the style and how it is supposed to be implemented would be great. It sounds similar to some approaches of BDD I've seen.
I also noticed that they've moved the repeated calls to "execute" the SUT into the setup methods. This causes issues when they are expecting exceptions, because they can't use built-in tools for performing the check (Python unittest's assertRaises). This also means storing the return value as a backing field of the test class. They also have to store many of the mocks as backing fields. Across class hierarchies it becomes difficult to tell the configuration of each mock.
They also test code a little differently. It really comes down to what they consider an integration test. They mock out anything that steals the context away from the function being tested. This can mean private methods within the same class. I have always limited mocking to resources that can affect the results of the test, such as databases, the file system or dates. I can see some value in this approach. However, the way it is being used now, I can see it leading to fragile tests (tests that break with every code change). I get concerned because without an integration test, in this case, you could be using a 3rd party API incorrectly but your unit tests would still pass. I'd like to learn more about this approach as well.
So, any resources about where to learn more about some of these approaches would be nice. I'd hate to pass up a great learning opportunity just because I don't understand the way they are doing things. I would also like to stop focusing on the negatives of these approaches and see where the benefits come in.
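Roughly, the tests look like this (a sketch of my best understanding, with invented names, rendered in JUnit rather than our actual stack): the class name is the scenario, the setup performs the single "execute" call, and each "should" method verifies one outcome.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import org.junit.Before;
import org.junit.Test;

// Toy SUT so the sketch is self-contained.
class Account {
    private int balance;
    Account(int openingBalance) { balance = openingBalance; }
    boolean withdraw(int amount) {
        if (amount > balance) return false;
        balance -= amount;
        return true;
    }
    int balance() { return balance; }
}

public class WhenWithdrawingMoreThanTheBalance {
    private Account account;
    private boolean accepted;

    @Before
    public void context() {
        account = new Account(100);       // the fixture named by the class
        accepted = account.withdraw(150); // the single "execute" call, shared by all tests
    }

    @Test
    public void shouldRefuseTheWithdrawal() {
        assertFalse(accepted);
    }

    @Test
    public void shouldLeaveTheBalanceUnchanged() {
        assertEquals(100, account.balance());
    }
}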
If I understood your explanation in the first paragraph correctly, that's quite similar to what I often do. (It depends on whether the testing framework makes it easy or not. Also, many mocking frameworks don't support it, but spy frameworks like Mockito do better.)
For example, see the stack example here, which has a common setup (adding things to the stack) and then a bunch of independent tests which each check one thing. Here's another example, this time one where none of the tests (@Test) modify the common fixture (@Before), but each of them focuses on checking just one independent thing that should happen. If the tests are very well focused, then it should be possible to change the production code to make any single test fail while all other tests pass (I wrote about that recently in Unit Test Focus Isolation).
The main idea is to have each test check a single feature/behaviour, so that when a test fails it's easier to find out why. See this TDD tutorial for more examples and to learn the style.
I'm not worried about the same code paths being executed multiple times when it takes a millisecond to run one test (if it takes more than a couple of seconds to run all unit tests, the tests are probably too big). From your explanation I'm more worried that the tests might be too tightly coupled to the implementation instead of the feature, if it's systematic that there is one test for each mock. The name of the test would be a good indicator of how well structured or how fragile the tests are - does it describe a feature, or how that feature is implemented?
About mocking, a good book to read is Growing Object-Oriented Software Guided by Tests. One should not mock 3rd party APIs (APIs which you don't own and can't modify), for the reason you already mentioned, but one should create an abstraction over it which better fits the needs of the system using it and works the way you want it. That abstraction needs to be integration tested with the 3rd party API, but in all tests using the abstraction you can mock it.
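As a sketch of that advice (all names are invented; VendorHttpClient stands in for whatever 3rd-party API you don't own): define an interface shaped by your system's needs, implement it once as an adapter, integration-test only the adapter against the real service, and mock the interface everywhere else.

// The abstraction your system needs - yours to own and mock freely:
interface PaymentGateway {
    boolean charge(String account, int cents);
}

// Stand-in for the 3rd-party class you don't own and can't modify:
class VendorHttpClient {
    String post(String path, String payload) { return "OK"; }
}

// The single adapter that touches the vendor API. Integration-test this
// class against the real service; mock PaymentGateway in all other tests.
class VendorPaymentGateway implements PaymentGateway {
    private final VendorHttpClient client = new VendorHttpClient();

    public boolean charge(String account, int cents) {
        return "OK".equals(client.post("/charge", account + ":" + cents));
    }
}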
First, the pattern that you are using is based on Cucumber - here's a link. The style is from the BDD (Behavior-driven development) approach. It has two advantages over traditional TDD:
Language - one of the tenets of BDD is that the language you use influences the thoughts you have. By forcing you to speak in the language of the end user, you will end up writing different tests than when you write tests from the perspective of a programmer.
Tests lock code - BDD locks the code at the appropriate level. One common problem in testing is that you write a large number of tests, which makes your codebase more brittle: when you change the code, you must also change a large number of tests. BDD forces you to lock in the behavior of your code, rather than its implementation. This way, when a test breaks, it is more likely to be meaningful.
It is worth noting that you do not have to use the Cucumber style of testing to achieve these effects, and using it does add an extra layer of overhead. But very few programmers have been successful in keeping the BDD mindset while using traditional xUnit tools (TDD).
It also sounds like you have some scenarios where you would like to say "when I do X, then verify Y". Because the current BDD xUnit frameworks only allow you to verify primitives (strings, ints, doubles, booleans...), this usually results in a large number of individual tests (one for each assert). It is possible to do more complicated verifications using a Golden Master paradigm test tool, such as ApprovalTests. Here's a video example of this.
Finally, here's a link to Dan North's blog - he started it all.

BDD and functional tests

I am starting to buy into BDD. Basically, as I understand it, you write a scenario that describes the acceptance criteria for a certain story. You start with simple tests, from the outside in, using mocks in place of classes you have not implemented yet. As you progress, you replace the mocks with real classes. From Introduction to BDD:
At first, the fragments are implemented using mocks to set an account to be in credit or a card to be valid. These form the starting points for implementing behaviour. As you implement the application, the givens and outcomes are changed to use the actual classes you have implemented, so that by the time the scenario is completed, they have become proper end-to-end functional tests.
My question is: When you finish implementing a scenario, should all classes you use be real, like in integration tests? For example, if you use DB, should your code write to a real (but lightweight in-memory) DB? In the end, should you have any mocks in your end-to-end tests?
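For concreteness, here is the kind of scenario step class I have in mind (a sketch with invented names, using jBehave-style annotations, plus toy implementations so it is self-contained); the given starts out hard-coded and is later swapped for the real class:

import static org.junit.Assert.assertTrue;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

interface AccountService {
    boolean withdraw(String user, int amount);
}

// Toy "real" implementation, standing in for the class you eventually write.
class RealAccountService implements AccountService {
    public boolean withdraw(String user, int amount) { return amount <= 100; }
}

public class CashWithdrawalSteps {
    private AccountService accounts;
    private boolean dispensed;

    @Given("the account is in credit")
    public void givenTheAccountIsInCredit() {
        // Early on this given might be a hard-coded stub or mock;
        // by the end of the story it uses the real implementation:
        accounts = new RealAccountService();
    }

    @When("the customer requests cash")
    public void whenTheCustomerRequestsCash() {
        dispensed = accounts.withdraw("alice", 20);
    }

    @Then("the ATM dispenses the cash")
    public void thenTheCashIsDispensed() {
        assertTrue(dispensed);
    }
}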
Well, it depends :-) As I understand it, the tests produced by BDD are still unit tests, so you should use mocks to eliminate dependencies on external factors like the DB.
In full-fledged integration / functional tests, however, you should obviously test against the whole production system, without any mocks.
Integration tests might contain stubs/mocks to fake the code/components outside the modules that you are integrating.
However, IMHO an end-to-end test should mean no stubs/mocks along the way, production code only. Or in other words: if mocks are present, it is not really an end-to-end test.
Yes, by the time a scenario runs, ideally all your classes will be real. A scenario exercises the behaviour from a user's point of view, so the system should be as a user would see it.
In the early days of BDD we used to start with mocks in the scenarios. I don't bother with this any more, because it's a lot of work to keep mocking as you go down the levels. Instead I will sometimes do things like hard-code data or behaviour if it lets me get feedback from the stakeholders more quickly.
We still keep mocks in the unit tests though.
For things like databases, sure, you can use an in-memory DB or whatever helps you get feedback faster. At some point you should probably run your scenarios on a system that's as close to production as possible. If this is too slow, you might do it overnight instead of as part of your regular build cycle.
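For example, here is a minimal sketch of what I mean by an in-memory DB (assuming the H2 database and plain JDBC; the table and data are invented) - real SQL and real constraints, but nothing to install and nothing to clean up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbDemo {
    public static void main(String[] args) throws Exception {
        // "mem:" means the database lives and dies with the JVM
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:scenarios");
        Statement st = conn.createStatement();
        st.execute("CREATE TABLE account (id INT PRIMARY KEY, balance INT)");
        st.execute("INSERT INTO account VALUES (1, 100)");

        ResultSet rs = st.executeQuery("SELECT balance FROM account WHERE id = 1");
        rs.next();
        System.out.println("balance = " + rs.getInt(1));
        conn.close();
    }
}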
As for what you "should" do, writing the right code is far more tricky than writing the code right. I worry about using my scenarios to get feedback from the stakeholders and users before I worry about how close my environment is to a production environment. When you get to the point where changes are deployed every couple of weeks, sure, then you probably want more certainty that you're not introducing any bugs.
Good luck!
I agree with Peter and ratkok. I think you keep the mocks forever, so you always have unit tests.
Separately, it is appropriate to additionally have integration tests (no mocks, use a DB, etc. etc.).
You may even find in-betweens helpful at times (mock one piece of depended-on code (DOC), but not another).
I've only recently been looking into BDD, and in particular jBehave. I work in fairly large enterprises with a lot of waterfall, ceremony-oriented people. I'm looking at BDD as a way to take the business's use cases and turn them directly into tests, which the developer can then turn into either unit tests or integration tests.
BDD seems to me to be not just a way to help drive the developer's understanding of what the business wants, but also a way to ensure as much as possible that those requirements are accurately represented.
My view is that if you are dealing with mocks then you are doing unit tests. You need both: unit tests to test out the details of a class's operation, and integration tests to check that the class plays well with others. I find developers often get confused between the two, but it's best to be as clear as possible and keep them separate from each other.

Deploying a big test disguised as a program

If someone has created a small diagnostic, internal web app that uses (unit) tests as its logic, is there ever a valid reason to do that? Keep in mind that NUnit has to be deployed as well wherever this website goes.
I'm of the view that programs should contain their own logic and possibly reusable parts (if available), but not wrap tests as their logic. Tests serve the purpose of validating code logic. If you then say tests are going to be the code logic, shouldn't you need to write tests to validate the tests? Why is that fundamentally wrong?
Hint: because now you are stringing all of these tests together and interrelating them, which means they are no longer independent(?).
Using a unit test framework for something other than unit tests is usually not the proper path. You don't have to write tests for your unit tests, since you write them first and see them fail - that's how you know they're working properly. I'm guessing that testing code written within a unit test framework is nontrivial, though, and if I had a diagnostics app for a critical piece of software, I would really like to be certain it worked as it should.
Edit: It seems that you've already made up your mind but need support in expressing why the current strategy is less than ideal to, perhaps, other project members. If that's the case, I suggest you put your code where your mouth is, and throw together a small sample app designed differently. If utilizing a unit test framework in this specific case was a bad design decision, then that would make it clear as sunshine.
I'm pretty sure if you look at this question, TDD Anti-patterns catalogue, you will find that you are committing several anti-patterns.
Code is code. Just because it's labeled a test doesn't mean it's not also a useful application.
Suppose we need to write a lot of verifications of behaviour. Why is using a test framework a bad idea? Should we instead write a new framework with identical functionality and call it something different?
Take an external view. This application claims to do certain things. Does it do them correctly? Reliably? Can it be maintained, enhanced? Understood?
If so, why do you care that it happens to use a test framework in its implementation? If its behaviour or structure are defective, then we criticise.
One lovely thing about great technology is that it has unexpected applications.
The purpose of unit testing is to make sure your class works the way it should, according to its interface. So using your unit tests as the logic of some testing app seems like using asserts as program logic.
If you need a testing app, implement it and cover it WITH unit tests. Maybe it needs to configure something, or to get some user interaction.
One of the other reasons I see: unit tests are written according to assumptions about how everything works. If your assumptions are right, the tests should pass. When adding functionality, unit tests let you feel confident that all the assumptions still hold. So every test should be kept as simple as you can keep it. That's why there is no need to test test code, and no need to use testing code in any testing application.

What are the pros and cons of automated Unit Tests vs automated Integration tests?

Recently we have been adding automated tests to our existing Java applications.
What we have
The majority of these tests are integration tests, which may cover a stack of calls like:
HTTP post into a servlet
The servlet validates the request and calls the business layer
The business layer does a bunch of stuff via hibernate etc and updates some database tables
The servlet generates some XML, runs this through XSLT to produce response HTML.
We then verify that the servlet responded with the correct XML and that the correct rows exist in the database (our development Oracle instance). These rows are then deleted.
We also have a few smaller unit tests which check single method calls.
These tests are all run as part of our nightly (or ad hoc) builds.
The Question
This seems good because we are checking the boundaries of our system: servlet request/response on one end and the database on the other. If these work, then we are free to refactor or mess with anything in between and have some confidence that the servlet under test continues to work.
What problems are we likely to run into with this approach?
I can't see how adding a bunch more unit tests on individual classes would help. Wouldn't that make it harder to refactor as it's much more likely we will need to throw away and re-write tests?
Unit tests localize failures more tightly. Integration-level tests correspond more closely to user requirements and so are a better predictor of delivery success. Neither is much good unless built and maintained, but both are very valuable if properly used.
The thing with unit tests is that no integration-level test can exercise all the code as thoroughly as a good set of unit tests can. Yes, that can mean that you have to refactor the tests somewhat, but in general your tests shouldn't depend on the internals so much. So, let's say for example that you have a single function to compute a power of two. You describe it (as a formal methods guy, I'd claim you specify it):
long pow2(int p); // returns 2^p for 0 <= p <= 30
Your test and your spec look essentially the same (this is sort of pseudo-xUnit for illustration):
assertEqual(1073741824, pow2(30));
assertEqual(1, pow2(0));
assertException(domainError, pow2(-1));
assertException(domainError, pow2(31));
Now your implementation can be a for loop with a multiply, and you can come along later and change that to a shift.
If you change the implementation so that, say, it returns 16 bits (remember that sizeof(long) is only guaranteed to be no less than sizeof(short)), then these tests will fail quickly. An integration-level test should probably fail, but not certainly, and it's just as likely as not to fail somewhere far downstream of the computation of pow2(28).
The point is that they really test different situations. If you could build sufficiently detailed and extensive integration tests, you might be able to get the same level of coverage and degree of fine-grained testing, but it's probably hard to do at best, and the exponential state-space explosion will defeat you. By partitioning the state space using unit tests, the number of tests you need grows much less than exponentially.
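In runnable Java, the same spec and tests might look like this (a sketch; mapping the pseudo-code's domainError onto IllegalArgumentException is my assumption):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

class Pow2 {
    /** Returns 2^p for 0 <= p <= 30. */
    static long pow2(int p) {
        if (p < 0 || p > 30) throw new IllegalArgumentException("p out of range: " + p);
        long result = 1;
        for (int i = 0; i < p; i++) {
            result *= 2; // the loop-with-a-multiply; a later refactoring to 1L << p
        }                // must leave every test below passing
        return result;
    }
}

public class Pow2Test {
    @Test
    public void coversTheBoundariesOfTheSpec() {
        assertEquals(1L, Pow2.pow2(0));
        assertEquals(1073741824L, Pow2.pow2(30));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeExponents() {
        Pow2.pow2(-1);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsExponentsAboveThirty() {
        Pow2.pow2(31);
    }
}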
You are asking pros and cons of two different things (what are the pros and cons of riding a horse vs riding a motorcycle?)
Of course both are "automated tests" (~riding) but that doesn't mean that they are alternative (you don't ride a horse for hundreds of miles, and you don't ride a motorcycle in closed-to-vehicle muddy places)
Unit Tests test the smallest unit of the code, usually a method. Each unit test is closely tied to the method it is testing, and if it's well written it's tied (almost) only with that.
They are great to guide the design of new code and the refactoring of existing code. They are great to spot problems long before the system is ready for integration tests. Note that I wrote guide: all of Test Driven Development is about this word.
It does not make any sense to have manual Unit Tests.
What about refactoring, which seems to be your main concern? If you are refactoring just the implementation (content) of a method, but not its existence or "external behavior", the Unit Test is still valid and incredibly useful (you cannot imagine how useful until you try).
If you are refactoring more aggressively, changing methods existence or behavior, then yes, you need to write a new Unit Test for each new method, and possibly throw away the old one. But writing the Unit Test, especially if you write it before the code itself, will help to clarify the design (i.e. what the method should do, and what it shouldn't) without being confused by the implementation details (i.e. how the method should do the thing that it needs to do).
Automated Integration Tests test the biggest unit of the code, usually the entire application.
They are great to test use cases which you don't want to test by hand. But you can also have manual Integration Tests, and they are as effective (only less convenient).
Starting a new project today, it does not make any sense not to have Unit Tests, but I'd say that for an existing project like yours it does not make too much sense to write them for everything you already have that is working.
In your case, I'd rather use a "middle ground" approach writing:
smaller Integration Tests which only test the sections you are going to refactor. If you are refactoring the whole thing, then you can use your current Integration Tests, but if you are refactoring only, say, the XML generation, it does not make any sense to require the presence of the database, so I'd write a simple and small XML Integration Test (a sketch follows at the end of this answer).
a bunch of Unit Tests for the new code you are going to write. As I already wrote above, Unit Tests will be ready as soon as you "mess with anything in between", making sure that your "mess" is going somewhere.
In fact your Integration Test will only make sure that your "mess" is not working (because at the beginning it will not work, right?) but it will not give you any clue on
why it is not working
if your debugging of the "mess" is really fixing something
if your debugging of the "mess" is breaking something else
Integration Tests will only give the confirmation at the end if the whole change was successful (and the answer will be "no" for a long time). The Integration Tests will not give you any help during the refactoring itself, which will make it harder and possibly frustrating. You need Unit Tests for that.
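As a sketch of that simple and small XML Integration Test (using only the JDK's javax.xml.transform; the stylesheet path, input document and expected output are invented), it exercises the XML-to-HTML step with no servlet and no database:

import static org.junit.Assert.assertTrue;

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.junit.Test;

public class XmlGenerationTest {
    @Test
    public void rendersTheCustomerNameInHtml() throws Exception {
        Transformer xslt = TransformerFactory.newInstance().newTransformer(
                new StreamSource("src/main/webapp/response.xslt")); // hypothetical stylesheet

        StringWriter html = new StringWriter();
        xslt.transform(
                new StreamSource(new StringReader("<customer><name>Ann</name></customer>")),
                new StreamResult(html));

        assertTrue(html.toString().contains("Ann")); // no servlet, no Oracle - just this step
    }
}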
I agree with Charlie about integration-level tests corresponding more to user actions and the correctness of the system as a whole. I do think there is a lot more value to unit tests than just localizing failures more tightly, though. Unit tests provide two main values over integration tests:
1) Writing unit tests is as much an act of design as testing. If you practice Test Driven Development/Behavior Driven Development the act of writing the unit tests helps you design exactly what you code should do. It helps you write higher quality code (since being loosely coupled helps with testing) and it helps you write just enough code to make your tests pass (since your tests are in effect your specification).
2) The second value of unit tests is that if they are properly written they are very, very fast. If I make a change to a class in your project, can I run all the corresponding tests to see if I broke anything? How do I know which tests to run? And how long will they take? I can guarantee it will be longer than well-written unit tests. You should be able to run all of your unit tests in a couple of minutes at most.
Just a few examples from personal experience:
Unit Tests:
(+) Keeps testing close to the relevant code
(+) Relatively easy to test all code paths
(+) Easy to see if someone inadvertently changes the behavior of a method
(-) Much harder to write for UI components than for non-GUI code
Integration Tests:
(+) It's nice to have nuts and bolts in a project, but integration testing makes sure they fit each other
(-) Harder to localize source of errors
(-) Harder to test all (or even all critical) code paths
Ideally both are necessary.
Examples:
Unit test: Make sure that input index >= 0 and < length of array. What happens when outside bounds? Should method throw exception or return null?
Integration test: What does the user see when a negative inventory value is input?
The second affects both the UI and the back end. Both sides could work perfectly, and you could still get the wrong answer, because the error condition between the two isn't well-defined.
The best part about Unit testing we've found is that it makes devs go from code->test->think to think->test->code. If a dev has to write the test first, [s]he tends to think more about what could go wrong up front.
To answer your last question, since unit tests live so close to the code and force the dev to think more up front, in practice we've found that we don't tend to refactor the code as much, so less code gets moved around - so tossing and writing new tests constantly doesn't appear to be an issue.
The question has a philosophical part for sure, but it also points to pragmatic considerations.
Test-driven design used as a means to become a better developer has its merits, but it is not required for that; many a good programmer exists who never wrote a unit test. The best reason for unit tests is the power they give you when refactoring, especially when many people are changing the source at the same time. Spotting bugs on check-in is also a huge time-saver for a project (consider moving to a CI model and building on check-in instead of nightly). So if you write a unit test, either before or after you write the code it tests, you are sure at that moment about the new code you've written. It is what can happen to that code later that the unit test insures against - and that can be significant. Unit tests can stop bugs before they get to QA, thereby speeding up your projects.
Integration tests stress the interfaces between elements in your stack, if done correctly. In my experience, integration is the most unpredictable part of a project. Getting individual pieces to work tends not to be that hard, but putting everything together can be very difficult because of the types of bugs that can emerge at this step. In many cases, projects are late because of what happens in integration. Some of the errors encountered in this step are found in interfaces that have been broken by some change made on one side that was not communicated to the other side. Another source of integration errors are in configurations discovered in dev but forgotten by the time the app goes to QA. Integration tests can help reduce both types dramatically.
The importance of each test type can be debated, but what will be of most importance to you is the application of either type to your particular situation. Is the app in question being developed by a small group of people or by many different groups? Do you have one repository for everything, or many repos, each for a particular component of the app? If you have the latter, then you will have challenges with compatibility between different versions of different components.
Each test type is designed to expose the problems of different levels of integration in the development phase, to save time. Unit tests drive the integration of the output of many developers operating on one repository. Integration tests (poorly named) drive the integration of components in the stack - components often written by separate teams. The class of problems exposed by integration tests is typically more time-consuming to fix.
So pragmatically, it really boils down to where you most need speed in your own org/process.
The thing that distinguishes Unit tests and Integration tests is the number of parts required for the test to run.
Unit tests (theoretically) require very few (or no) other parts to run.
Integration tests (theoretically) require lots of (or all) other parts to run.
Integration tests test behaviour AND the infrastructure. Unit tests generally only test behaviour.
So, unit tests are good for testing some stuff, integration tests for other stuff.
So, why unit test?
For instance, it is very hard to test boundary conditions when integration testing. Example: a back-end function expects a positive integer or 0, but the front end does not allow entry of a negative integer; how do you ensure that the back-end function behaves correctly when you pass a negative integer to it? Maybe the correct behaviour is to throw an exception. This is very hard to do with an integration test.
So, for this, you need a unit test (of the function), as sketched below.
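Such a unit test might look like this (the function and its contract are invented for the sketch); the negative input that the UI can never produce is trivial to supply here:

import org.junit.Test;

public class InventoryServiceTest {
    // Hypothetical back-end function: accepts a quantity >= 0.
    static int reserve(int quantity) {
        if (quantity < 0) throw new IllegalArgumentException("quantity must be >= 0");
        return quantity;
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeQuantities() {
        reserve(-1); // unreachable through the front end, one line in a unit test
    }
}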
Also, unit tests help eliminate problems found during integration tests. In your example above, there are a lot of points of failure for a single HTTP call:
the call from the HTTP client
the servlet validation
the call from the servlet to the business layer
the business layer validation
the database read (hibernate)
the data transformation by the business layer
the database write (hibernate)
the data transformation -> XML
the XSLT transformation -> HTML
the transmission of the HTML -> client
For your integration tests to work, you need ALL of these processes to work correctly. For a unit test of the servlet validation, you need only one: the servlet validation itself (which can be independent of everything else). A problem in one layer becomes easier to track down.
You need both Unit tests AND integration tests.
Unit tests execute methods in a class to verify proper input/output without testing the class in the larger context of your application. You might use mocks to simulate dependent classes -- you're doing black box testing of the class as a stand alone entity. Unit tests should be runnable from a developer workstation without any external service or software requirements.
Integration tests will include other components of your application and third party software (your Oracle dev database, for example, or Selenium tests for a webapp). These tests might still be very fast and run as part of a continuous build, but because they inject additional dependencies they also risk injecting new bugs that cause problems for your code but are not caused by your code. Preferably, integration tests are also where you inject real/recorded data and assert that the application stack as a whole is behaving as expected given those inputs.
The question comes down to what kind of bugs you're looking to find and how quickly you hope to find them. Unit tests help to reduce the number of "simple" mistakes while integration tests help you ferret out architectural and integration issues, hopefully simulating the effects of Murphy's Law on your application as a whole.
Joel Spolsky has written a very interesting article about unit testing (it was a dialogue between Joel and another developer).
The main idea was that unit tests are a very good thing, but only if you use them in "limited" quantity. Joel doesn't recommend trying to achieve a state where 100% of your code is covered by test cases.
The problem with unit tests is that when you want to change the architecture of your application, you'll have to change all the corresponding unit tests. And that will take a lot of time (maybe even more time than the refactoring itself). And after all that work, only a few tests will fail.
So, write tests only for code that really can cause trouble.
How I use unit tests: I don't like TDD, so I first write the code, then I test it (using the console or browser) just to be sure that it does the necessary work. Only after that do I add "tricky" tests - 50% of them fail on the first run.
It works and it doesn't take much time.
We have 4 different types of tests in our project:
Unit tests with mocking where necessary
DB tests that act similarly to unit tests but touch the DB and clean up afterwards
Our logic is exposed through REST, so we have tests that do HTTP
Webapp tests using WatiN that actually use IE instance and go over major functionality
I like unit tests. They run really fast (100-1000x faster than #4 tests). They are type-safe, so refactoring is quite easy (with a good IDE).
The main problem is how much work is required to do them properly. You have to mock everything: DB access, network access, other components. You have to decorate unmockable classes, ending up with a zillion mostly useless classes. You have to use DI, because tightly coupled components are not testable (note that using DI is not actually a downside :)
I like tests #2. They do use the database and will report database errors, constraint violations and invalid columns. I think we get valuable testing using this.
#3 and especially #4 are more problematic. They require a subset of the production environment on the build server. You have to build, deploy and have the app running. You have to have a clean DB every time. But in the end, it pays off. WatiN tests require constant work, but you also get constant testing. We run tests on every commit and it is very easy to see when we break something.
So, back to your question. Unit tests are fast (which is very important; build time should be less than, say, 10 minutes) and they are easy to refactor - much easier than rewriting the whole WatiN suite if your design changes. If you use a nice editor with a good find-usages command (e.g. IDEA or VS.NET + ReSharper), you can always find where your code is being tested.
With REST/HTTP tests, you get good validation that your system actually works. But the tests are slow to run, so it is hard to have complete validation at this level. I assume your methods accept multiple parameters or possibly XML input. To check each node in the XML or each parameter would take tens or hundreds of calls. You can do that with unit tests, but you cannot do that with REST calls, when each one can take a big fraction of a second.
Our unit tests check special boundary conditions far more often than the #3 tests do. The #3 tests check that the main functionality is working, and that's it. This seems to work pretty well for us.
As many have mentioned, integration tests will tell you whether your system works, and unit tests will tell you where it doesn't. Strictly from a testing perspective, these two kinds of tests complement each other.
I can't see how adding a bunch more unit tests on individual classes would help. Wouldn't that make it harder to refactor as it's much more likely we will need to throw away and re-write tests?
No. It will make refactoring easier and better, and make it clearer to see what refactorings are appropriate and relevant. This is why we say that TDD is about design, not about testing. It's quite common for me to write a test for one method and, in figuring out how to express what that method's result should be, to come up with a very simple implementation in terms of some other method of the class under test. That implementation frequently finds its way into the class under test. Simpler, more solid implementations, cleaner boundaries, smaller methods: TDD - unit tests, specifically - leads you in this direction, and integration tests do not. They're both important, both useful, but they serve different purposes.
Yes, you may find yourself modifying and deleting unit tests on occasion to accommodate refactorings; that's fine, but it's not hard. And having those unit tests - and going through the experience of writing them - gives you better insight into your code, and better design.
Although the setup you described sounds good, unit testing also offers something important. Unit testing offers fine levels of granularity. With loose coupling and dependency injection, you can pretty much test every important case. You can be sure that the units are robust; you can scrutinise individual methods with scores of inputs or interesting things that don't necessarily occur during your integration tests.
E.g. if you want to deterministically see how a class will handle some sort of failure that would require a tricky setup (e.g. network exception when retrieving something from a server) you can easily write your own test double network connection class, inject it and tell it to throw an exception whenever you feel like it. You can then make sure that the class under test gracefully handles the exception and carries on in a valid state.
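For instance, a minimal hand-rolled double along those lines (all types are invented for the sketch):

import static org.junit.Assert.assertTrue;
import java.io.IOException;
import org.junit.Test;

// The seam: the class under test depends on this interface, not on real sockets.
interface ServerConnection {
    String fetch(String resource) throws IOException;
}

// Toy class under test: it must stay usable when the network dies.
class Catalog {
    private final ServerConnection connection;
    private boolean stale = false;

    Catalog(ServerConnection connection) { this.connection = connection; }

    void refresh() {
        try {
            connection.fetch("/items");
        } catch (IOException e) {
            stale = true; // degrade gracefully instead of crashing
        }
    }

    boolean isUsable() { return true; }
    boolean isStale() { return stale; }
}

public class CatalogTest {
    @Test
    public void staysUsableWhenTheNetworkFails() {
        // the test double throws exactly when we want it to
        Catalog catalog = new Catalog(resource -> {
            throw new IOException("simulated network failure");
        });
        catalog.refresh();
        assertTrue(catalog.isUsable());
        assertTrue(catalog.isStale());
    }
}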
You might be interested in this question and the related answers too. There you can find my addition to the answers that were already given here.

How are Mocks meant to be used?

When I originally was introduced to Mocks I felt the primary purpose was to mock up objects that come from external sources of data. This way I did not have to maintain an automated unit testing test database, I could just fake it.
But now I am starting to think of it differently. I am wondering if Mocks are more effective used to completely isolate the tested method from anything outside of itself. The image that keeps coming to mind is the backdrop you use when painting. You want to keep the paint from getting all over everything. I am only testing that method, and I only want to know how it reacts to these faked up external factors?
It seems incredibly tedious to do it this way, but the advantage I am seeing is that when a test fails, it is because that method is broken and not something 16 layers down. But now I have to have 16 tests to get the same testing coverage, because each piece is tested in isolation. Plus, each test becomes more complicated and more deeply tied to the method it is testing.
It feels right to me but it also seems brutal so I kind of want to know what others think.
I recommend you take a look at Martin Fowler's article Mocks Aren't Stubs for a more authoritative treatment of Mocks than I can give you.
The purpose of mocks is to unit test your code in isolation of dependencies so you can truly test a piece of code at the "unit" level. The code under test is the real deal, and every other piece of code it relies on (via parameters or dependency injection, etc) is a "Mock" (an empty implementation that always returns expected values when one of its methods is called.)
Mocks may seem tedious at first, but they make Unit Testing far easier and more robust once you get the hang of using them. Most languages have Mock libraries which make mocking relatively trivial. If you are using Java, I'll recommend my personal favorite: EasyMock.
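For illustration, the basic EasyMock record/replay/verify flow looks roughly like this (AccountDao and AccountService are invented for the sketch):

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

interface AccountDao {
    int balanceOf(String user);
}

class AccountService {
    private final AccountDao dao;
    AccountService(AccountDao dao) { this.dao = dao; }
    int balanceOf(String user) { return dao.balanceOf(user); }
}

public class AccountServiceTest {
    @Test
    public void readsTheBalanceThroughTheDao() {
        AccountDao dao = createMock(AccountDao.class);
        expect(dao.balanceOf("alice")).andReturn(100); // record the expectation
        replay(dao);                                   // switch to replay mode

        assertEquals(100, new AccountService(dao).balanceOf("alice"));
        verify(dao); // fails if the expected call never happened
    }
}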
Let me finish with this thought: you need integration tests too, but having a good volume of unit tests helps you find out which component contains a bug, when one exists.
Don't go down the dark path Master Luke. :) Don't mock everything. You could but you shouldn't... here's why.
If you continue to test each method in isolation, you have surprises and work cut out for you when you bring them all together, a la the BIG BANG. We build objects so that they can work together to solve a bigger problem; by themselves they are insignificant. You need to know that all the collaborators are working as expected.
Mocks make tests brittle by introducing duplication - yes, I know that sounds alarming. For every mock expectation you set up, there are n places where your method signature exists: the actual code, and your mock expectations (in multiple tests). Changing the actual code is easier; updating all the mock expectations is tedious.
Your test is now privy to insider implementation information, so your test depends on how you chose to implement the solution. That's bad. Tests should be an independent spec that can be met by multiple solutions. I should have the freedom to just press delete on a block of code and reimplement it without having to rewrite the test suite, because the requirements stay the same.
To close, I'll say "if it quacks like a duck, walks like a duck, then it probably is a duck" - if it feels wrong, it probably is. Use mocks to abstract out problem children like IO operations, databases, third-party components and the like. Like salt, some of it is necessary; too much and :x
This is the holy war of state-based vs interaction-based testing. Googling will give you deeper insight.
Clarification: I'm hitting some resistance w.r.t. integration tests here :) So to clarify my stand..
Mocks do not figure in the 'Acceptance tests'/Integration realm. You'll only find them in the Unit Testing world.. and that is my focus here.
Acceptance tests are different and are very much needed - not belittling them. But Unit tests and Acceptance tests are different and should be kept different.
All collaborators within a component or package do not need to be isolated from each other; like micro-optimization, that is overkill. They exist to solve a problem together - cohesion.
Yes, I agree. I see mocking as sometimes painful, but often necessary, for your tests to truly become unit tests, i.e. only the smallest unit that you can make your test concerned with is under test. This allows you to eliminate any other factors that could potentially affect the outcome of the test. You do end up with a lot more small tests, but it becomes so much easier to work out where a problem is with your code.
My philosophy is that you should write testable code to fit the tests, not write tests to fit the code.
As for complexity, my opinion is that tests should be simple to write, simply because you write more tests if they are.
I might agree that it could be a good idea if the classes you're mocking don't have a test suite, because if they did have a proper test suite, you would know where the problem is without isolation.
Most of the time I've had use for mock objects is when the code I'm writing tests for is so tightly coupled (read: badly designed) that I have to write mocks because the classes it depends on are not available. Sure, there are valid uses for mock objects, but if your code requires their usage, I would take another look at the design.
Yes, that is the downside of testing with mocks. There is a lot of work that you need to put in, and it can feel brutal. But that is the essence of unit testing. How can you test something in isolation if you don't mock external resources?
On the other hand, you're mocking away slow functionality (such as databases and I/O operations). If the tests run faster, that keeps programmers happy. There is not much more painful than waiting for really slow tests that take more than 10 seconds to finish running while you're trying to implement one feature.
If every developer in your project spent time writing unit tests, then those 16 layers (of indirection) wouldn't be that much of a problem. Hopefully you should have that test coverage from the beginning, right? :)
Also, don't forget to write a functional/integration test between objects in collaboration. Otherwise you might miss something out. These tests won't need to be run often, but they are still important.
On one scale, yes, mocks are meant to be used to simulate external data sources such as a database or a web service. On a more finely grained scale however if you're designing loosely coupled code then you can draw lines throughout your code almost arbitrarily as to what might at any point be an 'outside system'. Take a project I'm working on currently:
When someone attempts to check in, the CheckInUi sends a CheckInInfo object to a CheckInMediator object, which validates it using a CheckInValidator; then, if it is OK, it fills a domain object named Transaction with the CheckInInfo using CheckInInfoAdapter, and passes the Transaction to an instance of ITransactionDao.SaveTransaction() for persistence.
I am right now writing some automated integration tests, and obviously the CheckInUi and ITransactionDao are windows onto external systems, so they're the ones which should be mocked. However, who's to say that at some point CheckInValidator won't be making a call to a web service? That is why, when you write unit tests, you assume that everything other than the specific functionality of your class is an external system. Therefore, in my unit test of CheckInMediator, I mock out all the objects that it talks to.
EDIT: Gishu is technically correct, not everything needs to be mocked. I don't, for example, mock CheckInInfo, since it is simply a container for data. However, anything that you could ever see as an external service (and that is almost anything that transforms data or has side effects) should be mocked. A sketch of such a test follows below.
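Here is roughly what that unit test looks like with Mockito (class names as above; the constructors and method signatures, plus the minimal stand-ins that make the sketch compile, are my assumptions):

import static org.mockito.Mockito.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import org.junit.Test;

// Minimal stand-ins with assumed signatures:
class CheckInInfo { }
class Transaction { }
interface CheckInValidator { boolean validate(CheckInInfo info); }
interface ITransactionDao { void saveTransaction(Transaction t); }
class CheckInMediator {
    private final CheckInValidator validator;
    private final ITransactionDao dao;
    CheckInMediator(CheckInValidator v, ITransactionDao d) { validator = v; dao = d; }
    void checkIn(CheckInInfo info) {
        if (validator.validate(info)) dao.saveTransaction(new Transaction());
    }
}

public class CheckInMediatorTest {
    @Test
    public void savesATransactionWhenCheckInIsValid() {
        // every collaborator of CheckInMediator is mocked...
        CheckInValidator validator = mock(CheckInValidator.class);
        ITransactionDao dao = mock(ITransactionDao.class);
        when(validator.validate(any(CheckInInfo.class))).thenReturn(true);

        // ...but CheckInInfo is just a data container, so a real one is used
        CheckInMediator mediator = new CheckInMediator(validator, dao);
        mediator.checkIn(new CheckInInfo());

        verify(dao).saveTransaction(any(Transaction.class)); // behaviour verification
    }
}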
An analogy that I like is to think of a properly loosely coupled design as a field with people standing around it playing a game of catch. When someone is passed the ball, he might throw a completely different ball to the next person; he might even throw multiple balls in succession to different people, or throw a ball and wait to receive it back before throwing it to yet another person. It is a strange game.
Now as their coach and manager, you of course want to check how your team works as a whole so you have team practice (integration tests) but you also have each player practice on his own against backstops and ball-pitching machines (unit tests with mocks). The only piece that this picture is missing is mock expectations and so we have our balls smeared with black tar so they stain the backstop when they hit it. Each backstop has a 'target area' that the person is aiming for and if at the end of a practice run there is no black mark within the target area you know that something is wrong and the person needs his technique tuned.
Really take the time to learn it properly, the day I understood Mocks was a huge a-ha moment. Combine it with an inversion of control container and I'm never going back.
On a side note, one of our IT people just came in and gave me a free laptop!
As someone said before, if you mock everything, isolating more granularly than the class you are testing, you give up enforcing cohesion in the code that is under test.
Keep in mind that mocking has a fundamental advantage: behaviour verification. This is something that stubs don't provide, and it is another reason that tests become more brittle (but it can improve code coverage).
Mocks were invented in part to answer the question: How would you unit test objects if they had no getters or setters?
These days, recommended practice is to mock roles, not objects. Use mocks as a design tool to talk about collaboration and separation of responsibilities, not as "smart stubs".
Mock objects are 1) often used as a means to isolate the code under test, BUT 2) as keithb already pointed out, are important to "focus on the relationships between collaborating objects". This article gives some insights and history related to the subject: Responsibility Driven Design with Mock Objects.